AI development is making significant strides, particularly with the rise of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) applications. As developers strive to create more robust and reliable AI systems, tools that facilitate evaluation and monitoring have become essential. One such tool is Opik, an open-source platform designed to streamline the evaluation, testing, and monitoring of LLM applications. This article shows how to evaluate and monitor LLM and RAG applications with Opik.
Opik is an open-source LLM evaluation and monitoring platform by Comet. It allows you to log, review, and evaluate your LLM traces in development and production. You can also use the platform and its LLM-as-a-Judge evaluators to identify and fix issues with your LLM application.
Evaluating LLMs and RAG systems goes beyond testing for accuracy. It includes factors like answer relevancy, correctness, context precision, and the avoidance of hallucinations. Tools like Opik and Ragas allow teams to measure these factors, track performance, detect issues, and improve output quality.
Here are the key features of Opik:
- End-to-end LLM evaluation across the full application pipeline
- Real-time monitoring of traces in development and production
- Seamless integration with testing frameworks such as Pytest
- A user-friendly interface, with both a Python SDK and a web UI
Opik is designed to integrate seamlessly with LLM providers such as OpenAI’s GPT models, letting you log traces, evaluate results, and monitor performance at every step of the pipeline. Here’s how to begin.
You can install Opik using pip:
!pip install --upgrade --quiet opik openai
import opik
opik.configure(use_local=False)
import os
import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
Comet provides a hosted version of the Opik platform. You can create an account and grab your API key.
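As an alternative to the interactive prompt from opik.configure, you can provide the key via environment variables before configuring. This is a minimal sketch, assuming the hosted Comet deployment and the standard OPIK_API_KEY and OPIK_WORKSPACE variables; the workspace name is a placeholder you should replace with your own.
import os
import getpass
# Provide the Opik API key for the hosted platform (assumed variable names)
if "OPIK_API_KEY" not in os.environ:
    os.environ["OPIK_API_KEY"] = getpass.getpass("Enter your Opik API key: ")
# Placeholder workspace name; use your own Comet workspace here
os.environ["OPIK_WORKSPACE"] = "your-workspace-name"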
Opik integrates with OpenAI to provide a simple way to log traces for all OpenAI LLM calls.
from opik.integrations.openai import track_openai
from openai import OpenAI
os.environ["OPIK_PROJECT_NAME"] = "openai-integration-demo"
client = OpenAI()
openai_client = track_openai(client)
prompt = """
Write a short two sentence story about Opik.
"""
completion = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt}
    ]
)
print(completion.choices[0].message.content)
To log traces to Opik, we wrap our OpenAI calls with the track_openai function. This example shows how to set up an OpenAI client wrapped by Opik for trace logging and how to create a chat completion request with a simple prompt. The prompt and response messages are automatically logged to Opik and can be viewed in the UI.
from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI
os.environ["OPIK_PROJECT_NAME"] = "openai-integration-demo"
client = OpenAI()
openai_client = track_openai(client)
@track
def generate_story(prompt):
    res = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return res.choices[0].message.content
@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    res = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return res.choices[0].message.content
@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story
generate_opik_story()
If you have multiple steps in your LLM pipeline, you can use the track decorator to log the traces for each step.
If OpenAI is called within one of these steps, the LLM call will be associated with that corresponding step.
This example demonstrates how to log traces for multiple steps in a process using the @track decorator, capturing the flow from topic generation to story generation.
!pip install --quiet --upgrade opik ragas
import opik
opik.configure(use_local=False)
Example for setting an API key:
import os
import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
Ragas provides a set of metrics that can be used to evaluate the quality of a RAG pipeline, including but not limited to: answer_relevancy, answer_similarity, answer_correctness, context_precision, context_recall, context_entity_recall, and summarization_score.
You can find a full list of metrics in the Ragas documentation.
These metrics can be computed on the fly and logged to traces or spans in Opik. For this example, we will start by creating a simple RAG pipeline and then scoring it using the answer_relevancy metric.
# Import the metric
from ragas.metrics import AnswerRelevancy
# Import some additional dependencies
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
# Initialize the Ragas metric
llm = LangchainLLMWrapper(ChatOpenAI())
emb = LangchainEmbeddingsWrapper(OpenAIEmbeddings())
answer_relevancy_metric = AnswerRelevancy(llm=llm, embeddings=emb)
To use a Ragas metric without the evaluate function, you need to initialize it with a RunConfig object and an LLM provider; in the code above, we used LangChain as the LLM provider. Next, we define a scoring function that runs the metric with the Opik tracer enabled.
# Run this cell first if you are running this in a Jupyter notebook
import nest_asyncio
nest_asyncio.apply()
import asyncio
from ragas.integrations.opik import OpikTracer
from ragas.dataset_schema import SingleTurnSample
import os
os.environ["OPIK_PROJECT_NAME"] = "ragas-integration"
# Define the scoring function
def compute_metric(metric, row):
    row = SingleTurnSample(**row)
    opik_tracer = OpikTracer(tags=["ragas"])

    async def get_score(opik_tracer, metric, row):
        score = await metric.single_turn_ascore(row, callbacks=[opik_tracer])
        return score

    # Run the async function using the current event loop
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(get_score(opik_tracer, metric, row))
    return result
# Score a simple example
row = {
    "user_input": "What is the capital of France?",
    "response": "Paris",
    "retrieved_contexts": ["Paris is the capital of France.", "Paris is in France."],
}
score = compute_metric(answer_relevancy_metric, row)
print("Answer Relevancy score:", score)
If you now navigate to Opik, you will see that a new trace has been created in the Default Project.
You can use the update_current_trace function to score traces.
This method has the benefit of adding the scoring span to the trace, enabling a more in-depth examination of the RAG process. However, because it calculates the Ragas metric synchronously, it might not be appropriate for use in production scenarios.
from opik import track, opik_context
@track
def retrieve_contexts(question):
    # Define the retrieval function; in this case we hard-code the contexts
    return ["Paris is the capital of France.", "Paris is in France."]
@track
def answer_question(question, contexts):
    # Define the answer function; in this case we hard-code the answer
    return "Paris"
@track(name="Compute Ragas metric score", capture_input=False)
def compute_rag_score(answer_relevancy_metric, question, answer, contexts):
    # Define the score function
    row = {"user_input": question, "response": answer, "retrieved_contexts": contexts}
    score = compute_metric(answer_relevancy_metric, row)
    return score
@track
def rag_pipeline(question):
    # Define the pipeline
    contexts = retrieve_contexts(question)
    answer = answer_question(question, contexts)
    score = compute_rag_score(answer_relevancy_metric, question, answer, contexts)
    opik_context.update_current_trace(
        feedback_scores=[{"name": "answer_relevancy", "value": round(score, 4)}]
    )
    return answer
rag_pipeline("What is the capital of France?")
from datasets import load_dataset
from ragas.metrics import context_precision, answer_relevancy, faithfulness
from ragas import evaluate
from ragas.integrations.opik import OpikTracer
fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")
# Reformat the dataset to match the schema expected by the Ragas evaluate function
dataset = fiqa_eval["baseline"].select(range(3))
dataset = dataset.map(
    lambda x: {
        "user_input": x["question"],
        "reference": x["ground_truths"][0],
        "retrieved_contexts": x["contexts"],
    }
)
opik_tracer_eval = OpikTracer(tags=["ragas_eval"], metadata={"evaluation_run": True})
result = evaluate(
    dataset,
    metrics=[context_precision, faithfulness, answer_relevancy],
    callbacks=[opik_tracer_eval],
)
print(result)
If you want to assess a whole dataset, you can use Ragas’s evaluate function. When this function is invoked, the Ragas library computes the metrics for every row in the dataset and returns a summary of the results. The OpikTracer callback passed in above logs the evaluation results to the Opik platform.
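To inspect per-row scores rather than only the aggregate summary, you can convert the result to a pandas DataFrame. This is a small sketch that assumes the result object returned by Ragas exposes a to_pandas() helper:
# Convert the Ragas evaluation result to a DataFrame for per-row inspection
results_df = result.to_pandas()
print(results_df.head())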
Evaluating your LLM application gives you confidence in its performance. This evaluation is often performed both during development and as part of testing the application.
The evaluation is done in five steps:
1. Add tracking to your LLM application
2. Define the evaluation task
3. Choose the dataset you want to evaluate your application on
4. Choose the metrics you want to use
5. Create and run the evaluation experiment
from opik import track
from opik.integrations.openai import track_openai
import openai
openai_client = track_openai(openai.OpenAI())
# This method is the LLM application that you want to evaluate
# Typically, this is not updated when creating evaluations
@track
def your_llm_application(input: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": input}],
    )
    return response.choices[0].message.content
@track
def your_context_retriever(input: str) -> list[str]:
    return ["..."]
This ensures that responses from the model and context retrieval processes are tracked during evaluation.
def evaluation_task(x: DatasetItem):
    return {
        "input": x.input['user_question'],
        "output": your_llm_application(x.input['user_question']),
        "context": your_context_retriever(x.input['user_question']),
    }
This method is used to structure the evaluation data for further analysis.
If you have already created a dataset, you can use the Opik.get_dataset function to fetch it:
from opik import Opik
client = Opik()
dataset = client.get_dataset(name="your-dataset-name")
If you don’t have a dataset yet, you can create one using the Opik.create_dataset function:
from opik import Opik
from opik.datasets import DatasetItem
client = Opik()
dataset = client.create_dataset(name="your-dataset-name")
dataset.insert([
    DatasetItem(input="Hello, world!", expected_output="Hello, world!"),
    DatasetItem(input="What is the capital of France?", expected_output="Paris"),
])
In the same evaluation experiment, you can use multiple metrics to evaluate your application:
from opik.evaluation.metrics import Equals, Hallucination
equals_metric = Equals()
hallucination_metric = Hallucination()
Opik provides a set of built-in evaluation metrics you can choose from. These are broken down into two main categories: heuristic metrics such as Equals, which compare the output against a reference deterministically, and LLM-as-a-Judge metrics such as Hallucination, which use an LLM to score the output.
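Before running a full experiment, it can help to sanity-check a metric on a single hand-written example. The sketch below assumes the Opik metrics expose a score() method returning a result with a value attribute; the sample input, output, and context are made up for illustration.
# Hedged sketch: score a single made-up example with each metric
equals_result = equals_metric.score(output="Paris", reference="Paris")
hallucination_result = hallucination_metric.score(
    input="What is the capital of France?",
    output="Paris",
    context=["Paris is the capital of France."],
)
print(equals_result.value, hallucination_result.value)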
Now that we have the task we want to evaluate, the dataset to evaluate on, and the metrics we want to evaluate with, we can run the evaluation:
from opik.evaluation import evaluate
evaluation = evaluate(
    experiment_name="My experiment",
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    experiment_config={"model": "gpt-3.5-turbo"},
)
Opik represents a significant advancement in the tools available for evaluating and monitoring LLM applications. By offering comprehensive features for tracing, evaluating, and debugging LLMs within a user-friendly framework, it helps developers confidently build trustworthy AI systems. As AI technology advances, tools like Opik will be critical in ensuring these systems operate effectively and reliably in real-world applications.
Also, if you are looking for a Generative AI course online, explore the GenAI Pinnacle Program.
Frequently Asked Questions
Q1. What is Opik?
Ans. Opik is an open-source platform developed by Comet to evaluate and monitor LLM (Large Language Model) applications. It helps developers log, trace, and evaluate LLMs to identify and fix issues in both development and production environments.
Q2. Why is it important to evaluate LLMs and RAG systems?
Ans. Evaluating LLMs and RAG (Retrieval-Augmented Generation) systems ensures more than just accuracy. It covers answer relevancy, context precision, and avoidance of hallucinations, which helps track performance, detect issues, and improve output quality.
Q3. What are the key features of Opik?
Ans. Opik offers features such as end-to-end LLM evaluation, real-time monitoring, seamless integration with testing frameworks like Pytest, and a user-friendly interface, supporting both a Python SDK and graphical interaction.
Q4. How does Opik integrate with OpenAI?
Ans. Opik allows you to log traces for OpenAI LLM calls by wrapping them with the track_openai function. This logs each interaction for deeper analysis and debugging of LLM behavior, providing insights into how models respond to different prompts.
Q5. How does Opik work with Ragas?
Ans. Opik integrates with Ragas, allowing users to evaluate and monitor RAG systems. Metrics such as answer relevancy and context precision can be computed on the fly and logged into Opik, helping to trace and improve RAG system performance.