Evaluating and Monitoring LLM & RAG Applications with Opik

Janvi Kumari 09 Oct, 2024
10 min read

Introduction

AI development is making significant strides, particularly with the rise of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) applications. As developers strive to create more robust and reliable AI systems, tools that facilitate evaluation and monitoring have become essential. One such tool is Opik, an open-source platform designed to streamline the evaluation, testing, and monitoring of LLM applications. This article shows how to evaluate and monitor LLM and RAG applications with Opik.

Overview

  1. Opik is an open-source platform for evaluating and monitoring LLM applications developed by Comet.
  2. It enables logging and tracing of LLM interactions, helping developers identify and fix issues in real time.
  3. Evaluating LLMs is crucial for ensuring accuracy, relevancy and avoiding hallucinations in model outputs.
  4. Opik supports integration with frameworks like Pytest, making it easier to run reusable evaluation pipelines.
  5. The platform offers both Python SDK and a user interface, catering to a wide range of user preferences.
  6. Opik can be used with Ragas to monitor and evaluate RAG systems by computing metrics like answer relevancy and context precision.

What is Opik?

Opik is an open-source LLM evaluation and monitoring platform by Comet. It allows you to log, review, and evaluate your LLM traces in development and production. You can also use the platform and its LLM-as-a-Judge evaluators to identify and fix issues with your LLM application.

Figure: Opik by Comet (Source: Opik GitHub)

Why is Evaluation Important?

Evaluating LLMs and RAG systems goes beyond testing for accuracy. It includes factors like answer relevancy, correctness, context precision, and avoiding hallucinations. Tools like Opik and Ragas allow teams to:

  • Track LLM performance in real-time, identifying bottlenecks and areas where the system may generate incorrect or irrelevant outputs.
  • Evaluate RAG pipelines, ensuring that the retrieval system provides accurate, relevant, and complete information for the tasks at hand.

Key Features of Opik

Here are the key features of Opik:

1. End-to-End LLM Evaluation

  • Opik automatically traces the entire LLM pipeline, providing insights into each component of the application. This capability is crucial for debugging and understanding how different parts of the system interact.
  • It supports complex evaluations out-of-the-box, allowing developers to implement metrics that assess model performance quickly.

2. Real-Time Monitoring

  • The platform enables real-time monitoring of LLM applications, which helps in identifying unintended behaviors and performance issues as they occur.
  • Developers can log interactions with their LLM applications and review these logs to continuously improve understanding and performance.

3. Integration with Testing Frameworks

  • Opik integrates seamlessly with popular testing frameworks like Pytest, allowing for “model unit tests” (a minimal example is sketched after this list). This feature facilitates the creation of reusable evaluation pipelines that can be applied across various applications.
  • Developers can store evaluation datasets within the platform and run assessments using built-in metrics for hallucination detection and other important measures.
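
To make this concrete, here is a minimal sketch of what a Pytest-style “model unit test” could look like. It simply wraps a traced application function in an ordinary test; the function and test names are illustrative rather than part of the Opik API, and a real suite would typically score outputs with Opik's evaluation metrics instead of a plain substring assertion.

import openai
from opik import track
from opik.integrations.openai import track_openai

# Wrap the OpenAI client so every call made inside a test is logged as a trace.
openai_client = track_openai(openai.OpenAI())

@track
def answer_question(question: str) -> str:
    # Illustrative application function; each call shows up as a trace in Opik,
    # so failing tests can be inspected in the UI afterwards.
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def test_capital_of_france():
    # Run with `pytest`; the assertion is a deliberately simple deterministic check.
    answer = answer_question("What is the capital of France? Answer with one word.")
    assert "Paris" in answer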

4. User-Friendly Interface

  • The platform offers both a Python SDK for developers who prefer coding and a user interface for those who favor graphical interaction. This dual approach makes it accessible to a wider range of users.

Getting Started with Opik

Opik is designed to integrate seamlessly with LLM systems such as OpenAI’s GPT models. This allows you to log traces, evaluate results, and monitor performance at every step of the pipeline. Here’s how to begin.

Log traces for OpenAI LLM calls – Setup Environment

  1. Create an Opik Account: Head over to Comet and create an account. You will need an API key to log traces.
  2. Logging Traces for OpenAI LLM Calls: Opik allows you to log traces for OpenAI calls by wrapping them with the track_openai function. This ensures that every interaction with the LLM is logged, enabling fine-grained analysis.

Installation

You can install Opik using pip:

!pip install --upgrade --quiet opik openai

import opik

opik.configure(use_local=False)

import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

Opik integrates with OpenAI to provide a simple way to log traces for all OpenAI LLM calls.

Comet provides a hosted version of the Opik platform. You can create an account and grab your API Key.
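
If you prefer not to be prompted interactively, you can also configure the SDK up front. Here is a minimal sketch that assumes the hosted platform reads OPIK_API_KEY and OPIK_WORKSPACE environment variables; the values are placeholders.

import os
import opik

# Placeholders: replace with your own Comet API key and workspace name.
# The environment variable names are assumed from the hosted Opik setup.
os.environ["OPIK_API_KEY"] = "YOUR_COMET_API_KEY"
os.environ["OPIK_WORKSPACE"] = "YOUR_WORKSPACE_NAME"

opik.configure(use_local=False)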

Log traces for OpenAI LLM calls – Logging traces

from opik.integrations.openai import track_openai
from openai import OpenAI

os.environ["OPIK_PROJECT_NAME"] = "openai-integration-demo"

client = OpenAI()
openai_client = track_openai(client)

prompt = """
Write a short two sentence story about Opik.
"""

completion = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt}
    ]
)

print(completion.choices[0].message.content)

In order to log traces to Opik, we need to wrap our OpenAI calls with the track_openai function.

This example shows how to set up an OpenAI client wrapped by Opik for trace logging and create a chat completion request with a simple prompt.

The prompt and response messages are automatically logged to Opik and can be viewed in the UI.

Screenshot: the logged trace in the Opik UI (Opik by Comet)

Log traces for OpenAI LLM calls – Logging multi-step traces

from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI

os.environ["OPIK_PROJECT_NAME"] = "openai-integration-demo"

client = OpenAI()
openai_client = track_openai(client)

@track
def generate_story(prompt):
    res = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return res.choices[0].message.content

@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    res = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return res.choices[0].message.content

@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story

generate_opik_story()

If you have multiple steps in your LLM pipeline, you can use the track decorator to log the traces for each step.

If OpenAI is called within one of these steps, the LLM call will be associated with that corresponding step.

This example demonstrates how to log traces for multiple steps in a process using the @track decorator, capturing the flow from topic generation to story generation.

Screenshot: the multi-step trace in the Opik UI (Opik by Comet)

Opik with Ragas for Monitoring and Evaluating RAG Systems

!pip install --quiet --upgrade opik ragas

import opik

opik.configure(use_local=False)

  • There are two main ways to use Opik with Ragas:
    • Using Ragas metrics to score traces.
    • Using the Ragas evaluate function to score a dataset.
  • Comet provides a hosted version of the Opik platform. You can create an account and grab your API key from there.

Example for setting an API key:

import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

Creating a simple RAG pipeline Using Ragas Metrics

Ragas provides a set of metrics that can be used to evaluate the quality of a RAG pipeline, including but not limited to: answer_relevancy, answer_similarity, answer_correctness, context_precision, context_recall, context_entity_recall, and summarization_score.

You can find a full list of metrics in the Ragas documentation.

These metrics can be computed on the fly and logged to traces or spans in Opik. For this example, we will start by creating a simple RAG pipeline and then scoring it using the answer_relevancy metric.

# Import the metric
from ragas.metrics import AnswerRelevancy

# Import some additional dependencies
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# Initialize the Ragas metric
llm = LangchainLLMWrapper(ChatOpenAI())
emb = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

answer_relevancy_metric = AnswerRelevancy(llm=llm, embeddings=emb)

To use a Ragas metric without the evaluate function, you need to initialize it with a RunConfig object and an LLM provider. For this example, we use LangChain as the LLM provider, with the Opik tracer enabled. The code above initializes the answer_relevancy metric; next, we define a scoring function.

# Run this cell first if you are running this in a Jupyter notebook
import nest_asyncio

nest_asyncio.apply()

import asyncio
import os

from ragas.integrations.opik import OpikTracer
from ragas.dataset_schema import SingleTurnSample

os.environ["OPIK_PROJECT_NAME"] = "ragas-integration"

# Define the scoring function
def compute_metric(metric, row):
    row = SingleTurnSample(**row)

    opik_tracer = OpikTracer(tags=["ragas"])

    async def get_score(opik_tracer, metric, row):
        score = await metric.single_turn_ascore(row, callbacks=[opik_tracer])
        return score

    # Run the async function using the current event loop
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(get_score(opik_tracer, metric, row))
    return result

  • Once the metric is initialized, you can use it to score a sample question.
  • To do that, first we need to define a scoring function that can take in a record of data with input, context, etc., and score it using the metric we defined earlier.
  • Given that the metric scoring is done asynchronously, you need to use the asyncio library to run the scoring function.

# Score a simple example
row = {
    "user_input": "What is the capital of France?",
    "response": "Paris",
    "retrieved_contexts": ["Paris is the capital of France.", "Paris is in France."],
}

score = compute_metric(answer_relevancy_metric, row)
print("Answer Relevancy score:", score)

If you now navigate to Opik, you will be able to see that a new trace has been created in the Default Project.

You can use the update_current_trace function to score traces.

This method has the benefit of adding the scoring span to the trace, enabling a more in-depth examination of the RAG process. However, because it calculates the Ragas metric synchronously, it might not be appropriate for use in production scenarios.

from opik import track, opik_context

@track
def retrieve_contexts(question):
    # Define the retrieval function, in this case we will hard code the contexts
    return ["Paris is the capital of France.", "Paris is in France."]

@track
def answer_question(question, contexts):
    # Define the answer function, in this case we will hard code the answer
    return "Paris"

@track(name="Compute Ragas metric score", capture_input=False)
def compute_rag_score(answer_relevancy_metric, question, answer, contexts):
    # Define the score function
    row = {"user_input": question, "response": answer, "retrieved_contexts": contexts}
    score = compute_metric(answer_relevancy_metric, row)
    return score

@track
def rag_pipeline(question):
    # Define the pipeline
    contexts = retrieve_contexts(question)
    answer = answer_question(question, contexts)
    score = compute_rag_score(answer_relevancy_metric, question, answer, contexts)

    opik_context.update_current_trace(
        feedback_scores=[{"name": "answer_relevancy", "value": round(score, 4)}]
    )

    return answer

rag_pipeline("What is the capital of France?")

Evaluating datasets

from datasets import load_dataset
from ragas.metrics import context_precision, answer_relevancy, faithfulness
from ragas import evaluate
from ragas.integrations.opik import OpikTracer

fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")

# Reformat the dataset to match the schema expected by the Ragas evaluate function
dataset = fiqa_eval["baseline"].select(range(3))

dataset = dataset.map(
    lambda x: {
        "user_input": x["question"],
        "reference": x["ground_truths"][0],
        "retrieved_contexts": x["contexts"],
    }
)

opik_tracer_eval = OpikTracer(tags=["ragas_eval"], metadata={"evaluation_run": True})

result = evaluate(
    dataset,
    metrics=[context_precision, faithfulness, answer_relevancy],
    callbacks=[opik_tracer_eval],
)

print(result)

If you want to assess an entire dataset, you can use the Ragas evaluate function. When this function is invoked, the Ragas library computes the metrics for every row in the dataset and returns a summary of the results.

Use the OpikTracer callback, as shown above, to log the evaluation results to the Opik platform.

Evaluating LLM Applications with Opik

Evaluating your LLM application gives you confidence in its performance. This evaluation is often performed both during development and as part of the testing of an application.

The evaluation is done in five steps:

  1. Add tracing to your LLM application.
  2. Define the evaluation task.
  3. Choose the dataset on which you would like to evaluate your application.
  4. Choose the metrics that you would like to evaluate your application with.
  5. Create and run the evaluation experiment.

Add tracing to your LLM application

from opik import track
from opik.integrations.openai import track_openai
import openai

openai_client = track_openai(openai.OpenAI())

# This method is the LLM application that you want to evaluate
# Typically, this is not updated when creating evaluations
@track
def your_llm_application(input: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": input}],
    )
    return response.choices[0].message.content

@track
def your_context_retriever(input: str) -> list[str]:
    return ["..."]

  • While not required, adding tracing to your LLM application is recommended. This allows for full visibility into each evaluation run.
  • The example demonstrates using a combination of the track decorator and the track_openai function to trace the LLM application.

This ensures that responses from the model and context retrieval processes are tracked during evaluation.

Define the evaluation task

from opik.datasets import DatasetItem

def evaluation_task(x: DatasetItem):
    return {
        "input": x.input['user_question'],
        "output": your_llm_application(x.input['user_question']),
        "context": your_context_retriever(x.input['user_question'])
    }

  • You can define the evaluation task after adding instrumentation to your LLM application.
  • The evaluation task takes a dataset item as input and returns a dictionary. The dictionary includes keys that match the parameters expected by the metrics you are using.
  • In this example, the evaluation_task function retrieves the input from the dataset (x.input["user_question"]), runs it through the LLM application, and retrieves context using the your_context_retriever method.

This method is used to structure the evaluation data for further analysis.

Choose the Evaluation Data

If you have already created a dataset:

You can use the Opik.get_dataset function to fetch it:

Code Example:

from opik import Opik

client = Opik()
dataset = client.get_dataset(name="your-dataset-name")

If you don’t have a dataset yet:

You can create one using the Opik.create_dataset function:

Code Example:

from opik import Opik
from opik.datasets import DatasetItem

client = Opik()
dataset = client.create_dataset(name="your-dataset-name")

dataset.insert([
    DatasetItem(input="Hello, world!", expected_output="Hello, world!"),
    DatasetItem(input="What is the capital of France?", expected_output="Paris"),
])

  • To fetch an existing dataset, use get_dataset with the dataset name.
  • To create a new dataset, use create_dataset, and you can insert data items into the dataset with the insert function.

Choose the Evaluation Metrics

In the same evaluation experiment, you can use multiple metrics to evaluate your application:

from opik.evaluation.metrics import Equals, Hallucination

equals_metric = Equals()
hallucination_metric = Hallucination()

Opik provides a set of built-in evaluation metrics that you can choose from. These are broken down into two main categories:

  1. Heuristic metrics: These are metrics that are deterministic in nature, for example, Equals or Contains.
  2. LLM as a judge: These metrics use an LLM to judge the quality of the output; typically, they are used for detecting hallucinations or assessing context relevance. An example of scoring a single output with such a metric is shown below.
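
For instance, an LLM-as-a-judge metric such as Hallucination can be scored directly on a single input/output pair, outside of a full experiment. This is a minimal sketch; it assumes the metric exposes a score() method accepting input, output, and context, so check the Opik documentation for the exact signature in your SDK version.

from opik.evaluation.metrics import Hallucination

hallucination_metric = Hallucination()

# Score a single response against its retrieved context; the metric uses an LLM judge.
result = hallucination_metric.score(
    input="What is the capital of France?",
    output="The capital of France is London.",
    context=["Paris is the capital of France."],
)

# The returned object carries a numeric value and, for LLM-judged metrics, a reason string.
print(result.value, result.reason)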

Run the evaluation

from opik.evaluation import evaluate

evaluation = evaluate(
    experiment_name="My experiment",
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    experiment_config={"model": Model},  # Model is a placeholder for the model identifier you are evaluating
)

Now that we have the task we want to evaluate, the dataset to evaluate on, and the metrics we want to evaluate with, we can run the evaluation.

Conclusion

Opik represents a significant advancement in the tools available for evaluating and monitoring LLM applications. By offering comprehensive features for tracing, evaluating, and debugging LLMs within a user-friendly framework, it helps developers build trustworthy AI systems with confidence. As AI technology advances, tools like Opik will be critical in ensuring these systems operate effectively and reliably in real-world applications.

Also, if you are looking for a generative AI course online, explore the GenAI Pinnacle Program.

Frequently Asked Questions

Q1. What is Opik?

Ans. Opik is an open-source platform developed by Comet to evaluate and monitor LLM (Large Language Model) applications. It helps developers log, trace, and evaluate LLMs to identify and fix issues in both development and production environments.

Q2. Why is evaluating LLMs important?

Ans. Evaluating LLMs and RAG (Retrieval-Augmented Generation) systems ensures more than just accuracy. It covers answer relevancy, context precision, and avoidance of hallucinations, which helps track performance, detect issues, and improve output quality.

Q3. What are the key features of Opik?

Ans. Opik offers features such as end-to-end LLM evaluation, real-time monitoring, seamless integration with testing frameworks like Pytest, and a user-friendly interface, supporting both Python SDK and graphical interaction.

Q4. How does Opik integrate with OpenAI?

Ans. Opik allows you to log traces for OpenAI LLM calls by wrapping them with the track_openai function. This logs each interaction for deeper analysis and debugging of LLM behavior, providing insights into how models respond to different prompts.

Q5. How can Opik and Ragas be used together?

Ans. Opik integrates with Ragas, allowing users to evaluate and monitor RAG systems. Metrics such as answer relevancy and context precision can be computed on the fly and logged into Opik, helping to trace and improve RAG system performance.


Hi, I am Janvi Kumari, currently an Associate, Insights at Analytics Vidhya, passionate about leveraging data for insights and innovation. Curious, driven, and eager to learn. If you'd like to connect, feel free to reach out to me on LinkedIn.
