I have been exploring Hugging Face’s SmolAgents to build AI agents in a few lines of code, and it has worked well for me. From a research agent to agentic RAG, the experience has been seamless. SmolAgents provides a lightweight, efficient way to create AI agents for tasks such as research assistance and question answering, and its simplicity lets developers focus on agent logic rather than complex configuration. However, debugging multi-agent runs is challenging: workflows are unpredictable, logs are extensive, and many errors are “LLM dumb” mistakes that the model self-corrects in subsequent steps. Finding effective ways to validate and inspect these runs remains a key challenge, and this is where OpenTelemetry comes in handy. Let’s see how it works!
Here’s why debugging an agent run is difficult: the workflow is unpredictable, the logs are long, and many failures are transient model mistakes that vanish on the next step.
Logging means recording what happens during an agent run, which gives you a persistent record to inspect after the fact.
OpenTelemetry is a standard for instrumentation: it provides tools to automatically record (or “log”) what is happening inside your software. Here, it is used to log agent runs.
Logging agent runs is essential because AI agents are complex and unpredictable. Using OpenTelemetry makes it easy to automatically record and monitor what’s happening, so you can debug issues, improve performance, and ensure everything runs smoothly in production.
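To make “recording what happens” concrete, here is a minimal, stdlib-only sketch of the kind of data a tracing span captures. This is an illustration of the concept only, not OpenTelemetry’s actual API; all names here are hypothetical.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    """Simplified stand-in for a tracing span: one timed unit of work."""
    name: str
    trace_id: str                     # groups all spans of one agent run
    parent_id: Optional[str] = None   # links a step to the step that caused it
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    attributes: dict = field(default_factory=dict)
    start: float = field(default_factory=time.time)
    end: Optional[float] = None

    def finish(self):
        self.end = time.time()

# One agent run = one trace; each LLM call or tool call = one child span.
trace_id = uuid.uuid4().hex[:8]
run = Span("agent.run", trace_id, attributes={"input": "GDP question"})
llm_call = Span("llm.call", trace_id, parent_id=run.span_id)
llm_call.finish()
run.finish()
```

With timings, parent links, and attributes recorded for every step, you can reconstruct exactly what an agent did and how long each step took, which is what a tracing backend visualizes for you.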
This script sets up a Python environment with the required libraries and configures OpenTelemetry for tracing. Here’s a step-by-step walkthrough.
First, install the dependencies, import the required modules, and set up OpenTelemetry from the terminal.
!pip install smolagents
!pip install arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents
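For the exporter below to have somewhere to send traces, a Phoenix server must be listening on port 6006. Assuming a recent arize-phoenix release (the entry point may differ across versions), you can start it from a terminal:

```shell
# Start the Phoenix trace collector and UI on http://0.0.0.0:6006
python -m phoenix.server.main serve
```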
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

# Phoenix's OTLP collector endpoint
endpoint = "http://0.0.0.0:6006/v1/traces"

# Export each span to Phoenix as soon as it finishes
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

# Instrument smolagents so every agent run is traced automatically
SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
This line instruments the smolagents library to automatically generate traces using the configured trace_provider.
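What “instrumenting” means can be sketched with a plain decorator: wrap each interesting function so a record is emitted whenever it runs. This is a stdlib-only illustration of the pattern, not SmolagentsInstrumentor’s actual mechanics, and `web_search` is a hypothetical stand-in for a tool call.

```python
import functools
import time

RECORDS = []  # stand-in for a span exporter

def traced(fn):
    """Record the name, duration, and outcome of every call to fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            result = fn(*args, **kwargs)
            RECORDS.append({"name": fn.__name__, "ok": True,
                            "seconds": time.time() - start})
            return result
        except Exception:
            RECORDS.append({"name": fn.__name__, "ok": False,
                            "seconds": time.time() - start})
            raise
    return wrapper

@traced
def web_search(query):  # hypothetical tool function
    return f"results for {query}"

web_search("US GDP growth 2024")
```

An instrumentor applies this kind of wrapping to a whole library for you, so every model call and tool call produces a span without you touching the agent code.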
Traces are sent to http://0.0.0.0:6006/v1/traces; open the Phoenix UI in your browser to inspect your agent’s runs.
from smolagents import (
CodeAgent,
ToolCallingAgent,
ManagedAgent,
DuckDuckGoSearchTool,
VisitWebpageTool,
HfApiModel,
)
# Model served via the Hugging Face Inference API
model = HfApiModel()

# A worker agent that can search the web and read pages
agent = ToolCallingAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=model,
)

# Wrap the worker so a manager agent can delegate to it by name
managed_agent = ManagedAgent(
    agent=agent,
    name="managed_agent",
    description="This is an agent that can do web search.",
)

# The manager writes code and hands web research to the managed agent
manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[managed_agent],
)

manager_agent.run(
    "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
)
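As a sanity check on the example query, the underlying arithmetic is just a doubling-time calculation. Assuming, purely for illustration, an annual growth rate of about 2.8% (the agent would look up the real 2024 figure):

```python
import math

def years_to_double(growth_rate):
    """Years for GDP to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(years_to_double(0.028), 1))  # -> 25.1
```

Having a quick check like this makes it easy to validate the agent’s final answer against the trace of steps it took to get there.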
Here’s how the logs will look:
In conclusion, debugging AI agent runs can be complex due to their unpredictable workflows, extensive logging, and the minor errors that agents self-correct along the way. These challenges highlight the critical role of effective monitoring tools like OpenTelemetry, which provide the visibility and structure needed to streamline debugging, improve performance, and ensure agents operate smoothly. Try it yourself and discover how OpenTelemetry can simplify your AI agent development and debugging process, making it easier to achieve seamless, reliable operations.
Explore the Agentic AI Pioneer Program to deepen your understanding of Agent AI and unlock its full potential. Join us on this journey to discover innovative insights and applications!