Fallom integrates with Agno to automatically trace all of your agent's LLM calls.
Get your API key from the dashboard.

Installation

```bash
pip install fallom agno opentelemetry-instrumentation-openai
```

Quick Start

```python
# Initialize Fallom FIRST, before importing Agno or your LLM provider
import fallom

fallom.init(api_key="your-api-key")

# Now import Agno and your LLM provider
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Set session context for tracing; session_id is any stable
# identifier for this conversation
session_id = "conversation-123"
fallom.trace.set_session("my-agent", session_id)

# Create your Agno agent - LLM calls are automatically traced
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions="You are a helpful assistant.",
)

# All LLM calls within the agent are traced
response = agent.run("What is the capital of France?")
```
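The session id is whatever stable identifier groups one conversation's calls together. One common choice (an assumption here, not a Fallom requirement) is a UUID generated when the conversation starts and reused for every call in it:

```python
import uuid

# Create one id when the conversation starts and reuse it for
# every fallom.trace.set_session call in that conversation
session_id = str(uuid.uuid4())
```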

Model A/B Testing with Agno

Test different models with your Agno agents:

```python
import fallom
from fallom import models

fallom.init(api_key="your-api-key")

from agno.agent import Agent
from agno.models.openai import OpenAIChat

session_id = "conversation-123"  # your conversation identifier

# Get the model assigned to this session (falls back to gpt-4o)
model_id = models.get("agno-agent", session_id, fallback="gpt-4o")

fallom.trace.set_session("agno-agent", session_id)

# Use the assigned model with your agent
agent = Agent(
    model=OpenAIChat(id=model_id),
    instructions="You are a helpful assistant.",
)

response = agent.run("Explain quantum computing")
```
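Sticky per-session assignment like this is typically implemented by hashing the session id, so the same session always sees the same model while traffic splits evenly across variants. A minimal sketch of that idea (illustrative only; Fallom's actual assignment logic may differ):

```python
import hashlib

def assign_variant(config_key: str, session_id: str, variants: list[str]) -> str:
    """Deterministically map a session to one variant (sticky assignment)."""
    digest = hashlib.sha256(f"{config_key}:{session_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same session id always maps to the same model
variants = ["gpt-4o", "gpt-4o-mini"]
first = assign_variant("agno-agent", "sess-1", variants)
second = assign_variant("agno-agent", "sess-1", variants)
```

Because assignment depends only on the inputs, no per-session state needs to be stored to keep a user on the same model.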

Prompt Management with Agno

Use managed prompts for your agent instructions:

```python
import fallom
from fallom import prompts

fallom.init(api_key="your-api-key")

from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Get the managed prompt for agent instructions,
# rendered with the given variables
prompt = prompts.get("agent-instructions", variables={
    "persona": "helpful assistant",
    "domain": "customer support",
})

session_id = "conversation-123"  # your conversation identifier
fallom.trace.set_session("agno-agent", session_id)

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions=prompt.system,
)

response = agent.run("How do I reset my password?")
```
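The `variables` dict fills placeholders in the managed prompt template before it reaches your agent. As an illustration of that substitution (assuming `{{name}}`-style placeholders, which may differ from Fallom's actual template syntax):

```python
def render_prompt(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders, as a managed-prompt service might."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

template = "You are a {{persona}} specializing in {{domain}}."
rendered = render_prompt(template, {
    "persona": "helpful assistant",
    "domain": "customer support",
})
# rendered == "You are a helpful assistant specializing in customer support."
```

Keeping the template server-side means you can edit wording or variables in the dashboard without redeploying the agent.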

Next Steps