## Documentation Index

Fetch the complete documentation index at: https://docs.fallom.com/llms.txt. Use this file to discover all available pages before exploring further.
## Get Your API Key

Go to Settings → API Keys and copy your key.
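The snippets below read the key from the `FALLOM_API_KEY` environment variable, so export it in your shell first (the value shown is a placeholder, not a real key):

```shell
# Store the key where the SDK snippets can read it
# (they use process.env.FALLOM_API_KEY / os.environ["FALLOM_API_KEY"]).
export FALLOM_API_KEY="your-api-key-here"  # placeholder value
```

For production, set this in your deployment environment's secret store rather than a shell profile.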
## Install the SDK

```shell
npm install @fallom/trace
```
## Add 3 Lines of Code

```typescript
import fallom from "@fallom/trace";
import OpenAI from "openai";

// 1. Initialize once at startup
await fallom.init({ apiKey: process.env.FALLOM_API_KEY });

// 2. Create a session (groups related calls)
const session = fallom.session({
  configKey: "my-app",
  sessionId: "user-123",
});

// 3. Wrap your client
const openai = session.wrapOpenAI(new OpenAI());

// Done! All calls are now traced
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```
```python
import os

import fallom
from openai import OpenAI

# 1. Initialize once at startup
fallom.init(api_key=os.environ["FALLOM_API_KEY"])

# 2. Create a session (groups related calls)
session = fallom.session(
    config_key="my-app",
    session_id="user-123",
)

# 3. Wrap your client
openai = session.wrap_openai(OpenAI())

# Done! All calls are now traced
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
## What You Get

Every LLM call is automatically captured:

| Field | Description |
|---|---|
| Model | Which model was used |
| Tokens | Input, output, and cached counts |
| Latency | Request duration + time to first token |
| Cost | Calculated from token usage |
| Prompts | Full messages sent |
| Completions | Model responses |
| Session | Grouped by user/conversation |
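To illustrate how a cost figure can be derived from token counts, here is a minimal sketch. The per-token rates and the `estimateCost` helper are illustrative placeholders, not Fallom's actual pricing logic:

```typescript
// Illustrative cost calculation from token usage.
// The rates below are hypothetical placeholders (USD per 1M tokens),
// not Fallom's real pricing table.
const RATES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10.0 },
};

function estimateCost(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`no rate configured for ${model}`);
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}

// e.g. 1,000 input tokens + 500 output tokens at the placeholder rates
console.log(estimateCost("gpt-4o", 1000, 500).toFixed(6)); // "0.007500"
```

In practice the dashboard does this for you from the captured token counts, so no client-side calculation is needed.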
## View Your Traces

Open the dashboard to see your LLM calls in real time.
## Next Steps

- **Tracing**: Custom spans, metadata, and advanced tracing
- **Model A/B Testing**: Test different models in production
- **Evals**: Run evaluations on your outputs
- **Integrations**: Anthropic, OpenRouter, Vercel AI SDK, and more