If you’re using OpenRouter, you can send traces to Fallom automatically using OpenRouter’s Broadcast feature. No SDK installation required.

Broadcast provides observability only. For Model A/B Testing and Prompt Management, use the Fallom SDK.
What You Get
| Feature | Broadcast | SDK |
| --- | --- | --- |
| Trace logging | ✅ | ✅ |
| Token tracking | ✅ | ✅ |
| Cost tracking | ✅ | ✅ |
| Session grouping | ✅ | ✅ |
| Customer tracking | ✅ | ✅ |
| Model A/B Testing | ❌ | ✅ |
| Prompt Management | ❌ | ✅ |
| Prompt A/B Testing | ❌ | ✅ |
Setup (5 minutes)
1. Get Your Fallom API Key
Go to app.fallom.com/settings and copy your API key.
2. Add Fallom as a Broadcast Destination

Go to OpenRouter Broadcast Settings, click **Add destination**, and select **OTel Collector**. Then enter:

Endpoint:

```
https://broadcast.fallom.com/v1/traces
```

Headers:

```json
{
  "Authorization": "Bearer YOUR_FALLOM_API_KEY"
}
```

Click **Save**.
That’s it! All your OpenRouter requests will now appear in Fallom.
You can also set broadcast headers on each request instead of configuring in OpenRouter’s dashboard:
```typescript
import OpenAI from "openai";

const openrouter = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: "https://openrouter.ai/api/v1",
  defaultHeaders: {
    "X-Broadcast-URL": "https://broadcast.fallom.com/v1/traces",
    "X-Broadcast-Auth": "Bearer YOUR_FALLOM_API_KEY",
  },
});

const response = await openrouter.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  // @ts-ignore - OpenRouter-specific fields
  session_id: "conversation-123",
  user: "customer-456",
});
```
This approach is useful for testing or when you want different API keys per environment.
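If you configure broadcast headers in several places, it can help to build them once. A minimal sketch, assuming the `X-Broadcast-*` header convention shown above; the `fallomBroadcastHeaders` helper name is ours, not part of either API:

```typescript
// Hypothetical helper: builds the per-request broadcast headers
// expected by OpenRouter's X-Broadcast-* convention.
function fallomBroadcastHeaders(fallomApiKey: string): Record<string, string> {
  return {
    "X-Broadcast-URL": "https://broadcast.fallom.com/v1/traces",
    "X-Broadcast-Auth": `Bearer ${fallomApiKey}`,
  };
}
```

Pass the result as `defaultHeaders` when constructing the OpenAI client, so every environment reads its Fallom key from one place.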
Adding Session & User Tracking
Include session_id and user in your OpenRouter request body :
**Fetch**

```typescript
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }],
    // These get included in your Fallom traces
    session_id: "conversation-123", // Groups related requests
    user: "customer-456", // Identifies end users
  }),
});
```
**OpenAI SDK**

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  // @ts-ignore - OpenRouter-specific fields
  session_id: "conversation-123",
  user: "customer-456",
});
```
**Vercel AI SDK**

Vercel AI SDK doesn’t pass custom body fields to OpenRouter. For session tracking with Vercel AI SDK + Broadcast, use the Fallom SDK wrapper or the OpenAI SDK example above.

```typescript
// Recommended: use the Fallom SDK session wrapper for Vercel AI SDK
import fallom from "@fallom/trace";
import * as ai from "ai";
import { createOpenAI } from "@ai-sdk/openai";

await fallom.init({ apiKey: "your-fallom-api-key" });

const session = fallom.session({
  configKey: "my-app",
  sessionId: "conversation-123",
  customerId: "customer-456",
});

const { generateText } = session.wrapAISDK(ai);

const openrouter = createOpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const { text } = await generateText({
  model: openrouter("openai/gpt-4o-mini"),
  prompt: "Hello!",
});
```
**Python**

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key",
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_body={
        "session_id": "conversation-123",
        "user": "customer-456",
    },
)
```
Available Fields
| Field | Description | Fallom Mapping |
| --- | --- | --- |
| `session_id` | Groups related requests (conversations, agent runs) | `session_id` |
| `user` | Identifies end users (up to 128 chars) | `customer_id` |
These fields must be in the request body , not headers. OpenRouter only
broadcasts body fields to OTEL destinations.
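Because only body fields are broadcast, one way to avoid forgetting them is a small wrapper that merges the tracking fields into every request body before sending. A sketch under that assumption; the `withTracking` name and shape are ours, not part of any SDK:

```typescript
interface TrackingFields {
  session_id?: string;
  user?: string;
}

// Hypothetical helper: merges Fallom tracking fields into an
// OpenRouter chat-completions request body.
function withTracking<T extends object>(
  body: T,
  tracking: TrackingFields
): T & TrackingFields {
  return { ...body, ...tracking };
}
```

The merged object can then be passed to `JSON.stringify` for fetch, or spread into the OpenAI SDK's `create()` call.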
Structured Output (JSON Mode) with Broadcast
For structured output with broadcast, use the OpenAI SDK with response_format:
```typescript
import OpenAI from "openai";

const openrouter = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: "https://openrouter.ai/api/v1",
  defaultHeaders: {
    "X-Broadcast-URL": "https://broadcast.fallom.com/v1/traces",
    "X-Broadcast-Auth": "Bearer YOUR_FALLOM_API_KEY",
  },
});

const response = await openrouter.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [
    {
      role: "system",
      content:
        "Output JSON only. Format: {name: string, price: number, description: string}",
    },
    { role: "user", content: "Generate a tech gadget." },
  ],
  response_format: { type: "json_object" },
  // @ts-ignore - OpenRouter extensions
  session_id: "conversation-123",
  user: "customer-456",
});

const product = JSON.parse(response.choices[0].message.content!);
console.log(product); // { name: "...", price: 99.99, description: "..." }
```
For Vercel AI SDK’s `generateObject`/`streamObject` with session tracking, use the Fallom SDK wrapper instead of Broadcast. The SDK wrapper provides full tracing with `fallom.session()`.
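JSON mode guarantees syntactically valid JSON, but not that the model followed the shape your system prompt asked for, so it is worth validating before use. A minimal sketch; the `parseProduct` helper and its checks are ours, not part of any SDK:

```typescript
interface Product {
  name: string;
  price: number;
  description: string;
}

// Hypothetical helper: parses model output and verifies the fields
// the system prompt asked for actually came back.
function parseProduct(raw: string): Product {
  const parsed = JSON.parse(raw);
  if (
    typeof parsed.name !== "string" ||
    typeof parsed.price !== "number" ||
    typeof parsed.description !== "string"
  ) {
    throw new Error("Model output did not match the expected Product shape");
  }
  return parsed as Product;
}
```

Calling `parseProduct(response.choices[0].message.content!)` then either returns a typed object or fails loudly, instead of letting a malformed field propagate.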
What Gets Traced
Every OpenRouter request automatically includes:
| Field | Description |
| --- | --- |
| Model | Full model path (e.g., `openai/gpt-4o`) |
| Provider | Which provider served the request |
| Duration | Request latency |
| Tokens | Input, output, cached tokens |
| Cost | OpenRouter’s cost calculation |
| Prompts | Input messages |
| Completions | Model response |
| Session | Your `session_id` (if provided) |
| Customer | Your `user` ID (if provided) |
Troubleshooting
Traces not appearing

- Verify Broadcast is enabled in OpenRouter settings
- Check the endpoint is exactly `https://broadcast.fallom.com/v1/traces`
- Confirm your API key starts with `flm_`
- Check the Authorization header format: `Bearer YOUR_API_KEY` (with a space)
401 Unauthorized
Your API key is invalid or the header format is wrong:
```json
{
  "Authorization": "Bearer flm_xxxxx"
}
```
Missing session or customer data
Include session_id and user in the request body , not as headers.
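The header check above can be sketched as a quick sanity function, run before saving the broadcast destination. It assumes Fallom keys start with `flm_` as noted above; the `isValidFallomAuthHeader` name is ours:

```typescript
// Hypothetical helper: validates the Authorization header value
// before configuring the broadcast destination.
function isValidFallomAuthHeader(value: string): boolean {
  // Must be "Bearer <key>" with a single space, key starting with flm_
  return /^Bearer flm_\S+$/.test(value);
}
```

A failed check points at the two most common 401 causes: a missing space after `Bearer`, or a key that isn’t a Fallom key.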
Next Steps
OpenRouter SDK: Use the Fallom SDK for Model A/B Testing and Prompt Management.
View Dashboard: See your traces in Fallom.