## Documentation Index

Fetch the complete documentation index at https://docs.fallom.com/llms.txt and use it to discover all available pages before exploring further.
OpenRouter provides access to 200+ LLM models through a single API. Use the Fallom SDK for full tracing with Model A/B Testing and Prompt Management.
## Installation

```bash
npm install @fallom/trace openai
```
## Quick Start

OpenRouter uses the OpenAI-compatible API:

```typescript
import fallom from "@fallom/trace";
import OpenAI from "openai";

// Initialize Fallom once at app startup
await fallom.init({ apiKey: process.env.FALLOM_API_KEY });

// Create a session for this conversation/request
const session = fallom.session({
  configKey: "my-app",
  sessionId: "session-123",
  customerId: "user-456",
});

// Wrap the OpenAI client, pointing it at the OpenRouter base URL
const openrouter = session.wrapOpenAI(
  new OpenAI({
    baseURL: "https://openrouter.ai/api/v1",
    apiKey: process.env.OPENROUTER_API_KEY,
  })
);

// Use any OpenRouter model
const response = await openrouter.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```
## Available Models

OpenRouter supports 200+ models:

```typescript
// OpenAI
model: "openai/gpt-4o";
model: "openai/gpt-4o-mini";

// Anthropic
model: "anthropic/claude-sonnet-4-20250514";
model: "anthropic/claude-3-opus";

// Google
model: "google/gemini-1.5-pro";
model: "google/gemini-1.5-flash";

// Meta
model: "meta-llama/llama-3.1-70b-instruct";

// And many more...
```
## With Vercel AI SDK

```typescript
import fallom from "@fallom/trace";
import * as ai from "ai";
import { createOpenAI } from "@ai-sdk/openai";

await fallom.init({ apiKey: "your-fallom-api-key" });

const session = fallom.session({
  configKey: "my-agent",
  sessionId: "session-123",
});

const { generateText, streamText } = session.wrapAISDK(ai);

const openrouter = createOpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const { text } = await generateText({
  model: openrouter("openai/gpt-4o-mini"),
  prompt: "Hello!",
});
```
## Model A/B Testing

Test different models from different providers:

```typescript
import fallom from "@fallom/trace";
import OpenAI from "openai";

await fallom.init({ apiKey: "your-fallom-api-key" });

const session = fallom.session({
  configKey: "my-experiment",
  sessionId: "session-123",
});

const openrouter = session.wrapOpenAI(
  new OpenAI({
    baseURL: "https://openrouter.ai/api/v1",
    apiKey: process.env.OPENROUTER_API_KEY,
  })
);

// Get the assigned model - could be GPT-4o, Claude, Gemini, etc.
const modelId = await session.getModel({ fallback: "openai/gpt-4o-mini" });

const response = await openrouter.chat.completions.create({
  model: modelId,
  messages: [{ role: "user", content: "Hello!" }],
});
```
## Streaming

```typescript
const stream = await openrouter.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Write a poem." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```
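If you also need the complete response text after streaming, accumulate the deltas as you print them. A minimal sketch: the `Chunk` type below is a simplified stand-in for the OpenAI SDK's streaming chunk shape, and `fakeStream` is a hypothetical substitute for the real stream returned when you pass `stream: true`.

```typescript
// Simplified chunk shape for illustration (the real OpenAI SDK chunk
// type has more fields; only the delta content is used here).
type Chunk = { choices: Array<{ delta?: { content?: string } }> };

// Print each delta as it arrives and return the accumulated full text.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta?.content ?? "";
    process.stdout.write(delta);
    text += delta;
  }
  return text;
}

// Simulated stream; in real code, pass the result of
// openrouter.chat.completions.create({ ..., stream: true }) instead.
async function* fakeStream(): AsyncGenerator<Chunk> {
  yield { choices: [{ delta: { content: "Hello, " } }] };
  yield { choices: [{ delta: { content: "world!" } }] };
}

const fullText = await collectStream(fakeStream());
```

This keeps the live terminal output from the streaming example while still giving you the full string for logging or post-processing.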
## What Gets Traced

| Field | Description |
|---|---|
| Model | Full model path (e.g., `openai/gpt-4o`) |
| Duration | Total request time (ms) |
| Tokens | Prompt and completion tokens |
| Cost | Calculated from token usage |
| Prompts | All input messages |
| Completions | Model response |
| Session | Your config key + session ID |
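Cost is derived from the traced token counts and the model's per-token pricing. A rough sketch of the arithmetic only: the rates below are hypothetical placeholders, not real OpenRouter prices, and Fallom looks up each model's actual pricing for you.

```typescript
// Hypothetical per-million-token USD rates, for illustration only.
const RATE_PER_MILLION = { prompt: 0.15, completion: 0.6 };

// Cost = tokens consumed, scaled by each side's per-million-token rate.
function estimateCost(promptTokens: number, completionTokens: number): number {
  return (
    (promptTokens / 1_000_000) * RATE_PER_MILLION.prompt +
    (completionTokens / 1_000_000) * RATE_PER_MILLION.completion
  );
}

console.log(estimateCost(1_000, 500)); // estimated USD for 1,000 prompt + 500 completion tokens
```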
## SDK vs Broadcast

| Feature | SDK (this page) | Broadcast |
|---|---|---|
| Tracing | ✅ | ✅ |
| Token tracking | ✅ | ✅ |
| Cost tracking | ✅ | ✅ |
| Session grouping | ✅ | ✅ |
| Model A/B Testing | ✅ | ❌ |
| Prompt Management | ✅ | ❌ |
## Next Steps

- **OpenRouter Broadcast**: SDK-free tracing via OpenRouter.
- **Model A/B Testing**: Test models across providers.