OpenRouter provides access to 200+ models from many providers through a single API. Use the Fallom SDK for full tracing with Model A/B Testing and Prompt Management.
For SDK-free observability, see OpenRouter Broadcast.

Installation

npm install @fallom/trace openai

Quick Start

OpenRouter exposes an OpenAI-compatible API, so the standard OpenAI client works once you point it at OpenRouter's base URL:
import { trace } from "@fallom/trace";
import OpenAI from "openai";

// Initialize Fallom
await trace.init({ apiKey: process.env.FALLOM_API_KEY });

// Wrap OpenAI client with OpenRouter base URL
const openrouter = trace.wrapOpenAI(
  new OpenAI({
    baseURL: "https://openrouter.ai/api/v1",
    apiKey: process.env.OPENROUTER_API_KEY,
  })
);

// Set session context (configKey, sessionId, userId)
trace.setSession("my-app", "session-123", "user-456");

// Use any OpenRouter model
const response = await openrouter.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
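OpenRouter also accepts optional attribution headers (`HTTP-Referer` and `X-Title`) that identify your app on openrouter.ai. The OpenAI client can send them via `defaultHeaders`; the URL and title below are placeholders for your own values:

```typescript
import { trace } from "@fallom/trace";
import OpenAI from "openai";

// Same wrapped client as above, plus optional OpenRouter attribution headers.
const openrouter = trace.wrapOpenAI(
  new OpenAI({
    baseURL: "https://openrouter.ai/api/v1",
    apiKey: process.env.OPENROUTER_API_KEY,
    defaultHeaders: {
      "HTTP-Referer": "https://your-app.example", // your site URL
      "X-Title": "My App", // your app's display name
    },
  })
);
```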

Available Models

OpenRouter supports 200+ models:
// OpenAI
model: "openai/gpt-4o";
model: "openai/gpt-4o-mini";

// Anthropic
model: "anthropic/claude-sonnet-4-20250514";
model: "anthropic/claude-3-opus";

// Google
model: "google/gemini-1.5-pro";
model: "google/gemini-1.5-flash";

// Meta
model: "meta-llama/llama-3.1-70b-instruct";

// And many more...
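Every model ID follows the same `provider/model` naming scheme. If you route or log by provider, a small helper keeps that parsing in one place (a sketch; `parseModelId` is our name, not part of the SDK):

```typescript
// Split an OpenRouter model ID of the form "provider/model".
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) throw new Error(`Unexpected model id: ${id}`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(parseModelId("anthropic/claude-3-opus"));
// { provider: "anthropic", model: "claude-3-opus" }
```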

With Vercel AI SDK

import { trace } from "@fallom/trace";
import * as ai from "ai";
import { createOpenAI } from "@ai-sdk/openai";

await trace.init({ apiKey: process.env.FALLOM_API_KEY });

const { generateText, streamText } = trace.wrapAISDK(ai);

const openrouter = createOpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

trace.setSession("my-agent", "session-123");

const { text } = await generateText({
  model: openrouter("openai/gpt-4o-mini"),
  prompt: "Hello!",
});
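The wrapped `streamText` works the same way. A minimal sketch, assuming the `openrouter` provider and wrapped functions from the example above:

```typescript
// Streams tokens to stdout while Fallom records the trace.
const result = await streamText({
  model: openrouter("openai/gpt-4o-mini"),
  prompt: "Write a haiku about the sea.",
});

for await (const delta of result.textStream) {
  process.stdout.write(delta);
}
```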

Model A/B Testing

Test different models from different providers:
import { trace, models } from "@fallom/trace";
import OpenAI from "openai";

// Initialize both trace and models
await trace.init({ apiKey: process.env.FALLOM_API_KEY });
models.init({ apiKey: process.env.FALLOM_API_KEY });

const openrouter = trace.wrapOpenAI(
  new OpenAI({
    baseURL: "https://openrouter.ai/api/v1",
    apiKey: process.env.OPENROUTER_API_KEY,
  })
);

// Get assigned model - could be GPT-4o, Claude, Gemini, etc.
const modelId = await models.get("my-experiment", "session-123", {
  fallback: "openai/gpt-4o-mini",
});

trace.setSession("my-experiment", "session-123");

const response = await openrouter.chat.completions.create({
  model: modelId,
  messages: [{ role: "user", content: "Hello!" }],
});

Streaming

const stream = await openrouter.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Write a poem." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
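If you also need the full text after streaming (for logging or post-processing), collecting the deltas into a buffer is straightforward. This is a generic sketch, not a Fallom API; `collectText` is our own helper name:

```typescript
// Accumulate streamed text deltas into a single string.
async function collectText(chunks: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const delta of chunks) {
    out += delta;
  }
  return out;
}
```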

What Gets Traced

| Field | Description |
| --- | --- |
| Model | Full model path (e.g., `openai/gpt-4o`) |
| Duration | Total request time (ms) |
| Tokens | Prompt and completion tokens |
| Cost | Calculated from token usage |
| Prompts | All input messages |
| Completions | Model response |
| Session | Your config key + session ID |

SDK vs Broadcast

| Feature | SDK (this page) | Broadcast |
| --- | --- | --- |
| Tracing | ✅ | ✅ |
| Token tracking | ✅ | ✅ |
| Cost tracking | ✅ | ✅ |
| Session grouping | ✅ | ✅ |
| Model A/B Testing | ✅ | ❌ |
| Prompt Management | ✅ | ❌ |

Next Steps