Get Your API Key

1. Sign up: create a free account at app.fallom.com.
2. Copy your API key: go to Settings → API Keys and copy your key.
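The initialization code below reads the key from the `FALLOM_API_KEY` environment variable, so export it before running your app (the placeholder value is illustrative; paste the key you copied from the dashboard):

```shell
# Make the key from Settings → API Keys available to your app.
# You can also put this line in a .env file loaded at startup.
export FALLOM_API_KEY="your-api-key"
```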

Install the SDK

npm install @fallom/trace

Add 3 Lines of Code

import fallom from "@fallom/trace";
import OpenAI from "openai";

// 1. Initialize once at startup
await fallom.init({ apiKey: process.env.FALLOM_API_KEY });

// 2. Create a session (groups related calls)
const session = fallom.session({
  configKey: "my-app",
  sessionId: "user-123",
});

// 3. Wrap your client
const openai = session.wrapOpenAI(new OpenAI());

// Done! All calls are now traced
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
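Conceptually, wrapping a client just intercepts each call to time it and record its metadata before passing the result through. A minimal sketch of that idea, not the actual @fallom/trace implementation; the `LLMClient` shape, `Trace` record, and fake client are all illustrative stand-ins:

```typescript
// Illustrative stand-in for an OpenAI-style client (not the real SDK types).
type ChatArgs = { model: string; messages: { role: string; content: string }[] };
type ChatResult = { content: string; usage: { input: number; output: number } };

interface LLMClient {
  chat(args: ChatArgs): Promise<ChatResult>;
}

// Hypothetical trace record, loosely matching the fields in the table below.
interface Trace {
  model: string;
  latencyMs: number;
  tokens: { input: number; output: number };
}

const traces: Trace[] = [];

// Wrap a client so every chat() call is timed and recorded, then the
// result is returned unchanged to the caller.
function wrapClient(client: LLMClient): LLMClient {
  return {
    async chat(args) {
      const start = Date.now();
      const result = await client.chat(args);
      traces.push({
        model: args.model,
        latencyMs: Date.now() - start,
        tokens: result.usage,
      });
      return result;
    },
  };
}

// Demo with a fake client standing in for new OpenAI().
const fake: LLMClient = {
  async chat() {
    return { content: "Hello!", usage: { input: 5, output: 2 } };
  },
};

const client = wrapClient(fake);
const res = await client.chat({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hi" }],
});
// res is exactly what the underlying client returned; one trace was recorded.
```

Because the wrapper returns the original result untouched, existing call sites keep working without any other changes.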

What You Get

Every LLM call is automatically captured:
Field         Description
Model         Which model was used
Tokens        Input, output, and cached counts
Latency       Request duration + time to first token
Cost          Calculated from token usage
Prompts       Full messages sent
Completions   Model responses
Session       Grouped by user/conversation
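The cost field is simple arithmetic over the captured token counts. A sketch of that calculation, using hypothetical per-million-token prices (real prices vary by model and provider; the `PRICES` table here is made up for illustration):

```typescript
// Hypothetical USD prices per million tokens -- not Fallom's actual rates.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
};

// Cost = tokens x per-token price, summed over input and output.
function costUSD(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) throw new Error(`no pricing for ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

console.log(costUSD("gpt-4o", 1000, 500)); // 0.0075
```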

View Your Traces

Open the dashboard to see your LLM calls in real time.

Next Steps