> **Documentation Index:** Fetch the complete documentation index at https://docs.fallom.com/llms.txt and use it to discover all available pages before exploring further.
## Installation

```bash
npm install @fallom/trace @anthropic-ai/sdk
```
## Quick Start

```typescript
import fallom from "@fallom/trace";
import Anthropic from "@anthropic-ai/sdk";

// Initialize Fallom once at app startup
await fallom.init({ apiKey: process.env.FALLOM_API_KEY });

// Create a session for this conversation/request
const session = fallom.session({
  configKey: "my-app",
  sessionId: "session-123",
  customerId: "user-456",
});

// Wrap your Anthropic client
const anthropic = session.wrapAnthropic(new Anthropic());

// Use as normal - automatically traced!
const response = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.content[0].text);
```
## With System Prompt

```typescript
const response = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system: "You are a helpful assistant who speaks like a pirate.",
  messages: [{ role: "user", content: "Tell me about the weather." }],
});
```
## Streaming

```typescript
const stream = anthropic.messages.stream({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a poem about coding." }],
});

for await (const event of stream) {
  // Only text_delta events carry a text field; thinking deltas use other shapes
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
```
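If you also need the assembled text after streaming (for example, to store it alongside the trace), you can accumulate the deltas yourself. The helper below is an illustrative sketch of the accumulation the loop above performs; `TextDeltaEvent` is a simplified stand-in for the SDK's real event union, not an exported type:

```typescript
// A simplified subset of the streaming event shape this helper reads.
interface TextDeltaEvent {
  type: string;
  delta?: { type?: string; text?: string };
}

// Concatenate the text deltas from a sequence of streaming events,
// mirroring what the for-await loop writes to stdout.
function collectText(events: TextDeltaEvent[]): string {
  let out = "";
  for (const event of events) {
    if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
      out += event.delta.text ?? "";
    }
  }
  return out;
}
```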
## With Extended Thinking

```typescript
const response = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 16000,
  thinking: {
    type: "enabled",
    budget_tokens: 10000,
  },
  messages: [{ role: "user", content: "Solve this complex problem..." }],
});

// Access thinking and response
const thinking = response.content.find((c) => c.type === "thinking");
const text = response.content.find((c) => c.type === "text");
```
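In TypeScript, `find` returns the full content-block union, so you still need to narrow before reading block-specific fields. A minimal sketch of that extraction, using simplified stand-in types rather than the SDK's exported ones:

```typescript
// Simplified content-block shapes (a subset of the SDK's union types).
type ContentBlock =
  | { type: "thinking"; thinking: string }
  | { type: "text"; text: string };

// Pull the thinking trace and the final text out of a response's content array.
function splitContent(content: ContentBlock[]): { thinking?: string; text?: string } {
  const thinkingBlock = content.find((c) => c.type === "thinking");
  const textBlock = content.find((c) => c.type === "text");
  return {
    thinking: thinkingBlock?.type === "thinking" ? thinkingBlock.thinking : undefined,
    text: textBlock?.type === "text" ? textBlock.text : undefined,
  };
}
```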
## Model A/B Testing

```typescript
import fallom from "@fallom/trace";
import Anthropic from "@anthropic-ai/sdk";

await fallom.init({ apiKey: "your-fallom-api-key" });

const session = fallom.session({
  configKey: "my-experiment",
  sessionId: "session-123",
});

const anthropic = session.wrapAnthropic(new Anthropic());

// Get assigned model for this session
const modelId = await session.getModel({ fallback: "claude-sonnet-4-20250514" });

const response = await anthropic.messages.create({
  model: modelId,
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});
```
## What Gets Traced

| Field       | Description                    |
| ----------- | ------------------------------ |
| Model       | claude-sonnet-4-20250514, etc. |
| Duration    | Total request time (ms)        |
| Tokens      | Input, output tokens           |
| Cost        | Calculated from token usage    |
| Prompts     | System + user messages         |
| Completions | Model response                 |
| Session     | Your config key + session ID   |
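Cost is derived from the traced token counts and a per-model price table. As an illustration only (the prices and the `PRICING` table below are placeholders, not Fallom's actual pricing data), the calculation looks roughly like:

```typescript
// Hypothetical per-million-token prices in USD; real prices vary by model.
const PRICING: Record<string, { inputPerM: number; outputPerM: number }> = {
  "claude-sonnet-4-20250514": { inputPerM: 3, outputPerM: 15 },
};

// Compute request cost in USD from token usage, as a trace backend might.
function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) return 0;
  return (inputTokens / 1_000_000) * p.inputPerM + (outputTokens / 1_000_000) * p.outputPerM;
}
```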
## Next Steps

- **Model A/B Testing** - Compare Claude models in production.
- **OpenAI** - Also using OpenAI? See that guide.