
Installation

npm install @fallom/trace openai
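
yarn and pnpm work as well:

yarn add @fallom/trace openai
pnpm add @fallom/trace openai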

Quick Start

import { trace } from "@fallom/trace";
import OpenAI from "openai";

// Initialize Fallom
await trace.init({ apiKey: process.env.FALLOM_API_KEY });

// Wrap your OpenAI client
const openai = trace.wrapOpenAI(new OpenAI());

// Set session context (configKey, sessionId, userId)
trace.setSession("my-app", "session-123", "user-456");

// Use as normal - automatically traced!
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
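
In a real application you would typically set a fresh session ID per conversation. A minimal sketch, assuming you generate your own IDs (the config key and user ID here are placeholders):

import { randomUUID } from "crypto";

// One session per conversation keeps traces grouped; the third argument ties them to a user.
const sessionId = randomUUID();
trace.setSession("my-app", sessionId, "user-456");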

Streaming

Streaming responses are also traced, including time-to-first-token:

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem about coding." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
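
Token usage is not included on streamed responses by default; the OpenAI API can append it in a final chunk via stream_options. A sketch with the same wrapped client (whether the trace picks up this usage payload is an assumption to verify):

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem about coding." }],
  stream: true,
  // Ask OpenAI to send prompt/completion token counts in the final chunk.
  stream_options: { include_usage: true },
});

for await (const chunk of stream) {
  // The usage-only final chunk has an empty choices array, so this stays safe.
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}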

With Azure OpenAI

import { trace } from "@fallom/trace";
import { AzureOpenAI } from "openai";

await trace.init({ apiKey: "your-fallom-api-key" });

const azure = trace.wrapOpenAI(
  new AzureOpenAI({
    apiKey: process.env.AZURE_OPENAI_API_KEY,
    endpoint: process.env.AZURE_OPENAI_ENDPOINT,
    apiVersion: "2024-02-01",
  })
);

trace.setSession("my-agent", "session-123");

const response = await azure.chat.completions.create({
  model: "gpt-4o", // Your deployment name
  messages: [{ role: "user", content: "Hello!" }],
});

Multimodal (Images)

Image inputs are automatically handled:

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image_url",
          image_url: { url: "https://example.com/image.jpg" },
        },
      ],
    },
  ],
});
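
Local files work the same way; the OpenAI API also accepts base64 data URLs in image_url. A sketch assuming a local JPEG:

import { readFileSync } from "fs";

// Encode a local file as a data URL; the request is traced like any other.
const b64 = readFileSync("photo.jpg").toString("base64");

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image_url",
          image_url: { url: `data:image/jpeg;base64,${b64}` },
        },
      ],
    },
  ],
});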

Model A/B Testing

import { trace, models } from "@fallom/trace";
import OpenAI from "openai";

// Initialize both trace and models
await trace.init({ apiKey: "your-fallom-api-key" });
models.init({ apiKey: "your-fallom-api-key" });

const openai = trace.wrapOpenAI(new OpenAI());

// Get assigned model for this session
const modelId = await models.get("my-experiment", "session-123", {
  fallback: "gpt-4o-mini",
});

trace.setSession("my-experiment", "session-123");

const response = await openai.chat.completions.create({
  model: modelId, // Uses A/B test assigned model
  messages: [{ role: "user", content: "Hello!" }],
});
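
Since models.get and trace.setSession take the same config key and session ID, you can combine them; a small illustrative helper (not part of the SDK):

// Resolve the assigned model and bind the session in one step.
async function startExperimentSession(
  configKey: string,
  sessionId: string,
  fallback: string
): Promise<string> {
  const modelId = await models.get(configKey, sessionId, { fallback });
  trace.setSession(configKey, sessionId);
  return modelId;
}

const model = await startExperimentSession("my-experiment", "session-123", "gpt-4o-mini");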

What Gets Traced

Field | Description
Model | gpt-4o, gpt-4o-mini, etc.
Duration | Total request time (ms)
Tokens | Prompt, completion, and cached tokens
Cost | Calculated from token usage
Prompts | All input messages
Completions | Model response
Session | Your config key + session ID

Next Steps