Initialize
fallom.init(api_key=None, base_url=None, capture_content=True)
Initialize the SDK. Call this before importing LLM libraries for auto-instrumentation.

| Parameter | Type | Description |
|---|---|---|
| api_key | str | Your Fallom API key (or use FALLOM_API_KEY env var) |
| base_url | str | API endpoint (default: https://spans.fallom.com) |
| capture_content | bool | Whether to capture prompt/completion text (default: True) |
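A minimal initialization sketch based on the signature above; the import ordering is the documented requirement:

```python
import fallom

# Call init() before importing any LLM library so auto-instrumentation
# can patch the client at import time. With api_key omitted, the SDK
# falls back to the FALLOM_API_KEY environment variable.
fallom.init(capture_content=True)

# Import LLM libraries only after init().
from openai import OpenAI
```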
Get Model
fallom.models.get(config_key, session_id, version=None, fallback=None) -> str
Get model assignment for a session.

| Parameter | Type | Description |
|---|---|---|
| config_key | str | Your config name from the dashboard |
| session_id | str | Unique session/conversation ID (sticky assignment) |
| version | int | Pin to specific version (default: latest) |
| fallback | str | Model to return if anything fails |
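A usage sketch; the config key, session ID, and fallback model are placeholders, and the fallback guards against lookup failures:

```python
import fallom

fallom.init()

from openai import OpenAI

# Sticky assignment: the same session_id gets the same model back
# for a given config version.
model = fallom.models.get(
    "chat-model",            # hypothetical config key from the dashboard
    "user-123",              # placeholder session/conversation ID
    fallback="gpt-4o-mini",  # returned if the lookup fails for any reason
)

client = OpenAI()
response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Hello!"}],
)
```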
Get Prompt
fallom.prompts.get(prompt_key, variables=None, version=None) -> PromptResult
Get a managed prompt.

| Parameter | Type | Description |
|---|---|---|
| prompt_key | str | Your prompt key from the dashboard |
| variables | dict | Template variables (e.g., {"user_name": "John"}) |
| version | int | Pin to specific version (default: latest) |
Returns: PromptResult with key, version, system, user
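A sketch of fetching a managed prompt; the prompt key and template variable are illustrative, and the field access follows the Returns line above:

```python
import fallom

fallom.init()

# "onboarding-email" is a hypothetical prompt key; variables fills
# the {"user_name"} placeholder in the stored template.
prompt = fallom.prompts.get(
    "onboarding-email",
    variables={"user_name": "John"},
)

print(prompt.version)  # resolved version (latest unless pinned)
print(prompt.system)   # rendered system prompt
print(prompt.user)     # rendered user prompt
```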
Get Prompt A/B Test
fallom.prompts.get_ab(ab_test_key, session_id, variables=None) -> PromptResult
Get a prompt from an A/B test (sticky assignment).

| Parameter | Type | Description |
|---|---|---|
| ab_test_key | str | Your A/B test key from the dashboard |
| session_id | str | Unique session/conversation ID (for sticky assignment) |
| variables | dict | Template variables |
Returns: PromptResult with key, version, system, user, ab_test_key, variant_index
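A sketch under the same assumptions; the test key is hypothetical, and variant_index shows which arm the session landed in:

```python
import fallom

fallom.init()

# Sticky: the same session_id is always routed to the same variant.
prompt = fallom.prompts.get_ab(
    "greeting-test",  # hypothetical A/B test key
    "user-123",       # placeholder session/conversation ID
    variables={"user_name": "John"},
)

print(prompt.variant_index)  # which variant this session was assigned
print(prompt.system)
print(prompt.user)
```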
Set Session
fallom.trace.set_session(config_key, session_id)
Set trace context. All subsequent LLM calls will be tagged with this config_key and session_id.
Clear Session
fallom.trace.clear_session()
Clear trace context.
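A sketch combining set_session() and clear_session(); the config key and session ID are placeholders. Clearing in a finally block keeps the context from leaking into unrelated calls:

```python
import fallom

fallom.init()

from openai import OpenAI

client = OpenAI()

# Every LLM call between set_session() and clear_session() is tagged
# with this config_key and session_id.
fallom.trace.set_session("chat-model", "user-123")
try:
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )
finally:
    fallom.trace.clear_session()
```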
Record Custom Metrics
fallom.trace.span(data, config_key=None, session_id=None)
Record custom business metrics.

| Parameter | Type | Description |
|---|---|---|
| data | dict | Metrics to record |
| config_key | str | Optional if set_session() was called |
| session_id | str | Optional if set_session() was called |
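A sketch with illustrative metric names; with set_session() active the identifiers can be omitted, otherwise pass them explicitly:

```python
import fallom

fallom.init()

# With an active session, config_key/session_id come from trace context.
fallom.trace.set_session("chat-model", "user-123")
fallom.trace.span({"tokens_saved": 120, "cache_hit": True})
fallom.trace.clear_session()

# Without a session, pass both explicitly. Metric names are illustrative.
fallom.trace.span(
    {"checkout_value_usd": 49.99},
    config_key="chat-model",
    session_id="user-123",
)
```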
Supported LLM Providers
Auto-instrumentation is available for:
- OpenAI (+ OpenAI-compatible APIs: OpenRouter, LiteLLM, vLLM, Ollama, etc.)
- Anthropic
- Cohere
- AWS Bedrock
- Google Generative AI
- Mistral AI
- LangChain
- Replicate
- Vertex AI
Install the corresponding opentelemetry-instrumentation-* package for your provider. You must use the provider's official SDK: raw HTTP requests (e.g., requests.post()) will not be traced. For OpenAI-compatible APIs, use the OpenAI SDK with a custom base_url.
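A sketch of the OpenAI-compatible path; OpenRouter, its base URL, the API key, and the model name are illustrative, and the matching instrumentation package (e.g., opentelemetry-instrumentation-openai) must be installed separately:

```python
import fallom

fallom.init()  # before importing the OpenAI SDK

from openai import OpenAI

# An OpenAI-compatible API traced through the official OpenAI SDK.
# The base_url, key, and model below are placeholders for OpenRouter.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello!"}],
)
```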