Manage prompts centrally and A/B test them with zero latency.
Create and edit your prompts in the dashboard, then retrieve them at runtime with the SDK. The examples below use the Python SDK; a TypeScript SDK is also available.

Basic Prompt Retrieval

from openai import OpenAI

from fallom import prompts

client = OpenAI()  # any OpenAI-compatible client works here

# Get a managed prompt (with template variables)
prompt = prompts.get("onboarding", variables={
    "user_name": "John",
    "company": "Acme"
})

# Use the prompt with any LLM
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": prompt.system},
        {"role": "user", "content": prompt.user}
    ]
)
The prompt object contains:
  • key: the prompt key
  • version: the prompt version
  • system: the system prompt, with variables replaced
  • user: the user template, with variables replaced
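
Assuming the fields above, you can inspect a retrieved prompt directly; the printed values here are illustrative:

prompt = prompts.get("onboarding", variables={
    "user_name": "John",
    "company": "Acme"
})

print(prompt.key)      # "onboarding"
print(prompt.version)  # whichever version was served, e.g. 3
print(prompt.system)   # system prompt with the variables filled in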

Prompt A/B Testing

Run experiments on different prompt versions:
from fallom import prompts

session_id = "user-123"  # a stable identifier for the current user or session

# Get prompt from A/B test (sticky assignment based on session_id)
prompt = prompts.get_ab("onboarding-test", session_id, variables={
    "user_name": "John"
})

# prompt.ab_test_key and prompt.variant_index are set
# for analytics in your dashboard
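
Because assignment is sticky, repeated calls with the same session_id resolve to the same variant. A minimal sketch of that guarantee (the assertion is only illustrative):

from fallom import prompts

session_id = "user-123"

first = prompts.get_ab("onboarding-test", session_id, variables={"user_name": "John"})
second = prompts.get_ab("onboarding-test", session_id, variables={"user_name": "John"})

# Same session_id, same variant - on every call
assert first.variant_index == second.variant_index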

Version Pinning

from fallom import prompts

# Use the latest version (default)
prompt = prompts.get("my-prompt")

# Pin to specific version
prompt = prompts.get("my-prompt", version=2)
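
One way to wire pinning into a deployment is to read the pinned version from configuration, so you can roll back without a code change. A sketch, assuming a hypothetical MY_PROMPT_VERSION environment variable:

import os

from fallom import prompts

# MY_PROMPT_VERSION is hypothetical; when unset, fall back to the latest version
pinned = os.environ.get("MY_PROMPT_VERSION")
if pinned:
    prompt = prompts.get("my-prompt", version=int(pinned))
else:
    prompt = prompts.get("my-prompt")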

Automatic Trace Tagging

When you call prompts.get() or prompts.get_ab(), the next LLM call is automatically tagged with the prompt information. This allows you to see which prompts are used in your traces without any extra code.
# Get prompt - sets up auto-tagging for the next LLM call
# (prompts and client are set up as in the first example)
prompt = prompts.get("onboarding", variables={"user_name": "John"})

# This call is automatically tagged with prompt_key, prompt_version, etc.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": prompt.system},
        {"role": "user", "content": prompt.user}
    ]
)
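
The same tagging happens after prompts.get_ab(). A sketch of an A/B call followed by a traced LLM call; whether the variant fields (ab_test_key, variant_index) appear in the trace tags is an assumption based on the analytics note above:

from openai import OpenAI

from fallom import prompts

client = OpenAI()
session_id = "user-123"

prompt = prompts.get_ab("onboarding-test", session_id, variables={"user_name": "John"})

# Auto-tagged with the prompt info (and, presumably, the A/B variant)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": prompt.system},
        {"role": "user", "content": prompt.user}
    ]
)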

Next Steps