Test different LLM models to optimize performance, quality, and cost
Run A/B tests across models with zero added latency. The same session always gets the same model (sticky assignment).
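Fallom doesn't document its internals here, but sticky weighted assignment is typically done by hashing the session ID to a point in [0, 1) and walking the cumulative config weights. A minimal sketch (the `sticky_pick` helper and the weight values are illustrative assumptions, not Fallom's actual implementation):

```python
import hashlib

def sticky_pick(weights: dict[str, float], session_id: str) -> str:
    """Deterministically pick a model for a session by weighted hash.

    Hypothetical helper: hashing the session ID yields a stable point
    in [0, 1), so the same session always maps to the same model.
    """
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    point = int(digest, 16) / 16**64  # deterministic value in [0, 1)
    total = sum(weights.values())
    cumulative = 0.0
    for model, weight in weights.items():
        cumulative += weight / total
        if point < cumulative:
            return model
    return model  # fallback guard against float rounding

# Example weights; in Fallom these come from your dashboard config.
weights = {"gpt-4o": 0.5, "claude-3-5-sonnet": 0.5}
assert sticky_pick(weights, "session-123") == sticky_pick(weights, "session-123")
```

Because the pick is a pure function of the session ID and weights, no per-session state needs to be stored or looked up, which is what makes zero-latency assignment possible.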
Create and manage your model configs in the dashboard.
```python
from fallom import models

# Get assigned model for this session
model = models.get("summarizer-config", session_id)
# Returns: "gpt-4o" or "claude-3-5-sonnet" based on your config weights

agent = Agent(model=model)
agent.run(message)
```
Pin to a specific config version, or use the latest (the default):
```python
# Use latest version (default)
model = models.get("my-config", session_id)

# Pin to specific version
model = models.get("my-config", session_id, version=2)
```