
LLM Providers

Synapse supports 14+ LLM providers through a unified interface. The active provider is determined by the model name prefix. You can set a global default model and override it per-agent or per-step.

Provider routing table

| Prefix | Provider | Example models |
|---|---|---|
| `ollama.` | Ollama (local) | `mistral`, `llama3`, `qwen2.5`, `phi4` |
| `claude-` | Anthropic | `claude-3-5-sonnet-20241022`, `claude-opus-4-7` |
| `gpt-` / `o1-` / `o3-` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `o1-mini`, `o3` |
| `gemini-` / `gemma-` | Google Gemini | `gemini-2.0-flash`, `gemini-1.5-pro` |
| `grok-` | xAI Grok | `grok-3`, `grok-2-vision` |
| `deepseek-` | DeepSeek | `deepseek-chat`, `deepseek-reasoner` |
| `bedrock.` | AWS Bedrock | `bedrock.anthropic.claude-3-5-sonnet...` |
| `oaic.` | OpenAI-compatible (cloud) | `oaic.mistral-7b`, `oaic.llama-3-70b` |
| `locv1.` | Local v1-compatible | `locv1.mistral`, `locv1.qwen` |
| `cli.claude` | Claude CLI | `cli.claude` |
| `cli.gemini` | Gemini CLI | `cli.gemini` |
| `cli.codex` | OpenAI Codex CLI | `cli.codex` |
| `cli.copilot` | GitHub Copilot CLI | `cli.copilot`, `cli.copilot.claude-sonnet-4-5` |
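The routing rule in the table above amounts to a first-matching-prefix lookup. As an illustration only (Synapse's internal routing API is not shown here; the function and table names are hypothetical), the behavior can be sketched as:

```python
# Hypothetical sketch of prefix-based provider routing, mirroring the table above.
# The first matching prefix wins; str.startswith handles longer model names such
# as "cli.copilot.claude-sonnet-4-5" matching the "cli.copilot" entry.
PREFIX_PROVIDERS = [
    ("cli.claude", "Claude CLI"),
    ("cli.gemini", "Gemini CLI"),
    ("cli.codex", "OpenAI Codex CLI"),
    ("cli.copilot", "GitHub Copilot CLI"),
    ("ollama.", "Ollama (local)"),
    ("claude-", "Anthropic"),
    ("gpt-", "OpenAI"),
    ("o1-", "OpenAI"),
    ("o3-", "OpenAI"),
    ("gemini-", "Google Gemini"),
    ("gemma-", "Google Gemini"),
    ("grok-", "xAI Grok"),
    ("deepseek-", "DeepSeek"),
    ("bedrock.", "AWS Bedrock"),
    ("oaic.", "OpenAI-compatible (cloud)"),
    ("locv1.", "Local v1-compatible"),
]

def resolve_provider(model: str) -> str:
    """Return the provider name for a model string, per its prefix."""
    for prefix, provider in PREFIX_PROVIDERS:
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider matches model {model!r}")
```
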

Setting the default model

In Settings → LLM:

  1. Choose a Mode: local, cloud, or bedrock
  2. Enter your API key for the selected provider
  3. Set the Default model name
```json
{
  "mode": "cloud",
  "anthropic_key": "sk-ant-...",
  "model": "claude-3-5-sonnet-20241022"
}
```
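For comparison, a local-mode sketch (assuming the same settings keys, and that Ollama models are addressed with the `ollama.` prefix) needs no API key:

```json
{
  "mode": "local",
  "model": "ollama.mistral"
}
```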

Per-agent model override

Each agent can override the global default:

```json
{
  "name": "Fast Router",
  "model": "claude-haiku-4-5-20251001"
}
```

Per-step model override

In orchestration steps, set the `model` field to use a different model for that step only:

```json
{
  "id": "step-classify",
  "type": "llm",
  "model": "gpt-4o-mini",
  "prompt_template": "Classify: {state.input}"
}
```

This is powerful for cost management: use a cheap model for routing/classification steps and a capable model only where quality matters.
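As a sketch of that pattern (the `steps` array, step ids, and prompts here are illustrative, not a canonical Synapse schema), a cheap classifier can feed a stronger model:

```json
{
  "steps": [
    {
      "id": "step-route",
      "type": "llm",
      "model": "gpt-4o-mini",
      "prompt_template": "Classify: {state.input}"
    },
    {
      "id": "step-answer",
      "type": "llm",
      "model": "claude-3-5-sonnet-20241022",
      "prompt_template": "Answer in depth: {state.input}"
    }
  ]
}
```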

Cost limits

Set `max_total_cost_usd` on an orchestration to halt execution if costs exceed the budget:

```json
{
  "max_total_cost_usd": 0.50
}
```

Cost is tracked in real time. The orchestration transitions to failed with a cost-limit error if the budget is exceeded.
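The halt-on-budget behavior can be sketched as a running total checked after each step. This is a minimal illustration with hypothetical names, not Synapse's actual implementation:

```python
# Sketch of the cost-limit check: accumulate per-step cost and fail the run
# once the total exceeds the configured budget. Names are hypothetical.
class CostLimitExceeded(Exception):
    """Raised when accumulated cost exceeds max_total_cost_usd."""

def run_steps(step_costs_usd: list[float], max_total_cost_usd: float) -> float:
    """Accumulate step costs; raise if the budget is exceeded."""
    total = 0.0
    for cost in step_costs_usd:
        total += cost
        if total > max_total_cost_usd:
            # In Synapse, this corresponds to the orchestration transitioning
            # to "failed" with a cost-limit error.
            raise CostLimitExceeded(
                f"budget ${max_total_cost_usd:.2f} exceeded at ${total:.2f}"
            )
    return total
```
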