LLM Providers
Synapse supports 14+ LLM providers through a unified interface. The active provider is determined by the model name prefix. You can set a global default model and override it per-agent or per-step.
Provider routing table
| Prefix | Provider | Example models |
|---|---|---|
| `ollama.` | Ollama (local) | `mistral`, `llama3`, `qwen2.5`, `phi4` |
| `claude-` | Anthropic | `claude-3-5-sonnet-20241022`, `claude-opus-4-7` |
| `gpt-` / `o1-` / `o3-` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `o1-mini`, `o3` |
| `gemini-` / `gemma-` | Google Gemini | `gemini-2.0-flash`, `gemini-1.5-pro` |
| `grok-` | xAI Grok | `grok-3`, `grok-2-vision` |
| `deepseek-` | DeepSeek | `bedrock.anthropic.claude-3-5-sonnet...`-style names are listed below; DeepSeek uses `deepseek-chat`, `deepseek-reasoner` |
| `bedrock.` | AWS Bedrock | `bedrock.anthropic.claude-3-5-sonnet...` |
| `oaic.` | OpenAI-compatible (cloud) | `oaic.mistral-7b`, `oaic.llama-3-70b` |
| `locv1.` | Local v1-compatible | `locv1.mistral`, `locv1.qwen` |
| `cli.claude` | Claude CLI | `cli.claude` |
| `cli.gemini` | Gemini CLI | `cli.gemini` |
| `cli.codex` | OpenAI Codex CLI | `cli.codex` |
| `cli.copilot` | GitHub Copilot CLI | `cli.copilot`, `cli.copilot.claude-sonnet-4-5` |
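
Routing is driven entirely by the prefix in the model name, so switching providers is a matter of changing one string. As an illustration (the exact model names here come from the table above; the single-field fragment is a sketch, not a complete config):

```json
{
  "model": "ollama.mistral"
}
```

Changing this value to, say, `gpt-4o-mini` would route the same requests to OpenAI instead of the local Ollama instance.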
Setting the default model
In Settings → LLM:
- Choose a Mode: `local`, `cloud`, or `bedrock`
- Enter your API key for the selected provider
- Set the Default model name
```json
{
  "mode": "cloud",
  "anthropic_key": "sk-ant-...",
  "model": "claude-3-5-sonnet-20241022"
}
```
Per-agent model override
Each agent can override the global default:
```json
{
  "name": "Fast Router",
  "model": "claude-haiku-4-5-20251001"
}
```
Per-step model override
In orchestration steps, set `model` to use a different model for that step only:

```json
{
  "id": "step-classify",
  "type": "llm",
  "model": "gpt-4o-mini",
  "prompt_template": "Classify: {state.input}"
}
```
This is powerful for cost management: use a cheap model for routing/classification steps and a capable model only where quality matters.
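As a sketch of that pattern, the orchestration below routes with a cheap model and answers with a capable one. The step ids and the shape of the `steps` array are illustrative, extrapolated from the per-step example above:

```json
{
  "steps": [
    {
      "id": "step-classify",
      "type": "llm",
      "model": "gpt-4o-mini",
      "prompt_template": "Classify: {state.input}"
    },
    {
      "id": "step-answer",
      "type": "llm",
      "model": "claude-3-5-sonnet-20241022",
      "prompt_template": "Answer: {state.input}"
    }
  ]
}
```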
Cost limits
Set `max_total_cost_usd` on an orchestration to halt execution if costs exceed the budget:

```json
{
  "max_total_cost_usd": 0.50
}
```
Cost is tracked in real time. The orchestration transitions to `failed` with a cost-limit error if the budget is exceeded.