
# Supported Models

Wireclaw routes all LLM requests through Sulaert, an internal model router that supports 13+ providers. Switch models by changing one line in your config; no code changes are needed.

```toml
[agent]
model = "claude-sonnet-4-5"  # Change this to switch models
```
**Anthropic**

| Model ID | Best for |
| --- | --- |
| `claude-opus-4-5` | Complex reasoning, analysis, long-form content |
| `claude-sonnet-4-5` | Best balance of speed and capability |
| `claude-haiku-3-5` | Fast responses, simple tasks, cost-efficient |

**OpenAI**

| Model ID | Best for |
| --- | --- |
| `gpt-4o` | Multimodal, general purpose |
| `gpt-4o-mini` | Fast, cost-efficient |
| `o3` | Advanced reasoning |

**Google**

| Model ID | Best for |
| --- | --- |
| `gemini-2.5-pro` | Long context, multimodal |
| `gemini-2.5-flash` | Fast, cost-efficient |

**DeepSeek**

| Model ID | Best for |
| --- | --- |
| `deepseek-r1` | Reasoning, math, code |
| `deepseek-chat` | General conversation |

**Mistral**

| Model ID | Best for |
| --- | --- |
| `mistral-large` | Complex tasks, multilingual |
| `mistral-small` | Fast, cost-efficient |

**xAI**

| Model ID | Best for |
| --- | --- |
| `grok-3` | General purpose, real-time knowledge |
| Provider | Setup |
| --- | --- |
| Ollama (local models) | Set the `OLLAMA_HOST` env var |
| Azure OpenAI | Set `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` |
| AWS Bedrock | Set AWS credentials in env vars |
| Any OpenAI-compatible endpoint | Set `OPENAI_API_URL` to your endpoint |

Automatic failover: If the primary provider is unavailable, Sulaert automatically retries with a fallback provider.
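Sulaert's internals aren't shown in these docs, but retry-with-fallback behavior of this kind can be sketched as follows (the provider names and error type below are illustrative stand-ins, not Wireclaw APIs):

```python
def call_with_failover(prompt, providers):
    """Try each (name, call) provider in order; return the first success."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except RuntimeError as err:  # stand-in for a provider/network error
            last_error = err
    raise RuntimeError("all providers failed") from last_error
```

With a primary provider that errors and a healthy fallback, the fallback's response is returned and the caller never sees the primary's failure.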

Hint-based routing: Use hints to influence provider selection:

```toml
[agent]
model = "hint:fast"  # Routes to the fastest available model
```
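One way to picture hint resolution is a lookup over per-model latency data; note that the latency figures and the resolution function below are hypothetical, and only the `fast` hint is documented here:

```python
# Hypothetical latency table for illustration; real routing data lives in Sulaert.
MODEL_LATENCY_MS = {
    "claude-haiku-3-5": 400,
    "claude-sonnet-4-5": 900,
    "claude-opus-4-5": 2000,
}

def resolve_model(spec):
    """Resolve 'hint:fast' to the lowest-latency model; pass explicit IDs through."""
    if spec == "hint:fast":
        return min(MODEL_LATENCY_MS, key=MODEL_LATENCY_MS.get)
    return spec
```

An explicit model ID like `"gpt-4o"` bypasses the hint table entirely, so hints and pinned models can coexist in different configs.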

Provider timeout: Configure the maximum wait time for a provider response:

```toml
[advanced]
provider_timeout_secs = 120  # Default: 120 seconds
```
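The effect of a provider timeout can be sketched as a bounded wait on the request (an illustration only, not Wireclaw's implementation):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_timeout(call, timeout_secs=120.0):
    """Run a provider call, raising TimeoutError if it exceeds the budget."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call)
        try:
            return future.result(timeout=timeout_secs)
        except FutureTimeout:
            raise TimeoutError(f"no response within {timeout_secs}s") from None
```

A call that returns within the budget passes through unchanged; one that exceeds it surfaces as a `TimeoutError`, which is the point where failover can kick in.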

By default, Wireclaw uses platform API keys (costs included in your per-token pricing). To use your own provider keys:

```toml
[env]
ANTHROPIC_API_KEY = "sk-ant-..."
OPENAI_API_KEY = "sk-..."
```