Wireclaw routes all LLM requests through Sulaert, an internal model router supporting 13+ providers. Switch models by changing one line in your config; no code changes needed.
```toml
model = "claude-sonnet-4-5" # Change this to switch models
```
**Anthropic**

| Model ID | Best for |
| --- | --- |
| `claude-opus-4-5` | Complex reasoning, analysis, long-form content |
| `claude-sonnet-4-5` | Best balance of speed and capability |
| `claude-haiku-3-5` | Fast responses, simple tasks, cost-efficient |

**OpenAI**

| Model ID | Best for |
| --- | --- |
| `gpt-4o` | Multimodal, general purpose |
| `gpt-4o-mini` | Fast, cost-efficient |
| `o3` | Advanced reasoning |

**Google**

| Model ID | Best for |
| --- | --- |
| `gemini-2.5-pro` | Long context, multimodal |
| `gemini-2.5-flash` | Fast, cost-efficient |

**DeepSeek**

| Model ID | Best for |
| --- | --- |
| `deepseek-r1` | Reasoning, math, code |
| `deepseek-chat` | General conversation |

**Mistral**

| Model ID | Best for |
| --- | --- |
| `mistral-large` | Complex tasks, multilingual |
| `mistral-small` | Fast, cost-efficient |

**xAI**

| Model ID | Best for |
| --- | --- |
| `grok-3` | General purpose, real-time knowledge |
| Provider | Setup |
| --- | --- |
| Ollama (local models) | Set `OLLAMA_HOST` env var |
| Azure OpenAI | Set `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` |
| AWS Bedrock | Set AWS credentials in env vars |
| Any OpenAI-compatible endpoint | Set `OPENAI_API_URL` to your endpoint |
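For local or self-hosted providers, the environment variables above are set before launching Wireclaw. A minimal sketch; the URLs are illustrative placeholders, not documented defaults:

```shell
# Point Sulaert at a local Ollama instance and a self-hosted
# OpenAI-compatible endpoint. Both URLs are placeholders; use
# the addresses of your own services.
export OLLAMA_HOST="http://localhost:11434"
export OPENAI_API_URL="https://llm.internal.example.com/v1"
```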
Automatic failover: If the primary provider is unavailable, Sulaert automatically retries with a fallback provider.
Hint-based routing: Use hints to influence provider selection:
```toml
model = "hint:fast" # Routes to the fastest available model
```
Provider timeout: Configure the maximum wait time for a provider response:
```toml
provider_timeout_secs = 120 # Default: 120 seconds
```
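The model selection and timeout options above go in the same config file. A hedged sketch of writing one from the shell; the filename `wireclaw.toml` is an assumption, not a documented path:

```shell
# Write a minimal Wireclaw config that combines hint-based model
# selection with a shorter provider timeout. The filename is
# assumed; check your installation for the actual config path.
cat > wireclaw.toml <<'EOF'
model = "hint:fast"         # or a concrete ID such as "claude-sonnet-4-5"
provider_timeout_secs = 60  # override the 120-second default
EOF
```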
**Caution:** Not all providers support all features. Support for vision, tool use, and extended thinking varies by model; check provider docs for specifics.
By default, Wireclaw uses platform API keys (costs included in your per-token pricing). To use your own provider keys:
```
ANTHROPIC_API_KEY = "sk-ant-..."
OPENAI_API_KEY = "sk-..."
```