Any LLM
Route your conversations to any AI model
Providers
- Anthropic (Claude): Speed 72%, Quality 95%, Cost 62%
- OpenAI (GPT-4o): Speed 75%, Quality 93%, Cost 58%
- Google (Gemini): Speed 68%, Quality 90%, Cost 45%
- Mistral (Mistral): Speed 70%, Quality 88%, Cost 40%
- Meta (Llama): Speed 88%, Quality 82%, Cost 5%
- Groq (Fast inference): Speed 99%, Quality 82%, Cost 4%
Active model
No provider selected. Click ○ on any provider card to activate it.
Routing rules
- Use fastest for chat: route conversational messages to the model with the highest speed score.
- Use best for tasks: route complex tasks to the model with the highest quality score.
- Use cheapest for research: route long research sessions to the most cost-effective model.
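The three routing rules above can be sketched as a small selection function. This is a minimal illustration, not the app's actual implementation: the `Provider` shape, the `Intent` type, and the `route` helper are assumptions, with scores taken from the provider cards above.

```typescript
// Illustrative sketch of the routing rules; field and function names are
// assumptions, score values come from the provider cards.
interface Provider {
  name: string;
  model: string;
  speed: number;   // 0-100, higher is faster
  quality: number; // 0-100, higher is better
  cost: number;    // 0-100, higher is more expensive
}

const PROVIDERS: Provider[] = [
  { name: "Anthropic", model: "Claude",         speed: 72, quality: 95, cost: 62 },
  { name: "OpenAI",    model: "GPT-4o",         speed: 75, quality: 93, cost: 58 },
  { name: "Google",    model: "Gemini",         speed: 68, quality: 90, cost: 45 },
  { name: "Mistral",   model: "Mistral",        speed: 70, quality: 88, cost: 40 },
  { name: "Meta",      model: "Llama",          speed: 88, quality: 82, cost: 5 },
  { name: "Groq",      model: "Fast inference", speed: 99, quality: 82, cost: 4 },
];

type Intent = "chat" | "task" | "research";

// Pick the provider with the highest value for a given score.
function best(key: "speed" | "quality", list: Provider[]): Provider {
  return list.reduce((a, b) => (b[key] > a[key] ? b : a));
}

// Pick the provider with the lowest cost score.
function cheapest(list: Provider[]): Provider {
  return list.reduce((a, b) => (b.cost < a.cost ? b : a));
}

// Apply the rules: fastest for chat, best quality for tasks,
// cheapest for research.
function route(intent: Intent, list: Provider[] = PROVIDERS): Provider {
  switch (intent) {
    case "chat":     return best("speed", list);
    case "task":     return best("quality", list);
    case "research": return cheapest(list);
  }
}
```

With the card scores above, `route("chat")` and `route("research")` both land on Groq (highest speed, lowest cost), while `route("task")` lands on Anthropic (highest quality).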
meok_llm_router_config
Stored in localStorage. Synced across sessions.
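Persistence under the `meok_llm_router_config` key might look like the sketch below. The `RouterConfig` fields are hypothetical, and an in-memory `Map` stands in for the browser's `window.localStorage` so the sketch runs anywhere.

```typescript
// Hypothetical config shape; only the storage key comes from the page.
interface RouterConfig {
  activeProvider: string | null;
  rules: { chat: string; task: string; research: string };
}

const KEY = "meok_llm_router_config";

// Stand-in for window.localStorage, which exposes the same
// string-in/string-out contract via setItem/getItem.
const storage = new Map<string, string>();

// Serialize the config to JSON and store it under the key.
function saveConfig(cfg: RouterConfig): void {
  storage.set(KEY, JSON.stringify(cfg));
}

// Load and parse the config, or return null if nothing is stored.
function loadConfig(): RouterConfig | null {
  const raw = storage.get(KEY);
  return raw ? (JSON.parse(raw) as RouterConfig) : null;
}
```

In a browser build, `storage.set`/`storage.get` would be replaced by `localStorage.setItem`/`localStorage.getItem`; JSON serialization is needed either way, since localStorage only holds strings.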