Model Routing

Provider-agnostic AI execution. Choose the best model for each use case and switch providers without changing a line of code or breaking integrations.

What is model routing?

Model routing lets you choose which LLM provider powers your node — and change it at any time without modifying your integration. Interlocute provides a unified API surface that abstracts away provider-specific differences in request format, authentication, and response structure.

Why it matters

LLM providers evolve rapidly. New models launch, pricing changes, and quality varies by use case. Being locked into a single provider means you cannot optimize for cost, latency, or capability. Model routing gives you the freedom to choose the best model for each node without rewriting your application.

How Interlocute helps

Configure the model on your node through the dashboard or API. Interlocute handles the translation layer — your application always calls the same endpoint with the same request format. When you switch from one provider to another, your integration code does not change.
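As a rough sketch of this idea (the endpoint path, payload fields, and the `set_node_model` helper below are illustrative assumptions, not Interlocute's documented API): the request your application builds never names a provider or model — that lives in node configuration, so switching models leaves the request-building code untouched.

```python
# Illustrative sketch only: the URL shape, payload fields, and helper
# names here are assumptions, not Interlocute's documented API.

def build_chat_request(node_id: str, thread_id: str, message: str) -> dict:
    """Build the request the application sends. No provider or model
    name appears here; the model lives in the node's configuration."""
    return {
        "url": f"https://api.interlocute.example/nodes/{node_id}"
               f"/threads/{thread_id}/messages",
        "body": {"role": "user", "content": message},
    }

def set_node_model(config: dict, node_id: str, model: str) -> dict:
    """Switching providers is a configuration change, made through the
    dashboard or API, separate from the request-building code above."""
    new_config = dict(config)
    new_config[node_id] = model
    return new_config

# The request is byte-for-byte identical before and after a model switch.
config = {"support-bot": "gpt-4o"}
before = build_chat_request("support-bot", "t-1", "Hello")
config = set_node_model(config, "support-bot", "some-other-model")
after = build_chat_request("support-bot", "t-1", "Hello")
assert before == after
```

The point of the sketch is the separation of concerns: integration code depends only on the node's endpoint and contract, while the model choice is state held by the platform.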

Consistent contract

Regardless of the underlying model, your node exposes the same API contract: threads, streaming, tool use, and memory all work identically. This consistency makes it safe to experiment with different models in production without risking integration stability.

Frequently Asked Questions

What is model routing and why does it matter?
Model routing allows you to choose and switch the underlying LLM provider for each node without changing your application code. It matters because it protects you from vendor lock-in and lets you optimize each node for cost, latency, or capability independently.
Which LLM providers does Interlocute support?
Interlocute supports models from major LLM providers, including OpenAI. The platform is designed for provider expansion — new models are added as they become available. Check the docs for the current list of supported models.
Can I switch models without breaking my integration?
Yes. Your node's API endpoint, request format, and response structure remain the same regardless of the underlying model. Switching providers is a configuration change that takes effect immediately.
Can different nodes use different models?
Yes. Each node is independently configured. You can run one node on GPT-4o for complex reasoning and another on a smaller model for simple classification — each optimized for its specific use case.
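A minimal sketch of per-node routing (the node names and the shape of the routing table are hypothetical, chosen only to illustrate the answer above):

```python
# Illustrative sketch: node names and the routing-table shape are
# assumptions, not Interlocute's configuration format.

NODE_MODELS = {
    "reasoning-agent": "gpt-4o",         # complex reasoning
    "ticket-classifier": "gpt-4o-mini",  # simple classification
}

def model_for_node(node_id: str) -> str:
    """Each node resolves its own model independently, so one node can
    be reconfigured without touching the others."""
    return NODE_MODELS[node_id]
```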
Does model routing affect streaming and tool use?
No. Streaming, tool use, memory, and all other node features work consistently regardless of the selected model. Interlocute normalizes provider-specific behaviors behind a unified contract.
How does model routing affect pricing?
Interlocute applies a consistent markup on top of the underlying model's token pricing. When you switch models, your per-token cost changes to reflect the new model's pricing, but the billing structure remains the same.
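The billing structure described above can be expressed as a simple formula (the numbers and the markup rate below are made up for illustration; Interlocute's actual rates are not stated here):

```python
# Illustrative only: markup rate and per-token prices are placeholder
# values, not Interlocute's actual pricing.

def cost_usd(tokens: int, provider_price_per_1k: float, markup: float) -> float:
    """The structure stays the same across models; switching models only
    changes provider_price_per_1k, not the formula."""
    return tokens / 1000 * provider_price_per_1k * (1 + markup)
```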

Ready to build with Model Routing?

Deploy your node in seconds and start using Model Routing today.