OpenRouter is a unified API that provides access to multiple LLM providers through a single OpenAI-compatible interface. It enables developers to easily switch between different models from various providers (Anthropic, Google, OpenAI, and more) without changing their code.
The OpenRouter plugin in the Vision Agents SDK provides LLM capabilities using OpenRouter’s API, making it easy to use any supported model in your voice and video agents.
## Features
- Access to multiple LLM providers through a single API
- OpenAI-compatible interface for easy integration
- Support for various models including Claude, Gemini, GPT, and more
- Automatic conversion of instructions to system messages
- Manual conversation history management
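The last two features can be illustrated conceptually: the agent's `instructions` become a system message, and manually tracked history is replayed before each request. The sketch below is a hypothetical illustration of that shape, not the plugin's actual internals:

```python
def build_messages(instructions, history, user_text):
    """Hypothetical sketch: instructions become the system message."""
    # The agent's instructions are converted into a system message.
    messages = [{"role": "system", "content": instructions}]
    # Manually managed conversation history is replayed next.
    messages.extend(history)
    # The newest user utterance goes last.
    messages.append({"role": "user", "content": user_text})
    return messages
```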
## Installation

Install the OpenRouter plugin with:

```bash
uv add "vision-agents[openrouter]"
```
## Quick Start

```python
from vision_agents.core import User, Agent
from vision_agents.plugins import openrouter, getstream, elevenlabs, deepgram, smart_turn

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="OpenRouter AI", id="agent"),
    instructions="Be helpful and friendly to the user",
    llm=openrouter.LLM(
        model="anthropic/claude-haiku-4.5",
    ),
    tts=elevenlabs.TTS(),
    stt=deepgram.STT(),
    turn_detection=smart_turn.TurnDetection(),
)
```
To initialize without passing in the API key, make sure `OPENROUTER_API_KEY` is available as an environment variable. You can set it either by defining it in a `.env` file or by exporting it directly in your terminal.
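For example (using a placeholder value in place of a real key):

```shell
# In a .env file in your project root:
OPENROUTER_API_KEY=your-api-key

# Or exported directly in your terminal:
export OPENROUTER_API_KEY=your-api-key
```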
## Example
Check out our OpenRouter example to see a practical implementation of the plugin.
## Initialization

The OpenRouter plugin provides the `LLM` class:

```python
from vision_agents.plugins import openrouter

llm = openrouter.LLM()

# Or with custom configuration
llm = openrouter.LLM(
    api_key="your-api-key",
    model="anthropic/claude-haiku-4.5",
)
```
### Parameters

| Name | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` or `None` | `None` | OpenRouter API key. If not provided, the `OPENROUTER_API_KEY` environment variable is used. |
| `base_url` | `str` | `"https://openrouter.ai/api/v1"` | OpenRouter API base URL. |
| `model` | `str` | `"openrouter/andromeda-alpha"` | Model identifier to use. |
| `**kwargs` | - | - | Additional arguments passed to the underlying OpenAI LLM implementation. |
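The `api_key` fallback described above follows a common resolution pattern. The helper below is an illustrative sketch of that order (explicit argument first, then the environment variable), not the plugin's actual implementation:

```python
import os


def resolve_api_key(api_key=None):
    # An explicitly passed key wins; otherwise fall back to the
    # OPENROUTER_API_KEY environment variable.
    key = api_key or os.environ.get("OPENROUTER_API_KEY")
    if key is None:
        raise ValueError(
            "Provide api_key or set the OPENROUTER_API_KEY environment variable"
        )
    return key
```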
## Available Models

OpenRouter provides access to models from multiple providers. Here are some popular options:

| Provider | Model ID | Description |
|---|---|---|
| Anthropic | anthropic/claude-haiku-4.5 | Fast and efficient Claude model |
| Anthropic | anthropic/claude-sonnet-4 | Balanced Claude model |
| Anthropic | anthropic/claude-opus-4 | Most capable Claude model |
| Google | google/gemini-2.5-flash | Fast Gemini model |
| Google | google/gemini-2.5-pro | Advanced Gemini model |
| OpenAI | openai/gpt-4o | GPT-4o model |
| OpenAI | openai/gpt-4o-mini | Smaller GPT-4o model |
| OpenRouter | openrouter/andromeda-alpha | OpenRouter’s own model |
For a complete list of available models, visit [OpenRouter Models](https://openrouter.ai/models).
## Usage with Agent

Here's a complete example using OpenRouter with a voice agent:

```python
import logging

from dotenv import load_dotenv

from vision_agents.core import User, Agent, cli
from vision_agents.core.agents import AgentLauncher
from vision_agents.plugins import openrouter, getstream, elevenlabs, deepgram

logger = logging.getLogger(__name__)
load_dotenv()


async def create_agent(**kwargs) -> Agent:
    agent = Agent(
        edge=getstream.Edge(),
        agent_user=User(name="AI Assistant", id="agent"),
        instructions="You are a helpful AI assistant. Be friendly and conversational.",
        llm=openrouter.LLM(model="anthropic/claude-haiku-4.5"),
        tts=elevenlabs.TTS(),
        stt=deepgram.STT(),
    )
    return agent


async def join_call(agent: Agent, call_type: str, call_id: str, **kwargs) -> None:
    await agent.create_user()
    call = await agent.create_call(call_type, call_id)
    logger.info("🤖 Starting OpenRouter Agent...")
    with await agent.join(call):
        await agent.edge.open_demo(call)
        await agent.finish()


if __name__ == "__main__":
    cli(AgentLauncher(create_agent=create_agent, join_call=join_call))
```
## Switching Models

One of the key benefits of OpenRouter is the ability to switch between models easily:

```python
from vision_agents.plugins import openrouter

# Use Claude for complex reasoning
llm_claude = openrouter.LLM(model="anthropic/claude-sonnet-4")

# Use GPT-4o for general tasks
llm_gpt = openrouter.LLM(model="openai/gpt-4o")

# Use Gemini for multimodal tasks
llm_gemini = openrouter.LLM(model="google/gemini-2.5-pro")
```
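Because the model choice is just a string, task-based routing can be expressed as plain data. The routing table below is a hypothetical example using the model IDs listed earlier:

```python
# Hypothetical task-to-model routing table; the model IDs come
# from the Available Models table above.
TASK_MODELS = {
    "reasoning": "anthropic/claude-sonnet-4",
    "general": "openai/gpt-4o",
    "multimodal": "google/gemini-2.5-pro",
}


def model_for_task(task: str) -> str:
    # Fall back to a fast, inexpensive default for unknown tasks.
    return TASK_MODELS.get(task, "anthropic/claude-haiku-4.5")
```

The resulting ID can then be passed straight to `openrouter.LLM(model=...)`.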
## Links