For a conceptual overview of MCP, see Model Context Protocol.
## Registering Functions
Use `@llm.register_function()` to make any Python function callable by the LLM.
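A minimal sketch (the `openai.LLM` constructor and plugin import are assumptions; presumably the docstring and type hints become the tool's description and parameter schema):

```python
from vision_agents.plugins import openai

llm = openai.LLM(model="gpt-4o-mini")

@llm.register_function()
def get_weather(city: str) -> dict:
    """Look up the current weather for a city."""
    # Replace with a real weather API call.
    return {"city": city, "temperature_c": 21, "conditions": "sunny"}
```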
### Parameters and Types
Functions support any combination of required and optional parameters.
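For example (a sketch continuing the `llm` instance above; parameters without defaults are presumably marked required in the generated schema):

```python
@llm.register_function()
def search_messages(
    query: str,                      # required: no default value
    limit: int = 10,                 # optional: default supplied
    include_archived: bool = False,  # optional flag
) -> list:
    """Search chat messages matching a query."""
    results = [{"query": query, "archived": include_archived}]
    return results[:limit]
```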
### Custom Function Names

Override the function name exposed to the LLM.
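A sketch, assuming the decorator accepts a `name` keyword:

```python
@llm.register_function(name="fetch_user_profile")
def _profile_impl(user_id: str) -> dict:
    """Fetch a user's profile by ID."""
    return {"id": user_id, "name": "Ada Lovelace"}
```

The model now sees the tool as `fetch_user_profile` regardless of the Python function's name.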
## MCP Servers

MCP servers provide your agent with access to external tools and services. Vision Agents supports both local and remote servers.
### Local Servers

Run on your machine via stdio.
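A sketch using the official filesystem MCP server; the `MCPServerStdio` class name, import path, and constructor arguments are assumptions:

```python
from vision_agents.core.mcp import MCPServerStdio  # import path is an assumption

filesystem_server = MCPServerStdio(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)
```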
### Remote Servers

Connect over HTTP.
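A sketch along the same lines (the `MCPServerRemote` name and the endpoint are assumptions):

```python
from vision_agents.core.mcp import MCPServerRemote  # class name is an assumption

docs_server = MCPServerRemote(
    url="https://mcp.example.com/mcp",            # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},  # e.g. for authenticated services
)
```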
### Connecting to Agent

Pass MCP servers to your agent; tools are automatically discovered and registered.
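A sketch reusing the servers above (the `mcp_servers` parameter name and the `getstream.Edge()` transport are assumptions):

```python
from vision_agents.core import Agent
from vision_agents.plugins import getstream, openai

agent = Agent(
    edge=getstream.Edge(),
    llm=openai.LLM(model="gpt-4o-mini"),
    mcp_servers=[filesystem_server],  # tools are discovered and registered on startup
)
```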
### Multiple Servers

Connect multiple MCP servers for access to different services.
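Continuing the sketch, the agent simply takes a list:

```python
agent = Agent(
    edge=getstream.Edge(),
    llm=openai.LLM(model="gpt-4o-mini"),
    mcp_servers=[filesystem_server, docs_server],  # tools from every server are merged
)
```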
## Complete Example

Combining registered functions with MCP servers:
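(As above, the plugin constructors, `MCPServerStdio`, and the `mcp_servers` parameter are assumptions.)

```python
from vision_agents.core import Agent
from vision_agents.core.mcp import MCPServerStdio  # import path is an assumption
from vision_agents.plugins import getstream, openai

llm = openai.LLM(model="gpt-4o-mini")

@llm.register_function()
def get_weather(city: str) -> dict:
    """Look up the current weather for a city."""
    return {"city": city, "temperature_c": 21, "conditions": "sunny"}

filesystem_server = MCPServerStdio(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)

agent = Agent(
    edge=getstream.Edge(),
    llm=llm,
    mcp_servers=[filesystem_server],
)
# The model can now call get_weather() plus every tool the MCP server exposes.
```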
## Implementing Function Calling in Custom LLMs

When building a custom LLM plugin, you inherit function calling support from the base `LLM` class. The base class provides a `FunctionRegistry` that stores registered functions and handles execution.
### How the Function Registry Works
The base `LLM` class (source) provides the `FunctionRegistry`, which stores each registered function alongside its generated schema and executes it when the model requests a tool call.
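In practice a plugin only registers functions and the base class does the bookkeeping. A sketch (the `function_registry` attribute and `list_tools()` accessor are hypothetical):

```python
@llm.register_function()
def ping() -> str:
    """Health-check tool."""
    return "pong"

# The base class stores the registration in its FunctionRegistry;
# the accessor names below are assumptions.
for tool in llm.function_registry.list_tools():
    print(tool.name, tool.description)
```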
### Key Types
The function registry uses these types from `vision_agents.core.llm.llm_types`:
| Type | Fields | Description |
|---|---|---|
| `ToolSchema` | `name`, `description`, `parameters_schema` | Function definition for LLMs |
| `NormalizedToolCallItem` | `type`, `name`, `arguments_json`, `id` | Standardized tool call format |
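For instance, a registered function ends up represented roughly like this (field names are from the table above; treating `ToolSchema` as keyword-constructible is an assumption):

```python
from vision_agents.core.llm.llm_types import ToolSchema

schema = ToolSchema(
    name="get_weather",
    description="Look up the current weather for a city.",
    parameters_schema={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
```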
### Required Overrides
To enable function calling, override the provider-specific conversion methods in your custom LLM.
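Typically that means converting `ToolSchema` objects into the provider's tool format and normalizing the provider's tool calls back into `NormalizedToolCallItem`. A sketch with hypothetical method names (the base-class import path is also an assumption):

```python
from vision_agents.core.llm.llm import LLM  # import path is an assumption
from vision_agents.core.llm.llm_types import NormalizedToolCallItem, ToolSchema

class MyLLM(LLM):
    def _convert_tools_to_provider_format(self, tools: list[ToolSchema]) -> list[dict]:
        # Hypothetical override: map normalized schemas onto the
        # provider's function-calling payload.
        return [
            {
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description,
                    "parameters": t.parameters_schema,
                },
            }
            for t in tools
        ]

    def _extract_tool_calls(self, response) -> list[NormalizedToolCallItem]:
        # Hypothetical override: normalize the provider's tool calls.
        return [
            NormalizedToolCallItem(
                type="tool_call",
                name=call.name,
                arguments_json=call.arguments,
                id=call.id,
            )
            for call in response.tool_calls  # provider-specific response shape
        ]
```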
### Using Tools in Your LLM

In your `simple_response()` or equivalent method:
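Continuing the `MyLLM` sketch (the registry accessor, the provider call, and the `_execute_tools` helper are all assumptions):

```python
class MyLLM(LLM):
    async def simple_response(self, text: str):
        # 1. Hand the registered tool schemas to the provider
        #    (get_available_functions is an assumed registry accessor).
        tools = self._convert_tools_to_provider_format(self.get_available_functions())
        response = await self._call_provider(text, tools=tools)  # hypothetical provider call

        # 2. Normalize any tool calls the model made.
        tool_calls = self._extract_tool_calls(response)

        # 3. Let the base class execute them; it emits ToolStartEvent and
        #    ToolEndEvent around each call (_execute_tools is an assumed helper).
        if tool_calls:
            results = await self._execute_tools(tool_calls)
            response = await self._call_provider(text, tools=tools, tool_results=results)
        return response
```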
### Tool Execution Events
The base class automatically emits events when tools execute:

| Event | When | Fields |
|---|---|---|
| `ToolStartEvent` | Before tool runs | `tool_name`, `arguments`, `tool_call_id` |
| `ToolEndEvent` | After tool completes | `tool_name`, `success`, `result`/`error`, `execution_time_ms` |
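For example, to observe tool execution (the event import path and the `llm.events.subscribe` decorator are assumptions):

```python
from vision_agents.core.llm.events import ToolEndEvent, ToolStartEvent  # path is an assumption

@llm.events.subscribe
async def on_tool_start(event: ToolStartEvent):
    print(f"calling {event.tool_name} with {event.arguments}")

@llm.events.subscribe
async def on_tool_end(event: ToolEndEvent):
    print(f"{event.tool_name} done in {event.execution_time_ms}ms (success={event.success})")
```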
### Multi-Round Tool Calling
The OpenAI plugin supports multiple tool-calling rounds. If the model needs to call more tools after seeing results, it can continue for up to `max_tool_rounds` (default: 3).
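For example (passing `max_tool_rounds` through the constructor is an assumption):

```python
# Allow up to five rounds of tool calls before the model must answer.
llm = openai.LLM(model="gpt-4o-mini", max_tool_rounds=5)
```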

