MCP and function calling allow AI systems to call external services and retrieve information from them. Using our MCP and tool-calling annotations, developers can call external services directly from Python.

How does function calling work?

Function calling lets your AI reach out to external services, perform calculations, and access real-time data, all while holding a conversation. Use it to teach your AI to use tools, such as a calculator, a web search, or a weather service. Here’s the flow behind function calling:
  1. You define functions - These are tools you define for the AI (LLM/Realtime) to use
  2. The AI decides when to use them - Based on what the user asks, the AI figures out which tools it needs
  3. Functions get called automatically - The AI executes the right function with the right parameters
  4. Results flow back naturally - The AI incorporates the results into its response
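The four steps above can be sketched as a toy loop. This is not the library's internals: the model's decision is simulated with a canned choice, purely to show how a tool call and its result flow through a turn.

```python
# 1. You define functions and register them in a tool registry.
def get_weather(location: str) -> dict:
    return {"location": location, "condition": "Sunny"}

TOOLS = {"get_weather": get_weather}

# 2. The model decides which tool to call (simulated here as a canned choice).
def model_decides(user_message: str) -> dict:
    return {"tool": "get_weather", "arguments": {"location": "New York"}}

def handle(user_message: str) -> str:
    call = model_decides(user_message)
    # 3. The function gets called with the model-supplied parameters.
    result = TOOLS[call["tool"]](**call["arguments"])
    # 4. The result flows back into the model's final response.
    return f"The weather in {result['location']} is {result['condition']}."

print(handle("What's the weather like in New York?"))
```

In the real library, steps 2 and 4 happen inside the LLM provider; you only supply step 1.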

Defining functions for agents to use

Making functions available to your AI agent is straightforward. You simply use the @llm.register_function() decorator on any Python function you want the AI to be able to call.

Basic Function Registration

from vision_agents.plugins import openai

# Create your LLM
llm = openai.LLM(model="gpt-4o-mini")

# Register a simple function
@llm.register_function(description="Get current weather for a location")
def get_weather(location: str) -> dict:
    """Get the current weather for a location."""
    # Your weather API call here
    return {
        "location": location,
        "temperature": "22°C",
        "condition": "Sunny",
        "humidity": "65%"
    }

# Register a calculation function
@llm.register_function(description="Calculate the sum of two numbers")
def calculate_sum(a: int, b: int) -> int:
    """Calculate the sum of two numbers."""
    return a + b

Function Parameters and Types

Your functions can have any combination of parameters, and the AI will automatically understand how to use them:
@llm.register_function(description="Search for products with filters")
def search_products(
    query: str,                    # Required parameter
    category: str = "all",         # Optional with default
    min_price: float = 0.0,        # Optional with default
    max_price: float = 1000.0,     # Optional with default
    in_stock: bool = True          # Optional with default
) -> list:
    """Search for products with various filters."""
    # Your search logic here
    return [
        {"name": "Product 1", "price": 29.99, "category": "electronics"},
        {"name": "Product 2", "price": 15.50, "category": "books"}
    ]

Custom Function Names

Sometimes you want a function to have a different name when the AI calls it:
@llm.register_function(
    name="check_user_permissions",
    description="Check if a user has specific permissions"
)
def verify_user_access(user_id: str, permission: str) -> bool:
    """Internal function for checking user permissions."""
    # Your permission logic here
    return True

Using Functions in Conversation

Once registered, your functions are automatically available during conversations:
# The AI will automatically call get_weather("New York") when asked about weather
response = await llm.simple_response("What's the weather like in New York?")
print(response.text)
# Output: "The weather in New York is currently Sunny with a temperature of 22°C and 65% humidity."

# The AI can also combine multiple function calls
response = await llm.simple_response("What's the weather in London and calculate 100 + 200?")
# This will call both get_weather("London") and calculate_sum(100, 200)

Overview of an MCP Server

MCP (Model Context Protocol) servers are specialized toolboxes that provide your AI agent with access to external services and data sources.

What is an MCP Server?

An MCP server is essentially a service that:
  • Exposes tools - Functions that your AI can call
  • Provides resources - Data sources your AI can read from
  • Offers prompts - Pre-built conversation starters or templates
  • Handles authentication - Manages access to external services
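Under the hood, tools are discovered over JSON-RPC. A "tools/list" response has roughly the shape below (field names follow the MCP spec; the weather tool itself is a made-up example):

```python
# Illustrative shape of an MCP "tools/list" response (JSON-RPC 2.0).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Get current weather for a location",
                "inputSchema": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            }
        ]
    },
}

# The agent reads this list to learn which tools it can call.
tool_names = [tool["name"] for tool in response["result"]["tools"]]
```

Each tool's `inputSchema` is a JSON Schema describing its parameters, which is how the LLM knows what arguments to supply.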

Types of MCP Servers

Local MCP Servers run on your machine and communicate via standard input/output:
from vision_agents.core.mcp import MCPServerLocal

# Connect to a local MCP server
local_server = MCPServerLocal(
    command="python my_mcp_server.py",
    session_timeout=300.0
)
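To make the stdio transport concrete, here is a minimal sketch of what a local server like `my_mcp_server.py` does: read one JSON-RPC request per line, dispatch it, and write the response. A real server would use the official MCP SDK; this only illustrates the request/response shape, and the `get_time` tool is invented for the example.

```python
import json
import sys

# An invented tool listing, for illustration only.
TOOLS = [
    {"name": "get_time", "description": "Get the current time",
     "inputSchema": {"type": "object", "properties": {}}}
]

def dispatch(request: dict) -> dict:
    # Answer the "tools/list" discovery request; reject anything else.
    if request.get("method") == "tools/list":
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"tools": TOOLS}}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # The stdio loop: one JSON-RPC message per line, in and out.
    for line in stdin:
        stdout.write(json.dumps(dispatch(json.loads(line))) + "\n")
        stdout.flush()
```

Because the transport is just stdin/stdout, the agent can launch the server as a subprocess via the `command` argument shown above.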
Remote MCP Servers run on other machines and communicate over HTTP:
from vision_agents.core.mcp import MCPServerRemote

# Connect to a remote MCP server
remote_server = MCPServerRemote(
    url="https://api.example.com/mcp/",
    headers={"Authorization": "Bearer your-token"},
    timeout=30.0,
    session_timeout=300.0
)

What MCP Servers Can Do

MCP servers can provide your AI with access to:
  • Database queries - Search and retrieve data
  • API integrations - Connect to external services
  • File operations - Read and write files
  • System commands - Execute shell commands
  • Web scraping - Extract data from websites
  • Authentication - Handle login and permissions

Connecting an MCP Server

Connecting an MCP server to your AI agent is a breeze. You simply pass it to your agent when you create it, and the agent handles all the connection details automatically.

Basic MCP Server Connection

from vision_agents.core.agents import Agent
from vision_agents.core.mcp import MCPServerRemote
from vision_agents.plugins import openai, getstream
from vision_agents.core.edge.types import User

# Create your MCP server
github_server = MCPServerRemote(
    url="https://api.githubcopilot.com/mcp/",
    headers={"Authorization": f"Bearer {github_pat}"},  # github_pat: your GitHub personal access token
    timeout=10.0,
    session_timeout=300.0
)

# Create your LLM
llm = openai.LLM(model="gpt-4o-mini")

# Create your agent with the MCP server
agent = Agent(
    edge=getstream.Edge(),
    llm=llm,
    agent_user=User(name="AI Assistant", id="assistant"),
    instructions="You are a helpful AI assistant with access to GitHub tools.",
    mcp_servers=[github_server]  # Pass your MCP server here
)

Multiple MCP Servers

You can connect multiple MCP servers to give your agent access to different services:
# Create multiple MCP servers
github_server = MCPServerRemote(url="https://api.githubcopilot.com/mcp/", ...)
weather_server = MCPServerRemote(url="https://api.weather.com/mcp/", ...)
database_server = MCPServerLocal(command="python db_mcp_server.py")

# Connect them all to your agent
agent = Agent(
    edge=getstream.Edge(),
    llm=llm,
    agent_user=User(name="Multi-Tool Assistant", id="assistant"),
    instructions="You have access to GitHub, weather, and database tools.",
    mcp_servers=[github_server, weather_server, database_server]
)

Automatic Tool Registration

When you connect an MCP server, the agent automatically:
  1. Connects to the server - Establishes communication
  2. Discovers available tools - Finds out what functions are available
  3. Registers them with the LLM - Makes them available for function calling
  4. Handles errors gracefully - Continues working even if some servers fail
You don’t need to do anything special: just create your agent and start using the tools.
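Those four steps can be sketched roughly as follows. Note that `FakeServer`, `FakeLLM`, and the method names are illustrative stand-ins, not the library's real API:

```python
import asyncio

class FakeServer:
    """Stand-in for an MCP server connection (illustrative only)."""
    def __init__(self, tools, fail=False):
        self.tools, self.fail = tools, fail
    async def connect(self):
        if self.fail:
            raise ConnectionError("server unreachable")
    async def list_tools(self):
        return self.tools

class FakeLLM:
    """Stand-in for the LLM's tool registry (illustrative only)."""
    def __init__(self):
        self.registered = []
    def register_tool(self, tool):
        self.registered.append(tool)

async def register_mcp_tools(llm, servers):
    for server in servers:
        try:
            await server.connect()              # 1. establish communication
        except ConnectionError:
            continue                            # 4. skip failed servers, keep going
        for tool in await server.list_tools():  # 2. discover available tools
            llm.register_tool(tool)             # 3. make them callable by the LLM

llm = FakeLLM()
asyncio.run(register_mcp_tools(
    llm,
    [FakeServer(["search_issues"]), FakeServer(["query_db"], fail=True)],
))
print(llm.registered)  # only tools from reachable servers survive
```

The key behavior to notice is step 4: an unreachable server is skipped rather than crashing the agent, so the remaining tools still work.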

Real-World Example

Here’s a complete example that brings everything together:
import asyncio
import os
from vision_agents.core.agents import Agent
from vision_agents.core.mcp import MCPServerRemote
from vision_agents.plugins import openai, getstream
from vision_agents.core.edge.types import User

async def main():
    # Create MCP servers
    github_server = MCPServerRemote(
        url="https://api.githubcopilot.com/mcp/",
        headers={"Authorization": f"Bearer {os.getenv('GITHUB_PAT')}"},
        timeout=10.0,
        session_timeout=300.0
    )
    
    # Create LLM with custom functions
    llm = openai.LLM(model="gpt-4o-mini")
    
    @llm.register_function(description="Get current time")
    def get_current_time() -> str:
        from datetime import datetime
        return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    
    # Create agent
    agent = Agent(
        edge=getstream.Edge(),
        llm=llm,
        agent_user=User(name="GitHub Assistant", id="github-assistant"),
        instructions="You are a helpful GitHub assistant with access to repositories, issues, and more.",
        mcp_servers=[github_server]
    )
    
    # Use the agent
    response = await agent.simple_response(
        "What repositories do I have and what time is it?"
    )
    print(response.text)

if __name__ == "__main__":
    asyncio.run(main())
This example shows how you can combine custom functions with MCP servers to create a powerful AI assistant that can access both local functionality and external services.