Introduction

Function calling transforms LLMs from text generators into action-taking agents. Instead of just producing text responses, models can now decide when to call external functions, APIs, or tools to accomplish tasks. This capability enables building assistants that can search the web, query databases, send emails, execute code, and interact with any system that exposes an API. This guide covers the fundamentals of function calling: defining tool schemas that models understand, handling function call responses, implementing robust error handling, managing multi-step tool chains, and building production-ready tool systems. Whether you’re using OpenAI’s function calling, Anthropic’s tool use, or open-source alternatives, these patterns will help you build reliable, capable AI systems that can take real-world actions.

Tool Definition and Schema
from dataclasses import dataclass
from typing import Any, Optional, Callable, get_type_hints
from enum import Enum
import inspect
import json

class ParameterType(Enum):
    """JSON Schema parameter types."""
    STRING = "string"
    NUMBER = "number"
    INTEGER = "integer"
    BOOLEAN = "boolean"
    ARRAY = "array"
    OBJECT = "object"

@dataclass
class Parameter:
    """Tool parameter definition."""
    name: str
    type: ParameterType
    description: str
    required: bool = True
    enum: Optional[list] = None
    default: Any = None
    items: Optional[dict] = None  # JSON Schema for array item types

@dataclass
class Tool:
    """A tool that can be called by the LLM."""
    name: str
    description: str
    parameters: list[Parameter]
    function: Callable
    def to_openai_schema(self) -> dict:
        """Convert to OpenAI function calling format."""
        properties = {}
        required = []
        for param in self.parameters:
            prop = {
                "type": param.type.value,
                "description": param.description
            }
            if param.enum:
                prop["enum"] = param.enum
            if param.items:
                prop["items"] = param.items
            properties[param.name] = prop
            if param.required:
                required.append(param.name)
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": {
                    "type": "object",
                    "properties": properties,
                    "required": required
                }
            }
        }

    def to_anthropic_schema(self) -> dict:
        """Convert to Anthropic tool use format."""
        properties = {}
        required = []
        for param in self.parameters:
            prop = {
                "type": param.type.value,
                "description": param.description
            }
            if param.enum:
                prop["enum"] = param.enum
            if param.items:
                prop["items"] = param.items
            properties[param.name] = prop
            if param.required:
                required.append(param.name)
        return {
            "name": self.name,
            "description": self.description,
            "input_schema": {
                "type": "object",
                "properties": properties,
                "required": required
            }
        }
def tool(name: str = None, description: str = None):
    """Decorator to create a Tool from a function."""
    def decorator(func: Callable) -> Tool:
        tool_name = name or func.__name__
        tool_description = description or func.__doc__ or ""
        # Extract parameters from type hints
        hints = get_type_hints(func)
        sig = inspect.signature(func)
        parameters = []
        for param_name, param in sig.parameters.items():
            if param_name == "self":
                continue
            # Map the Python annotation to a JSON Schema type
            param_type = hints.get(param_name, str)
            json_type = _python_type_to_json(param_type)
            # Get description from docstring (simplified)
            param_desc = f"Parameter {param_name}"
            # A parameter is required if it has no default value
            required = param.default is inspect.Parameter.empty
            parameters.append(Parameter(
                name=param_name,
                type=json_type,
                description=param_desc,
                required=required,
                default=None if required else param.default
            ))
        return Tool(
            name=tool_name,
            description=tool_description,
            parameters=parameters,
            function=func
        )
    return decorator

def _python_type_to_json(python_type) -> ParameterType:
    """Convert a Python type to its JSON Schema equivalent.
    Parameterized generics (e.g. list[dict]) fall back to STRING here."""
    type_map = {
        str: ParameterType.STRING,
        int: ParameterType.INTEGER,
        float: ParameterType.NUMBER,
        bool: ParameterType.BOOLEAN,
        list: ParameterType.ARRAY,
        dict: ParameterType.OBJECT
    }
    return type_map.get(python_type, ParameterType.STRING)
class ToolRegistry:
    """Registry of available tools."""

    def __init__(self):
        self.tools: dict[str, Tool] = {}

    def register(self, tool: Tool):
        """Register a tool."""
        self.tools[tool.name] = tool

    def get(self, name: str) -> Optional[Tool]:
        """Get a tool by name."""
        return self.tools.get(name)

    def list_tools(self) -> list[Tool]:
        """List all registered tools."""
        return list(self.tools.values())

    def to_openai_tools(self) -> list[dict]:
        """Get all tools in OpenAI format."""
        return [t.to_openai_schema() for t in self.tools.values()]

    def to_anthropic_tools(self) -> list[dict]:
        """Get all tools in Anthropic format."""
        return [t.to_anthropic_schema() for t in self.tools.values()]
# Example tools
@tool(name="search_web", description="Search the web for information")
async def search_web(query: str, num_results: int = 5) -> list[dict]:
    """Search the web and return results."""
    # Implementation would call an actual search API
    return [{"title": f"Result for {query}", "url": "https://example.com"}]

@tool(name="get_weather", description="Get current weather for a location")
async def get_weather(location: str, units: str = "celsius") -> dict:
    """Get weather information."""
    return {"location": location, "temperature": 22, "units": units}

@tool(name="send_email", description="Send an email to a recipient")
async def send_email(to: str, subject: str, body: str) -> dict:
    """Send an email."""
    return {"status": "sent", "to": to}
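To see the schema generation end to end, here is a short usage sketch built on the definitions above: it registers the example tools (each already a Tool, thanks to the decorator) and prints the generated OpenAI-format schema for one of them.

# Usage sketch: register the example tools and inspect a generated schema.
registry = ToolRegistry()
for t in (search_web, get_weather, send_email):  # each is a Tool via @tool
    registry.register(t)

print(json.dumps(registry.get("get_weather").to_openai_schema(), indent=2))
# -> {"type": "function", "function": {"name": "get_weather", ...,
#     "parameters": {"type": "object",
#                    "properties": {"location": ..., "units": ...},
#                    "required": ["location"]}}}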
Function Call Execution
from dataclasses import dataclass
from typing import Any, Optional
import asyncio
import json
import time

@dataclass
class FunctionCall:
    """A function call requested by the LLM."""
    id: str
    name: str
    arguments: dict

    @classmethod
    def from_openai(cls, tool_call: dict) -> 'FunctionCall':
        """Parse from OpenAI format (arguments arrive as a JSON string)."""
        return cls(
            id=tool_call["id"],
            name=tool_call["function"]["name"],
            arguments=json.loads(tool_call["function"]["arguments"])
        )

    @classmethod
    def from_anthropic(cls, tool_use: dict) -> 'FunctionCall':
        """Parse from Anthropic format (input is already a dict)."""
        return cls(
            id=tool_use["id"],
            name=tool_use["name"],
            arguments=tool_use["input"]
        )
@dataclass
class FunctionResult:
    """Result of a function execution."""
    call_id: str
    name: str
    result: Any = None
    error: Optional[str] = None
    execution_time_ms: float = 0

    def to_openai_message(self) -> dict:
        """Convert to an OpenAI tool result message."""
        # Check error explicitly: a falsy result (0, [], "") is still a valid result
        content = self.error if self.error is not None else json.dumps(self.result)
        return {
            "role": "tool",
            "tool_call_id": self.call_id,
            "content": content
        }

    def to_anthropic_message(self) -> dict:
        """Convert to an Anthropic tool_result block."""
        content = self.error if self.error is not None else json.dumps(self.result)
        return {
            "type": "tool_result",
            "tool_use_id": self.call_id,
            "content": content,
            "is_error": self.error is not None
        }
class FunctionExecutor:
    """Execute function calls from the LLM."""

    def __init__(self, registry: ToolRegistry):
        self.registry = registry
        self.execution_history: list[FunctionResult] = []

    async def execute(self, call: FunctionCall) -> FunctionResult:
        """Execute a single function call."""
        start = time.time()
        tool = self.registry.get(call.name)
        if not tool:
            result = FunctionResult(
                call_id=call.id,
                name=call.name,
                error=f"Unknown function: {call.name}"
            )
            self.execution_history.append(result)
            return result
        try:
            # Await coroutine functions; call sync functions directly
            if asyncio.iscoroutinefunction(tool.function):
                output = await tool.function(**call.arguments)
            else:
                output = tool.function(**call.arguments)
            result = FunctionResult(
                call_id=call.id,
                name=call.name,
                result=output,
                execution_time_ms=(time.time() - start) * 1000
            )
        except Exception as e:
            result = FunctionResult(
                call_id=call.id,
                name=call.name,
                error=f"{type(e).__name__}: {str(e)}",
                execution_time_ms=(time.time() - start) * 1000
            )
        self.execution_history.append(result)
        return result

    async def execute_batch(self, calls: list[FunctionCall]) -> list[FunctionResult]:
        """Execute multiple function calls concurrently."""
        tasks = [self.execute(call) for call in calls]
        return await asyncio.gather(*tasks)
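Putting the pieces together, here is a small usage sketch (reusing the registry from the previous section): it parses a raw OpenAI-style tool_call dict and executes it, then formats the result for the follow-up message.

# Usage sketch: parse an OpenAI-format tool call and execute it.
raw_tool_call = {
    "id": "call_abc123",
    "function": {"name": "get_weather",
                 "arguments": '{"location": "Tokyo", "units": "celsius"}'}
}

async def run():
    executor = FunctionExecutor(registry)
    result = await executor.execute(FunctionCall.from_openai(raw_tool_call))
    return result.to_openai_message()

print(asyncio.run(run()))
# -> {"role": "tool", "tool_call_id": "call_abc123", "content": '{"location": ...}'}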
class SafeFunctionExecutor(FunctionExecutor):
    """Executor with safety checks."""

    def __init__(
        self,
        registry: ToolRegistry,
        allowed_functions: list[str] = None,
        max_execution_time: float = 30.0,
        require_confirmation: list[str] = None
    ):
        super().__init__(registry)
        self.allowed_functions = set(allowed_functions) if allowed_functions else None
        self.max_execution_time = max_execution_time
        self.require_confirmation = set(require_confirmation or [])
        self.pending_confirmations: dict[str, FunctionCall] = {}

    async def execute(self, call: FunctionCall) -> FunctionResult:
        """Execute with safety checks."""
        # Enforce the allowlist if one was configured
        if self.allowed_functions and call.name not in self.allowed_functions:
            return FunctionResult(
                call_id=call.id,
                name=call.name,
                error=f"Function not allowed: {call.name}"
            )
        # Park calls that need explicit confirmation
        if call.name in self.require_confirmation:
            self.pending_confirmations[call.id] = call
            return FunctionResult(
                call_id=call.id,
                name=call.name,
                error="Confirmation required. Call confirm_execution() to proceed."
            )
        # Execute with a timeout
        try:
            return await asyncio.wait_for(
                super().execute(call),
                timeout=self.max_execution_time
            )
        except asyncio.TimeoutError:
            return FunctionResult(
                call_id=call.id,
                name=call.name,
                error=f"Execution timed out after {self.max_execution_time}s"
            )

    async def confirm_execution(self, call_id: str) -> FunctionResult:
        """Confirm and execute a pending call."""
        if call_id not in self.pending_confirmations:
            return FunctionResult(
                call_id=call_id,
                name="unknown",
                error="No pending confirmation found"
            )
        call = self.pending_confirmations.pop(call_id)
        # Temporarily lift the confirmation requirement for this one call
        self.require_confirmation.discard(call.name)
        result = await self.execute(call)
        self.require_confirmation.add(call.name)
        return result
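The confirmation flow looks like this in practice, a minimal sketch assuming the registry and example tools from earlier: the first execute() parks the call, and confirm_execution() releases it.

# Usage sketch: gate send_email behind an explicit confirmation step.
safe = SafeFunctionExecutor(registry, require_confirmation=["send_email"])

async def demo():
    call = FunctionCall(id="c1", name="send_email",
                        arguments={"to": "a@b.com", "subject": "Hi", "body": "Hello"})
    parked = await safe.execute(call)          # returns "Confirmation required" error
    done = await safe.confirm_execution("c1")  # human approved: executes for real
    return parked, done

parked, done = asyncio.run(demo())
print(parked.error, "->", done.result)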
class RetryingExecutor(FunctionExecutor):
    """Executor with automatic retries and exponential backoff."""

    def __init__(
        self,
        registry: ToolRegistry,
        max_retries: int = 3,
        retry_delay: float = 1.0
    ):
        super().__init__(registry)
        self.max_retries = max_retries
        self.retry_delay = retry_delay

    async def execute(self, call: FunctionCall) -> FunctionResult:
        """Execute with retries."""
        last_error = None
        for attempt in range(self.max_retries + 1):
            result = await super().execute(call)
            if result.error is None:
                return result
            # Give up immediately on errors that won't go away on retry
            if not self._is_retryable(result.error):
                return result
            last_error = result.error
            if attempt < self.max_retries:
                # Exponential backoff: delay, 2*delay, 4*delay, ...
                await asyncio.sleep(self.retry_delay * (2 ** attempt))
        return FunctionResult(
            call_id=call.id,
            name=call.name,
            error=f"Max retries exceeded. Last error: {last_error}"
        )

    def _is_retryable(self, error: str) -> bool:
        """Check if an error is retryable. Results carry stringified
        exceptions, so match on substrings rather than exception types."""
        retryable_patterns = [
            "timeout", "connection", "rate limit",
            "temporarily unavailable", "503", "429"
        ]
        error_lower = error.lower()
        return any(p in error_lower for p in retryable_patterns)
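A quick usage sketch, again assuming the registry from the schema section:

# Usage sketch: retry flaky calls twice with a 0.5s base delay (0.5s, then 1s backoff).
retrying = RetryingExecutor(registry, max_retries=2, retry_delay=0.5)
result = asyncio.run(retrying.execute(
    FunctionCall(id="r1", name="search_web", arguments={"query": "LLM tool use"})
))
print(result.result if result.error is None else result.error)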
Multi-Turn Tool Conversations
from dataclasses import dataclass
from typing import Any, Optional
import json

@dataclass
class ConversationMessage:
    """Message in a tool conversation."""
    role: str
    content: Optional[str] = None
    tool_calls: Optional[list[FunctionCall]] = None
    tool_results: Optional[list[FunctionResult]] = None

class ToolConversation:
    """Manage multi-turn tool conversations."""

    def __init__(
        self,
        llm_client: Any,
        executor: FunctionExecutor,
        max_tool_rounds: int = 10
    ):
        self.llm = llm_client
        self.executor = executor
        self.max_tool_rounds = max_tool_rounds
        self.messages: list[dict] = []
    async def chat(
        self,
        user_message: str,
        system_prompt: str = None
    ) -> str:
        """Process a user message, running tools until the model stops calling them."""
        # Initialize with a system prompt on the first turn
        if system_prompt and not self.messages:
            self.messages.append({
                "role": "system",
                "content": system_prompt
            })
        # Add the user message
        self.messages.append({
            "role": "user",
            "content": user_message
        })
        # Tool use loop
        for round_num in range(self.max_tool_rounds):
            # Get the LLM response
            response = await self.llm.complete_with_tools(
                messages=self.messages,
                tools=self.executor.registry.to_openai_tools()
            )
            # No tool calls: the model has produced its final text answer
            if not response.tool_calls:
                self.messages.append({
                    "role": "assistant",
                    "content": response.content
                })
                return response.content
            # Record the assistant message with its tool calls
            self.messages.append({
                "role": "assistant",
                "content": response.content,
                "tool_calls": [
                    {
                        "id": tc.id,
                        "type": "function",
                        "function": {
                            "name": tc.name,
                            "arguments": json.dumps(tc.arguments)
                        }
                    }
                    for tc in response.tool_calls
                ]
            })
            # Execute the tool calls
            calls = [
                FunctionCall(id=tc.id, name=tc.name, arguments=tc.arguments)
                for tc in response.tool_calls
            ]
            results = await self.executor.execute_batch(calls)
            # Feed the results back to the model
            for result in results:
                self.messages.append(result.to_openai_message())
        return "Max tool rounds reached without final response."

    def get_history(self) -> list[dict]:
        """Get the conversation history."""
        return self.messages.copy()

    def clear(self):
        """Clear the conversation."""
        self.messages = []
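ToolConversation assumes an llm_client exposing complete_with_tools; that interface is not part of any particular SDK. Here is a minimal sketch of the expected shape, using a hypothetical StubLLM (and _ToolCall/_Response stand-ins) that answers once with a tool call and then with plain text, reusing the registry from earlier.

# Minimal sketch of the client interface ToolConversation expects.
# StubLLM, _ToolCall, and _Response are illustrative stand-ins, not a vendor SDK.
import asyncio
from dataclasses import dataclass

@dataclass
class _ToolCall:
    id: str
    name: str
    arguments: dict

@dataclass
class _Response:
    content: str
    tool_calls: list

class StubLLM:
    """Answers once with a tool call, then with plain text."""
    def __init__(self):
        self._called = False

    async def complete_with_tools(self, messages, tools):
        if not self._called:
            self._called = True
            return _Response(content="", tool_calls=[
                _ToolCall(id="t1", name="get_weather", arguments={"location": "Paris"})
            ])
        return _Response(content="It is 22 degrees in Paris.", tool_calls=[])

convo = ToolConversation(StubLLM(), FunctionExecutor(registry))
print(asyncio.run(convo.chat("What's the weather in Paris?")))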
class StreamingToolConversation(ToolConversation):
    """Tool conversation with streaming responses."""

    async def chat_stream(
        self,
        user_message: str,
        system_prompt: str = None
    ):
        """Stream the response, yielding content, tool calls, and results as events."""
        # Initialize
        if system_prompt and not self.messages:
            self.messages.append({
                "role": "system",
                "content": system_prompt
            })
        self.messages.append({
            "role": "user",
            "content": user_message
        })
        for round_num in range(self.max_tool_rounds):
            # Stream the LLM response, accumulating content and tool calls
            full_content = ""
            tool_calls = []
            async for chunk in self.llm.stream_with_tools(
                messages=self.messages,
                tools=self.executor.registry.to_openai_tools()
            ):
                if chunk.content:
                    full_content += chunk.content
                    yield {"type": "content", "content": chunk.content}
                if chunk.tool_calls:
                    tool_calls.extend(chunk.tool_calls)
            if not tool_calls:
                self.messages.append({
                    "role": "assistant",
                    "content": full_content
                })
                return
            # Announce the tool calls to the consumer
            yield {"type": "tool_calls", "calls": tool_calls}
            # Record the assistant turn, then execute the tools
            self.messages.append({
                "role": "assistant",
                "content": full_content,
                "tool_calls": [tc.to_dict() for tc in tool_calls]
            })
            calls = [
                FunctionCall(id=tc.id, name=tc.name, arguments=tc.arguments)
                for tc in tool_calls
            ]
            results = await self.executor.execute_batch(calls)
            # Yield each result and feed it back into the conversation
            for result in results:
                yield {"type": "tool_result", "result": result}
                self.messages.append(result.to_openai_message())
class ParallelToolConversation(ToolConversation):
    """Conversation that executes independent tools in parallel."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dependency_graph: dict[str, list[str]] = {}

    async def chat(
        self,
        user_message: str,
        system_prompt: str = None
    ) -> str:
        """Process with parallel tool execution."""
        if system_prompt and not self.messages:
            self.messages.append({
                "role": "system",
                "content": system_prompt
            })
        self.messages.append({
            "role": "user",
            "content": user_message
        })
        for round_num in range(self.max_tool_rounds):
            response = await self.llm.complete_with_tools(
                messages=self.messages,
                tools=self.executor.registry.to_openai_tools(),
                parallel_tool_calls=True
            )
            if not response.tool_calls:
                self.messages.append({
                    "role": "assistant",
                    "content": response.content
                })
                return response.content
            # Group calls that can safely run together
            independent_calls = self._find_independent_calls(response.tool_calls)
            # Execute each group as a parallel batch
            all_results = []
            for batch in independent_calls:
                calls = [
                    FunctionCall(id=tc.id, name=tc.name, arguments=tc.arguments)
                    for tc in batch
                ]
                results = await self.executor.execute_batch(calls)
                all_results.extend(results)
            # Record the assistant turn and the results
            self.messages.append({
                "role": "assistant",
                "content": response.content,
                "tool_calls": [tc.to_dict() for tc in response.tool_calls]
            })
            for result in all_results:
                self.messages.append(result.to_openai_message())
        return "Max rounds reached."

    def _find_independent_calls(self, tool_calls: list) -> list[list]:
        """Group tool calls by independence.
        Simple implementation: treat all calls as one independent batch.
        A more sophisticated version would analyze argument dependencies."""
        return [tool_calls]
Tool Selection and Routing
from dataclasses import dataclass
from typing import Any, Optional, Callable
from abc import ABC, abstractmethod

class ToolSelector(ABC):
    """Abstract tool selector."""

    @abstractmethod
    def select_tools(
        self,
        query: str,
        available_tools: list[Tool]
    ) -> list[Tool]:
        """Select relevant tools for a query."""
        pass
class KeywordToolSelector(ToolSelector):
    """Select tools based on keyword matching."""

    def __init__(self, tool_keywords: dict[str, list[str]] = None):
        self.tool_keywords = tool_keywords or {}

    def add_keywords(self, tool_name: str, keywords: list[str]):
        """Add keywords for a tool."""
        self.tool_keywords[tool_name] = keywords

    def select_tools(
        self,
        query: str,
        available_tools: list[Tool]
    ) -> list[Tool]:
        """Select tools matching query keywords."""
        query_lower = query.lower()
        selected = []
        for tool in available_tools:
            keywords = self.tool_keywords.get(tool.name, [])
            # Match on the tool name itself
            if tool.name.lower() in query_lower:
                selected.append(tool)
                continue
            # Match on registered keywords
            if any(kw.lower() in query_lower for kw in keywords):
                selected.append(tool)
                continue
            # Match on distinctive words from the description
            desc_words = tool.description.lower().split()
            if any(word in query_lower for word in desc_words if len(word) > 4):
                selected.append(tool)
        # Fall back to all tools if nothing matched
        return selected if selected else available_tools
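A short usage sketch, assuming the registry and example tools from the first section:

# Usage sketch: steer weather-related queries to the weather tool.
selector = KeywordToolSelector()
selector.add_keywords("get_weather", ["weather", "temperature", "forecast"])
selector.add_keywords("send_email", ["email", "mail", "message"])

tools = selector.select_tools("What's the forecast for Berlin?", registry.list_tools())
print([t.name for t in tools])  # -> ['get_weather']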
class SemanticToolSelector(ToolSelector):
    """Select tools based on semantic similarity."""

    def __init__(self, embedding_model: Any):
        self.embedder = embedding_model
        self.tool_embeddings: dict[str, Any] = {}

    def index_tools(self, tools: list[Tool]):
        """Create embeddings for tools."""
        for tool in tools:
            text = f"{tool.name}: {tool.description}"
            embedding = self.embedder.embed(text)
            self.tool_embeddings[tool.name] = embedding

    def select_tools(
        self,
        query: str,
        available_tools: list[Tool],
        top_k: int = 5,
        threshold: float = 0.5
    ) -> list[Tool]:
        """Select semantically similar tools."""
        import numpy as np
        query_embedding = self.embedder.embed(query)
        scores = []
        for tool in available_tools:
            if tool.name in self.tool_embeddings:
                # Dot product equals cosine similarity only if the embedder
                # returns unit-normalized vectors; normalize otherwise
                similarity = np.dot(
                    query_embedding,
                    self.tool_embeddings[tool.name]
                )
                scores.append((tool, similarity))
        # Sort by similarity
        scores.sort(key=lambda x: x[1], reverse=True)
        # Keep the top_k tools above the threshold
        selected = [
            tool for tool, score in scores[:top_k]
            if score >= threshold
        ]
        return selected if selected else available_tools[:top_k]
class LLMToolSelector(ToolSelector):
    """Use an LLM to select appropriate tools."""

    def __init__(self, llm_client: Any):
        self.llm = llm_client

    async def select_tools(
        self,
        query: str,
        available_tools: list[Tool]
    ) -> list[Tool]:
        """Ask the LLM which tools are relevant."""
        tool_descriptions = "\n".join(
            f"- {t.name}: {t.description}"
            for t in available_tools
        )
        prompt = f"""Given this user query, select which tools would be helpful:

Query: {query}

Available tools:
{tool_descriptions}

List the names of relevant tools (comma-separated), or "none" if no tools are needed:"""
        response = await self.llm.complete(prompt)
        selected_names = [
            name.strip().lower()
            for name in response.content.split(",")
        ]
        if "none" in selected_names:
            return []
        return [
            tool for tool in available_tools
            if tool.name.lower() in selected_names
        ]
class ToolRouter:
    """Route queries to appropriate tool sets."""

    def __init__(self):
        self.routes: list[tuple[Callable, list[Tool]]] = []
        self.default_tools: list[Tool] = []

    def add_route(self, condition: Callable, tools: list[Tool]):
        """Add a routing rule."""
        self.routes.append((condition, tools))

    def set_default(self, tools: list[Tool]):
        """Set the default tool set."""
        self.default_tools = tools

    def route(self, query: str) -> list[Tool]:
        """Route a query to the first matching tool set."""
        for condition, tools in self.routes:
            if condition(query):
                return tools
        return self.default_tools
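Routes are just predicates over the query, so lambdas work well. A sketch, again assuming the registry from earlier:

# Usage sketch: route by simple predicates, falling back to all tools.
router = ToolRouter()
router.add_route(lambda q: "weather" in q.lower(), [registry.get("get_weather")])
router.add_route(lambda q: "email" in q.lower(), [registry.get("send_email")])
router.set_default(registry.list_tools())

print([t.name for t in router.route("Email Bob the weekly report")])  # -> ['send_email']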
class DynamicToolLoader:
    """Dynamically load tools based on context."""

    def __init__(self, registry: ToolRegistry):
        self.registry = registry
        self.loaded_tools: set[str] = set()
        self.tool_loaders: dict[str, Callable] = {}

    def register_loader(self, tool_name: str, loader: Callable):
        """Register a lazy loader for a tool."""
        self.tool_loaders[tool_name] = loader

    async def ensure_loaded(self, tool_names: list[str]):
        """Ensure the named tools are loaded and registered."""
        for name in tool_names:
            if name not in self.loaded_tools and name in self.tool_loaders:
                tool = await self.tool_loaders[name]()
                self.registry.register(tool)
                self.loaded_tools.add(name)

    def get_available_tools(self) -> list[str]:
        """Get all tool names, registered or lazily loadable."""
        registered = set(self.registry.tools.keys())
        loadable = set(self.tool_loaders.keys())
        return list(registered | loadable)
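A loader is any async callable returning a Tool. A minimal sketch, where load_search_tool is a hypothetical loader reusing the search_web tool defined earlier:

# Usage sketch: defer building an expensive tool until it is first requested.
import asyncio

async def load_search_tool() -> Tool:
    # A real loader might fetch credentials or construct an API client here
    return search_web  # already a Tool, courtesy of the @tool decorator

loader = DynamicToolLoader(registry)
loader.register_loader("search_web", load_search_tool)
asyncio.run(loader.ensure_loaded(["search_web"]))
print(loader.get_available_tools())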
Error Handling and Validation
from dataclasses import dataclass
from typing import Any, Optional, Callable
import json
from jsonschema import validate, ValidationError

class ToolValidator:
    """Validate tool calls before execution."""

    def __init__(self, registry: ToolRegistry):
        self.registry = registry

    def validate_call(self, call: FunctionCall) -> tuple[bool, str]:
        """Validate a function call. Returns (is_valid, error_message)."""
        tool = self.registry.get(call.name)
        if not tool:
            return False, f"Unknown tool: {call.name}"
        # Build a JSON Schema and validate the arguments against it
        schema = self._build_schema(tool)
        try:
            validate(instance=call.arguments, schema=schema)
            return True, ""
        except ValidationError as e:
            return False, f"Validation error: {e.message}"

    def _build_schema(self, tool: Tool) -> dict:
        """Build a JSON Schema from the tool definition."""
        properties = {}
        required = []
        for param in tool.parameters:
            prop = {"type": param.type.value}
            if param.enum:
                prop["enum"] = param.enum
            properties[param.name] = prop
            if param.required:
                required.append(param.name)
        return {
            "type": "object",
            "properties": properties,
            "required": required,
            "additionalProperties": False
        }
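A quick sketch of the validator rejecting a badly typed call, assuming the registry and get_weather tool from the first section:

# Usage sketch: reject a call with a wrongly typed argument before executing it.
bad_call = FunctionCall(id="v1", name="get_weather",
                        arguments={"location": "Paris", "units": 42})
valid, error = ToolValidator(registry).validate_call(bad_call)
print(valid, error)  # -> False  Validation error: 42 is not of type 'string'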
class ToolErrorHandler:
    """Handle tool execution errors."""

    def __init__(self, llm_client: Any = None):
        self.llm = llm_client
        self.error_handlers: dict[str, Callable] = {}

    def register_handler(self, error_type: str, handler: Callable):
        """Register an error handler for a matching error substring."""
        self.error_handlers[error_type] = handler

    async def handle_error(
        self,
        call: FunctionCall,
        error: str,
        context: dict = None
    ) -> FunctionResult:
        """Handle a tool error."""
        # Check for a specific handler first
        for error_type, handler in self.error_handlers.items():
            if error_type.lower() in error.lower():
                return await handler(call, error, context)
        # Default: ask the LLM to suggest a recovery
        if self.llm:
            return await self._llm_recovery(call, error, context)
        return FunctionResult(
            call_id=call.id,
            name=call.name,
            error=error
        )

    async def _llm_recovery(
        self,
        call: FunctionCall,
        error: str,
        context: dict
    ) -> FunctionResult:
        """Use the LLM to suggest a recovery."""
        prompt = f"""A tool call failed:

Tool: {call.name}
Arguments: {json.dumps(call.arguments)}
Error: {error}

Suggest how to fix this call or an alternative approach:"""
        response = await self.llm.complete(prompt)
        return FunctionResult(
            call_id=call.id,
            name=call.name,
            error=f"Original error: {error}\nSuggested fix: {response.content}"
        )
class ToolCallSanitizer:
    """Sanitize tool call arguments."""

    def __init__(self):
        self.sanitizers: dict[str, Callable] = {}

    def register_sanitizer(self, param_name: str, sanitizer: Callable):
        """Register a parameter sanitizer."""
        self.sanitizers[param_name] = sanitizer

    def sanitize(self, call: FunctionCall) -> FunctionCall:
        """Sanitize call arguments, returning a new FunctionCall."""
        sanitized_args = {}
        for key, value in call.arguments.items():
            if key in self.sanitizers:
                sanitized_args[key] = self.sanitizers[key](value)
            else:
                sanitized_args[key] = self._default_sanitize(value)
        return FunctionCall(
            id=call.id,
            name=call.name,
            arguments=sanitized_args
        )

    def _default_sanitize(self, value: Any) -> Any:
        """Default sanitization for string values."""
        if isinstance(value, str):
            # Strip a few common injection vectors; extend this list for your domain
            dangerous_patterns = ["<script>", "</script>", "\x00"]
            for pattern in dangerous_patterns:
                value = value.replace(pattern, "")
            return value.strip()
        return value
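Per-parameter sanitizers compose with the default. A sketch, reusing the send_email tool from earlier:

# Usage sketch: strip whitespace and lowercase email addresses before sending.
sanitizer = ToolCallSanitizer()
sanitizer.register_sanitizer("to", lambda v: v.strip().lower())

clean = sanitizer.sanitize(FunctionCall(
    id="s1", name="send_email",
    arguments={"to": "  Alice@Example.COM ", "subject": "Hi", "body": "Hello"}
))
print(clean.arguments["to"])  # -> alice@example.com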
Production Tool Service
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Optional, Any
import asyncio
import json

app = FastAPI()

class ToolCallRequest(BaseModel):
    tool_name: str
    arguments: dict

class ChatRequest(BaseModel):
    message: str
    session_id: Optional[str] = None
    system_prompt: Optional[str] = None

class ToolDefinition(BaseModel):
    name: str
    description: str
    parameters: list[dict]

# Initialize components
registry = ToolRegistry()
# Register example tools
@tool(name="calculator", description="Perform mathematical calculations")
def calculator(expression: str) -> dict:
    """Evaluate a math expression."""
    try:
        # Note: eval with empty builtins is still not a safe sandbox;
        # use a real expression parser or math library in production
        result = eval(expression, {"__builtins__": {}}, {})
        return {"result": result}
    except Exception as e:
        return {"error": str(e)}

@tool(name="current_time", description="Get the current date and time")
def current_time(timezone: str = "UTC") -> dict:
    """Get the current time."""
    from datetime import datetime
    return {"time": datetime.now().isoformat(), "timezone": timezone}

registry.register(calculator)
registry.register(current_time)

executor = SafeFunctionExecutor(registry)
validator = ToolValidator(registry)

# Per-session conversation state
sessions: dict[str, list[dict]] = {}
@app.post("/v1/tools/call")
async def call_tool(request: ToolCallRequest) -> dict:
    """Execute a single tool call."""
    call = FunctionCall(
        id="direct_call",
        name=request.tool_name,
        arguments=request.arguments
    )
    # Validate before executing
    valid, error = validator.validate_call(call)
    if not valid:
        raise HTTPException(status_code=400, detail=error)
    # Execute
    result = await executor.execute(call)
    if result.error:
        return {"success": False, "error": result.error}
    return {"success": True, "result": result.result}

@app.get("/v1/tools")
async def list_tools() -> list[dict]:
    """List available tools."""
    return [
        {
            "name": tool.name,
            "description": tool.description,
            "parameters": [
                {
                    "name": p.name,
                    "type": p.type.value,
                    "description": p.description,
                    "required": p.required
                }
                for p in tool.parameters
            ]
        }
        for tool in registry.list_tools()
    ]

@app.get("/v1/tools/{tool_name}/schema")
async def get_tool_schema(tool_name: str) -> dict:
    """Get a tool's schema."""
    tool = registry.get(tool_name)
    if not tool:
        raise HTTPException(status_code=404, detail="Tool not found")
    return tool.to_openai_schema()

@app.post("/v1/tools/register")
async def register_tool(definition: ToolDefinition) -> dict:
    """Register a new tool (for dynamic tools)."""
    # A full implementation would create a Tool that proxies an external API;
    # simplified for the example
    return {
        "status": "registered",
        "tool_name": definition.name
    }

@app.get("/v1/tools/history")
async def get_execution_history(limit: int = 100) -> list[dict]:
    """Get recent tool execution history."""
    history = executor.execution_history[-limit:]
    return [
        {
            "call_id": r.call_id,
            "name": r.name,
            "success": r.error is None,
            "execution_time_ms": r.execution_time_ms
        }
        for r in history
    ]

@app.get("/health")
async def health():
    return {"status": "healthy", "tools_count": len(registry.tools)}
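With the service running (for example via uvicorn), any HTTP client can invoke tools. A short client sketch using requests, assuming the service listens on localhost:8000:

# Client sketch: call the service's tool endpoint.
import requests

resp = requests.post(
    "http://localhost:8000/v1/tools/call",
    json={"tool_name": "calculator", "arguments": {"expression": "2 + 2 * 10"}},
)
print(resp.json())  # -> {"success": true, "result": {"result": 22}}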
References
- OpenAI Function Calling: https://platform.openai.com/docs/guides/function-calling
- Anthropic Tool Use: https://docs.anthropic.com/claude/docs/tool-use
- LangChain Tools: https://python.langchain.com/docs/modules/tools/
- JSON Schema: https://json-schema.org/
Conclusion
Function calling is what transforms LLMs from conversational interfaces into capable agents. Start with clear, well-documented tool schemas—the model can only use tools it understands. Keep tool descriptions concise but specific; vague descriptions lead to incorrect tool selection. Implement robust validation before execution; never trust that the model will always produce valid arguments.

Handle errors gracefully and provide informative error messages that help the model self-correct. Use rate limiting and timeouts to prevent runaway tool calls. For multi-step tasks, implement conversation loops that continue until the model decides no more tools are needed. Consider tool selection strategies when you have many tools—semantic matching or LLM-based selection can reduce context size and improve accuracy. In production, log every tool call for debugging and audit purposes.

The most important insight is that tool use is a conversation: the model proposes actions, you execute them and return results, and the model decides what to do next. Design your tools to be composable, your errors to be informative, and your execution to be reliable.