Model Context Protocol (MCP): Building AI-Tool Integrations That Scale

Introduction

The Model Context Protocol (MCP) is an open standard developed by Anthropic that enables AI assistants to securely connect with external data sources and tools. Think of MCP as a universal adapter that lets AI models interact with your files, databases, APIs, and services through a standardized interface. Instead of building custom integrations for each AI application, MCP provides a consistent protocol that any AI host can use to communicate with any MCP server. This guide covers the fundamentals of MCP, how to build custom servers, and practical integration patterns.

[Figure: Model Context Protocol architecture, connecting AI to external systems]

Core Concepts

MCP defines three primary primitives that servers can expose to AI clients. Tools are executable functions that perform actions—reading files, querying databases, sending messages. Resources provide read-only access to data like file contents, database records, or API responses. Prompts are reusable templates that guide AI behavior for specific tasks. The protocol uses JSON-RPC 2.0 over stdio or HTTP for communication, making it language-agnostic and easy to implement.
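
Under the hood, every interaction is a JSON-RPC 2.0 message. As an illustrative example (the tool name and arguments are invented here, mirroring the custom server built later in this guide), a client invokes a tool with a tools/call request and the server replies with a result whose content is a list of typed items:

Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_documents",
    "arguments": { "query": "quarterly report", "limit": 5 }
  }
}

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "2 documents matched 'quarterly report'" }
    ]
  }
}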

Setting Up MCP with Claude Desktop

# Install Claude Desktop (macOS/Windows)
# Download from: https://claude.ai/download

# Configure MCP servers in Claude Desktop config
# macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
# Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/Documents",
        "/Users/yourname/Projects"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "sqlite": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sqlite",
        "--db-path",
        "/path/to/your/database.db"
      ]
    }
  }
}

# Restart Claude Desktop after configuration changes
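
Before adding a server to this config, it can be useful to exercise it on its own. The MCP Inspector provides a local UI for listing and calling a server's tools; a typical invocation (the directory argument here is just an example) looks like this:

# Test a server interactively with the MCP Inspector
npx @modelcontextprotocol/inspector npx -y @modelcontextprotocol/server-filesystem ~/Documents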

Building a Custom MCP Server (Python)

# Install the MCP Python SDK
# pip install mcp

from mcp.server.fastmcp import FastMCP
import json

# Create server instance (FastMCP provides the decorator-based API used below)
server = FastMCP("my-custom-server")

# Define a tool
@server.tool()
async def search_documents(query: str, limit: int = 10) -> str:
    """Search through documents and return matching results."""
    # Your search implementation here
    results = [
        {"title": "Document 1", "snippet": f"Contains: {query}"},
        {"title": "Document 2", "snippet": f"Related to: {query}"}
    ]
    return json.dumps(results[:limit], indent=2)

@server.tool()
async def create_note(title: str, content: str) -> str:
    """Create a new note with the given title and content."""
    # Save note to your storage
    note_id = f"note_{hash(title) % 10000}"
    # In production, save to database/filesystem
    return json.dumps({
        "success": True,
        "note_id": note_id,
        "message": f"Note '{title}' created successfully"
    })

@server.tool()
async def query_database(sql: str) -> str:
    """Execute a read-only SQL query against the database."""
    import sqlite3
    
    # Safety: Only allow SELECT queries
    if not sql.strip().upper().startswith("SELECT"):
        return json.dumps({"error": "Only SELECT queries are allowed"})
    
    conn = sqlite3.connect("my_database.db")
    cursor = conn.cursor()
    
    try:
        cursor.execute(sql)
        columns = [desc[0] for desc in cursor.description]
        rows = cursor.fetchall()
        results = [dict(zip(columns, row)) for row in rows]
        return json.dumps(results, indent=2)
    except Exception as e:
        return json.dumps({"error": str(e)})
    finally:
        conn.close()

# Define resources
@server.resource("notes://list")
async def list_notes() -> str:
    """List all available notes."""
    notes = [
        {"id": "note_1", "title": "Meeting Notes"},
        {"id": "note_2", "title": "Project Ideas"}
    ]
    return json.dumps(notes)

@server.resource("notes://{note_id}")
async def get_note(note_id: str) -> str:
    """Get a specific note by ID."""
    # Fetch from your storage
    return json.dumps({
        "id": note_id,
        "title": "Sample Note",
        "content": "This is the note content..."
    })

# Define prompts
@server.prompt()
async def code_review_prompt(language: str = "python") -> str:
    """Generate a code review prompt template."""
    return f"""Please review the following {language} code for:
1. Code quality and readability
2. Potential bugs or edge cases
3. Performance considerations
4. Security vulnerabilities
5. Suggestions for improvement

Code to review:
{{code}}

Provide specific, actionable feedback."""

# Run the server over stdio (FastMCP's default transport)
if __name__ == "__main__":
    server.run()
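
To make this server available to Claude Desktop, register it in claude_desktop_config.json alongside the servers configured earlier. The script path below is a placeholder; point it at wherever you saved the file, using a Python interpreter that has the mcp package installed:

{
  "mcpServers": {
    "my-custom-server": {
      "command": "python",
      "args": ["/absolute/path/to/my_server.py"]
    }
  }
}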

Building an MCP Server (TypeScript)

// npm install @modelcontextprotocol/sdk

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-typescript-server", version: "1.0.0" },
  { capabilities: { tools: {}, resources: {} } }
);

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "fetch_weather",
      description: "Get current weather for a location",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
          units: { type: "string", enum: ["celsius", "fahrenheit"] }
        },
        required: ["location"]
      }
    },
    {
      name: "send_notification",
      description: "Send a notification to a user",
      inputSchema: {
        type: "object",
        properties: {
          user_id: { type: "string" },
          message: { type: "string" },
          priority: { type: "string", enum: ["low", "medium", "high"] }
        },
        required: ["user_id", "message"]
      }
    }
  ]
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case "fetch_weather": {
      const { location, units = "celsius" } = args as any;
      // In production, call a real weather API
      const weather = {
        location,
        temperature: units === "celsius" ? 22 : 72,
        units,
        condition: "Partly cloudy",
        humidity: 65
      };
      return { content: [{ type: "text", text: JSON.stringify(weather, null, 2) }] };
    }

    case "send_notification": {
      const { user_id, message, priority = "medium" } = args as any;
      // In production, send actual notification
      return {
        content: [{
          type: "text",
          text: JSON.stringify({
            success: true,
            notification_id: `notif_${Date.now()}`,
            delivered_to: user_id
          })
        }]
      };
    }

    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});

// List resources
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "config://app",
      name: "Application Configuration",
      mimeType: "application/json"
    }
  ]
}));

// Read resources
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const { uri } = request.params;

  if (uri === "config://app") {
    return {
      contents: [{
        uri,
        mimeType: "application/json",
        text: JSON.stringify({
          app_name: "My Application",
          version: "2.0.0",
          features: ["auth", "notifications", "analytics"]
        }, null, 2)
      }]
    };
  }

  throw new Error(`Resource not found: ${uri}`);
});

// Start server
const transport = new StdioServerTransport();
server.connect(transport).catch(console.error);
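
The ESM-style imports above (note the explicit .js extensions) assume the project is built as an ES module. A minimal package.json along these lines works as a starting point; the version ranges are indicative rather than pinned recommendations:

{
  "name": "my-typescript-server",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node build/index.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0"
  },
  "devDependencies": {
    "typescript": "^5.0.0"
  }
}

After building, register the server in claude_desktop_config.json the same way as the Python example, with "command": "node" and the path to the compiled entry point as the argument.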

Available MCP Servers

Server | Purpose | Key Features
@modelcontextprotocol/server-filesystem | File system access | Read/write files, directory listing
@modelcontextprotocol/server-github | GitHub integration | Repos, issues, PRs, code search
@modelcontextprotocol/server-sqlite | SQLite databases | Query execution, schema inspection
@modelcontextprotocol/server-postgres | PostgreSQL | Read-only queries, schema inspection
@modelcontextprotocol/server-slack | Slack integration | Messages, channels, users
@modelcontextprotocol/server-puppeteer | Web automation | Screenshots, navigation, scraping
@modelcontextprotocol/server-brave-search | Web search | Search queries, results
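
All of these are configured the same way as the servers in the earlier claude_desktop_config.json example. For instance, the PostgreSQL server is typically pointed at a database by passing a connection string as its argument (treat the exact argument format as an assumption and check the server's README for your version):

    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }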

Security Best Practices

# Security considerations for MCP servers

from mcp.server.fastmcp import FastMCP
import json
import os
import re

server = FastMCP("secure-server")

# 1. Validate and sanitize all inputs
def sanitize_path(path: str, allowed_dirs: list[str]) -> str:
    """Ensure path is within allowed directories."""
    abs_path = os.path.abspath(os.path.expanduser(path))
    
    for allowed in allowed_dirs:
        allowed_abs = os.path.abspath(allowed)
        # Compare whole path components so that e.g. "~/DocumentsEvil"
        # does not pass a prefix check against "~/Documents"
        if abs_path == allowed_abs or abs_path.startswith(allowed_abs + os.sep):
            return abs_path
    
    raise ValueError(f"Path {path} is outside allowed directories")

# 2. Limit tool capabilities
@server.tool()
async def read_file(path: str) -> str:
    """Read a file from allowed directories only."""
    ALLOWED_DIRS = [
        os.path.expanduser("~/Documents"),
        os.path.expanduser("~/Projects")
    ]
    
    safe_path = sanitize_path(path, ALLOWED_DIRS)
    
    # Check file size to prevent memory issues
    if os.path.getsize(safe_path) > 10 * 1024 * 1024:  # 10MB limit
        return "Error: File too large"
    
    with open(safe_path, 'r') as f:
        return f.read()

# 3. Use read-only database connections where possible
@server.tool()
async def query_db(sql: str) -> str:
    """Execute read-only queries."""
    import sqlite3
    
    # Whitelist allowed operations
    ALLOWED_PATTERNS = [
        r"^\s*SELECT\s+",
        r"^\s*EXPLAIN\s+",
        r"^\s*PRAGMA\s+table_info"
    ]
    
    # Match case-insensitively against the original query string
    if not any(re.match(p, sql, re.IGNORECASE) for p in ALLOWED_PATTERNS):
        return "Error: Only SELECT, EXPLAIN, and PRAGMA queries allowed"
    
    # Use a read-only connection so even a missed pattern cannot modify data
    conn = sqlite3.connect("file:mydb.db?mode=ro", uri=True)
    try:
        cursor = conn.execute(sql)
        columns = [desc[0] for desc in cursor.description]
        rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
        return json.dumps(rows, indent=2)
    finally:
        conn.close()

# 4. Rate limiting
from collections import defaultdict
from time import time

class RateLimiter:
    def __init__(self, max_calls: int, window_seconds: int):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = defaultdict(list)
    
    def check(self, key: str) -> bool:
        now = time()
        self.calls[key] = [t for t in self.calls[key] if now - t < self.window]
        
        if len(self.calls[key]) >= self.max_calls:
            return False
        
        self.calls[key].append(now)
        return True

rate_limiter = RateLimiter(max_calls=100, window_seconds=60)

@server.tool()
async def expensive_operation(data: str) -> str:
    """Rate-limited operation."""
    if not rate_limiter.check("expensive_operation"):
        return "Error: Rate limit exceeded. Try again later."
    
    # Perform operation
    return "Success"

Conclusion

The Model Context Protocol represents a significant step toward standardized AI-tool integration. By providing a common interface for tools, resources, and prompts, MCP eliminates the need for custom integrations between every AI application and external service. Whether you’re connecting Claude to your filesystem, building a custom database interface, or creating specialized tools for your workflow, MCP provides the foundation. Start with the official servers for common use cases, then build custom servers when you need specialized functionality. The protocol’s security model—with explicit capability declarations and user consent—ensures that AI assistants can be powerful without being dangerous.

