Technical Guide

MCP (Model Context Protocol) Explained

Everything you need to know about the protocol that connects AI models to external tools and data.

January 14, 2026 · 12 min read · MCP, Technical, AI Agents

Model Context Protocol (MCP) is an open standard that enables AI models to connect with external tools, data sources, and services. Think of it as USB-C for AI: a universal connector that lets any AI model work with any compatible tool.

Originally developed by Anthropic for Claude, MCP has become the de facto standard for building AI agents that need to interact with the real world. This guide explains what MCP is, how it works, and how you can use it to build powerful AI applications.

1. What is MCP?

Model Context Protocol (MCP) is an open protocol that standardizes how AI models interact with external systems. It defines a common interface for:

  • Tools – Functions the AI can call (e.g., search a database, send an email)
  • Resources – Data the AI can read (e.g., files, API responses, system state)
  • Prompts – Pre-built templates for common tasks

Before MCP, every AI integration required custom code. If you wanted Claude to query your database and GPT to use the same integration, you'd write two different implementations. MCP eliminates this duplication by providing a single standard.

The USB-C Analogy

Just as USB-C lets any device connect to any peripheral through a standard port, MCP lets any AI model connect to any tool through a standard protocol. Write once, use everywhere.

2. Why MCP Matters

For Developers

  • Write once, deploy everywhere – One integration works with Claude, GPT, Llama, and any MCP-compatible model
  • Standardized interfaces – No more reverse-engineering different tool-calling formats
  • Growing ecosystem – Thousands of pre-built MCP servers available

For Organizations

  • Vendor flexibility – Switch between AI providers without rewriting integrations
  • Consistent security model – One protocol to audit and secure
  • Future-proof architecture – New models automatically work with existing tools

For the AI Ecosystem

  • Interoperability – Different AI systems can share capabilities
  • Innovation – Tool builders focus on functionality, not compatibility
  • Composability – Combine multiple MCP servers for complex workflows

3. Core Concepts

  • MCP Servers – Programs that expose tools, resources, and prompts to AI models through a standardized interface.
  • Tools – Functions that AI models can call to perform actions like querying databases or calling APIs.
  • Resources – Data sources that AI models can read, like files, databases, or live system state.
  • Prompts – Pre-defined prompt templates that servers can expose for common use cases.
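
On the wire, each of these concepts is exposed through a small set of JSON-RPC methods. As a rough orientation, using the method names from the MCP specification (the exact set can vary between protocol versions):

// Rough mapping of MCP concepts to the JSON-RPC methods that expose them.
// Method names follow the MCP specification; newer protocol revisions may
// add or adjust methods, so treat this as orientation rather than a full list.
const mcpMethods = {
  tools: ["tools/list", "tools/call"],
  resources: ["resources/list", "resources/read"],
  prompts: ["prompts/list", "prompts/get"],
} as const;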

MCP Servers

An MCP server is a program that exposes capabilities to AI models. It can be:

  • A local process (communicating via stdio)
  • A remote service (communicating via HTTP/SSE)
  • A containerized application

MCP Clients

MCP clients are AI applications that connect to MCP servers. Examples include:

  • Claude Desktop
  • Claude Code (CLI)
  • Custom applications using MCP SDKs
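
As a sketch of that last option, a minimal custom client built with the official TypeScript SDK might look like the following. The server command and tool name are placeholders, and the exact API can differ between SDK versions, so treat this as an outline rather than a finished implementation:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch a local MCP server as a subprocess and talk to it over stdio.
// "node my-mcp-server.js" is a placeholder command.
const transport = new StdioClientTransport({
  command: "node",
  args: ["my-mcp-server.js"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Discover the server's tools, then call one of them.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

const result = await client.callTool({
  name: "get_weather", // a tool the server is assumed to expose
  arguments: { city: "Berlin" },
});
console.log(result);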

4. How MCP Works

MCP uses a request-response model over JSON-RPC 2.0. Here's the typical flow:

1. Client connects to MCP server
2. Server advertises available tools/resources
3. AI model decides to use a tool
4. Client sends tool call request to server
5. Server executes the tool
6. Server returns result to client
7. Client provides result to AI model
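
Step 2, for example, is an ordinary JSON-RPC 2.0 request/response pair. Sketched here as TypeScript object literals (the id value is arbitrary, and the result would carry tool definitions like the one in the example below):

// The client asks which tools the server exposes (step 2)...
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// ...and the server replies with the tool definitions it advertises.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      // e.g. the query_database definition shown in the example below
    ],
  },
};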

Example: Database Query Tool

// Tool definition advertised by server
{
  "name": "query_database",
  "description": "Execute a read-only SQL query",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "SQL SELECT query to execute"
      }
    },
    "required": ["query"]
  }
}

// Tool call from client
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": {
      "query": "SELECT * FROM users WHERE active = true"
    }
  }
}

// Response from server
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "[{id: 1, name: 'Alice'}, {id: 2, name: 'Bob'}]"
      }
    ]
  }
}

5. Building MCP Servers

MCP servers can be built in any language. The official SDKs support TypeScript and Python, with community implementations for Go, Rust, and others.

TypeScript Example

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-mcp-server", version: "1.0.0" },
  { capabilities: { tools: {} } } // declare that this server exposes tools
);

// Placeholder weather lookup; replace with a real API call.
async function fetchWeather(city: string): Promise<string> {
  return `Weather lookup for ${city} is not implemented yet`;
}

// Advertise the available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_weather",
    description: "Get current weather for a city",
    inputSchema: {
      type: "object",
      properties: {
        city: { type: "string" }
      },
      required: ["city"]
    }
  }]
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_weather") {
    const city = String(request.params.arguments?.city ?? "");
    const weather = await fetchWeather(city);
    return { content: [{ type: "text", text: weather }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
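
Once this server is running over stdio, any MCP client (Claude Desktop, Claude Code, or a custom client like the sketch in section 3) can launch it as a subprocess, discover get_weather via tools/list, and call it.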

6. Security Considerations

MCP servers can perform powerful actions, making security critical:

  • Principle of least privilege – Only expose necessary capabilities
  • Input validation – Validate all parameters before execution (see the sketch after this list)
  • Authentication – Require credentials for sensitive operations
  • Rate limiting – Prevent abuse through request throttling
  • Audit logging – Record all tool calls for review
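
As an input-validation example, the query_database tool from section 4 claims to be read-only, so the server should enforce that before anything reaches the database. A minimal sketch, using a hypothetical assertReadOnlyQuery helper (a real server should also rely on a read-only database role rather than string checks alone):

// Hypothetical guard for the query_database tool: reject anything that is
// not a single SELECT statement before it reaches the database.
function assertReadOnlyQuery(query: string): void {
  const normalized = query.trim().toLowerCase();

  // Only allow statements that start with SELECT.
  if (!normalized.startsWith("select")) {
    throw new Error("Only SELECT queries are allowed");
  }

  // Conservatively reject write/DDL keywords and statement separators.
  const forbidden = ["insert", "update", "delete", "drop", "alter", "truncate", ";"];
  if (forbidden.some((keyword) => normalized.includes(keyword))) {
    throw new Error("Query contains a forbidden keyword or statement separator");
  }
}

// Inside the tools/call handler, before executing the query:
// assertReadOnlyQuery(String(request.params.arguments?.query ?? ""));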

Security Warning

MCP tools can execute arbitrary actions. Never deploy MCP servers in production without proper governance, input validation, and access controls. Consider using a control plane like Cordum to enforce policies before tools execute.

7. MCP and Governance

Raw MCP gives AI models direct access to tools: powerful, but dangerous for production use. Adding a governance layer between the AI and MCP servers enables:

  • Policy enforcement – Block or constrain dangerous tool calls
  • Approval workflows – Require human sign-off for sensitive operations
  • Audit trails – Record what tools were called and why
  • Rate limiting – Prevent runaway tool usage

Architecture with Governance

AI Model → Control Plane → MCP Servers → External Systems
                ↓
           Policy Check
           Approval Gate
           Audit Log
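
In code, this pattern is an interception layer that evaluates every tool call before forwarding it to the MCP server. A minimal sketch, with hypothetical checkPolicy, requestApproval, and auditLog helpers standing in for a real policy engine, approval workflow, and audit store (a generic illustration, not any particular product's API):

type ToolCall = { name: string; arguments: Record<string, unknown> };
type Decision = "allow" | "deny" | "needs_approval";

// Stub policy: deny tool calls that look destructive. A real control plane
// would evaluate declarative policies instead of hard-coded rules.
async function checkPolicy(call: ToolCall): Promise<Decision> {
  if (call.name === "query_database") {
    const query = String(call.arguments.query ?? "").toLowerCase();
    if (query.includes("drop") || query.includes("delete")) return "deny";
  }
  return "allow";
}

// Stand-in for a human approval workflow (e.g., a ticket or chat prompt).
async function requestApproval(call: ToolCall): Promise<boolean> {
  console.log(`Approval required for ${call.name}`);
  return false;
}

// Stand-in for a durable audit store.
async function auditLog(entry: unknown): Promise<void> {
  console.log("audit:", JSON.stringify(entry));
}

// Evaluate the policy, gate on approval, then forward to the real MCP client.
async function governedCallTool(
  call: ToolCall,
  forward: (call: ToolCall) => Promise<unknown>,
): Promise<unknown> {
  const decision = await checkPolicy(call);
  await auditLog({ call, decision });

  if (decision === "deny") {
    throw new Error(`Policy denied tool call: ${call.name}`);
  }
  if (decision === "needs_approval" && !(await requestApproval(call))) {
    throw new Error(`Approval not granted for tool call: ${call.name}`);
  }
  return forward(call);
}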

This is exactly what Cordum provides: a governance layer that's MCP-native. Every tool call is evaluated against your policies before execution.

Getting Started with MCP

Ready to build with MCP? Here are your next steps:

  1. Explore existing servers – Browse the MCP server directory
  2. Build your own – Use the official SDKs to create custom integrations
  3. Add governance – Deploy Cordum for production-grade policy enforcement