MCP (Model Context Protocol): The Developer Guide That Actually Explains It

I keep seeing the same pattern. Developers hear about MCP, read one paragraph that calls it “the USB-C port for AI,” nod along, and then have no idea what to actually do with that information.

Which is a problem because the Model Context Protocol is not some theoretical standard that might matter someday. It has over 97 million monthly SDK downloads. Anthropic, OpenAI, Google, Microsoft, and Amazon have all adopted it. The Linux Foundation created an entire foundation around it. And the developers who understand it are building AI workflows that make everyone else’s look like toys.

So let me do what most MCP articles fail to do. Let me actually explain it in a way that a working developer can use.


What MCP Actually Is (Without the Buzzwords)

Here is the core problem MCP solves.

You have an AI model. It is smart. It can reason, write code, analyze data. But it lives in a box. It cannot read your files, query your database, check your GitHub issues, or send a Slack message. Not without you copy-pasting context into the chat window like it is 2023.

Before MCP, every tool that wanted to connect an AI model to external services had to build custom integrations. Claude needed its own GitHub connector. ChatGPT needed its own. Cursor needed its own. Every AI app times every external service equals an absurd number of one-off integrations. The classic N times M problem.

MCP standardizes that connection. You build one MCP server for GitHub, and it works with Claude, ChatGPT, Cursor, VS Code Copilot, and any other AI tool that speaks MCP. One server, every client.

The architecture is borrowed from something most developers already know: the Language Server Protocol. LSP standardized how editors talk to language tools. You build one TypeScript language server and it works in VS Code, Neovim, Sublime, and everywhere else. MCP does the same thing but for AI-to-tool communication.


The Architecture in Plain English

MCP uses a client-server model with JSON-RPC 2.0 as the message format. There are three roles.

The host is the application you interact with. Claude Desktop, Cursor, VS Code, your custom AI app. The host creates and manages MCP clients.

The client lives inside the host. Each client maintains a one-to-one connection with a specific MCP server. It handles the handshake, protocol negotiation, and message routing. If your host connects to three MCP servers (GitHub, Slack, PostgreSQL), it runs three separate clients.

The server is the bridge between the AI and the external world. It exposes capabilities that the AI can discover and use. This is what you build or install when you want the AI to interact with a new service.

What Servers Expose: Three Primitives

Every MCP server can expose three types of capabilities.

Tools are executable actions. Think of them as functions the AI can call. create_github_issue, query_database, send_slack_message. The AI model decides when to call them based on your conversation. Each tool has a defined input schema and output format.

Resources are read-only data. Files, database schemas, configuration documents, log entries. The AI can fetch and read them but cannot modify them through this primitive. Resources give the model context without giving it a write operation.

Prompts are reusable templates. Pre-built instructions that help structure how the AI interacts with specific tools. Think of them as expert-crafted starting points for common workflows.

The distinction matters. Tools are actions (the model calls them). Resources are data (the application fetches them). Prompts are templates (the user selects them).

How Messages Travel: Transport Layer

MCP supports two transport methods.

stdio is for local integrations. The client launches the MCP server as a subprocess on your machine and communicates through standard input and output. This is what happens when you run a local MCP server in Claude Code or Cursor. Fast, simple, no network overhead.

Streamable HTTP is for remote integrations. The MCP server runs as a web service and the client connects over HTTP. This replaced the older SSE (Server-Sent Events) transport that was deprecated in mid-2025. Streamable HTTP is what enables cloud-hosted MCP servers that teams share.

For most individual developers, stdio is what you will use day to day. Streamable HTTP matters more when you are building production systems or your organization hosts shared MCP servers.


Why This Matters Right Now

I want to explain why the timing is significant because MCP did not become important gradually. It reached a tipping point.

Anthropic open-sourced MCP in November 2024. Within 48 hours, the spec repo had thousands of GitHub stars and developers started building community servers. By early 2025, OpenAI had adopted it for their Agents SDK and ChatGPT desktop app. Google followed with Gemini support. Microsoft integrated it into Copilot and VS Code natively.

Then in December 2025, something bigger happened. Anthropic donated MCP to the newly created Agentic AI Foundation under the Linux Foundation. The co-founders were Anthropic, Block, and OpenAI. The platinum members include Amazon Web Services, Bloomberg, Cloudflare, Google, and Microsoft.

That is not just adoption. That is every major AI company agreeing on a shared standard and putting it under neutral governance. It is the kind of industry alignment that usually takes years and happens in months here.

The foundation also houses two other projects worth knowing about. Goose is Block’s open-source AI agent framework built on MCP. And AGENTS.md is OpenAI’s standard for giving AI coding agents project-specific guidance (similar to CLAUDE.md if you use Claude Code).

The practical consequence: if you build an MCP server today, it works with essentially every major AI tool on the market. That was not true a year ago. It is true now.


Setting Up MCP: The Practical Part

Let me walk through how to actually add MCP servers to the tools you probably use.

Claude Code

Claude Code has the cleanest MCP setup of any tool I have used.

# Add a remote MCP server (HTTP transport)
claude mcp add notion --transport http https://mcp.notion.com/mcp

# Add a local MCP server (stdio transport)
claude mcp add brave-search -- npx -y @anthropic/mcp-server-brave-search

# Add with environment variables
claude mcp add github --transport stdio \
  --env GITHUB_PERSONAL_ACCESS_TOKEN=ghp_xxx \
  -- npx -y @modelcontextprotocol/server-github

# Add with auth header for remote servers
claude mcp add my-api --transport http https://api.example.com/mcp \
  --header "Authorization: Bearer your-token"

Servers are scoped to your project by default and stored in .claude/settings.json. Add --scope user to make a server available across all projects.

Once connected, use the /mcp slash command during a session to see your active servers, their status, and every tool they expose.

Cursor

Cursor uses a JSON config file. The format is similar to Claude Desktop.

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxx"
      }
    },
    "remote-server": {
      "transport": "sse",
      "url": "http://localhost:3001/sse"
    }
  }
}

VS Code

VS Code uses a slightly different format in .vscode/mcp.json, which is nice because you can commit it to your repo and share the setup with your team.

{
  "mcp": {
    "servers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "./src"]
      }
    }
  }
}

The important thing to understand is that the same MCP server works across all three tools. You install one server and configure it in each client. The config file format differs but the server is identical.


MCP Servers Worth Installing

There are over 5,000 MCP servers in the ecosystem now. Here are the ones I actually use and find valuable.

Filesystem gives the AI controlled access to read and write files on your system with configurable access boundaries. Useful when you want the AI to work with files outside your current project directory.

GitHub provides full repository management. The AI can create issues, read pull requests, manage files, and interact with your repos through conversation. I use this constantly for triaging issues and drafting PR descriptions.

PostgreSQL and SQLite let the AI query your databases directly. It can inspect schemas, run read queries, and help you understand your data without you having to copy-paste query results into the chat.

Slack gives the AI access to channel history, messaging, and user management. Useful for searching past conversations for context or drafting messages.

Brave Search adds web search capabilities so the AI can look up current information during your conversation.

Memory provides a knowledge graph that persists across sessions. The AI can store and retrieve information, building up context about your project over time.

For the full directory, the community maintains registries at mcp.so and mcpservers.org. Browse by category, check the star count and maintenance status, and read the security notes before installing anything from an unknown author.


The Three-Layer Protocol Stack

This is the part most articles skip, and it is where things get genuinely interesting for the future of AI development.

MCP is one layer in what is becoming a three-layer architecture for agentic AI.

Layer 1: MCP (Agent to Tool). This is what we have been discussing. A single AI agent connects to tools and data sources through MCP servers. It handles resource access, tool execution, and context delivery.

Layer 2: A2A, Agent to Agent. This is Google’s protocol, launched in April 2025, that handles communication between multiple AI agents. If MCP is how an agent talks to tools, A2A is how agents talk to each other. An orchestrator agent can delegate a research task to a specialized research agent, which can delegate a data query to an analytics agent. Each agent maintains its own MCP connections but coordinates with peers through A2A.

A2A uses a discovery mechanism called Agent Cards, hosted at /.well-known/agent.json, so agents can find and understand each other’s capabilities automatically.

Layer 3: WebMCP (Agent to Web). This is an emerging proposed standard that lets websites declare their capabilities as structured tools that AI agents can call directly in the browser. Instead of an agent scraping a travel site’s UI to book a flight, the site publishes an MCP-compatible interface that the agent calls programmatically. Chrome 146 Canary shipped with built-in WebMCP support in February 2026.

The analogy to TCP/IP is deliberate. Each layer has clear responsibilities, can evolve independently, and protocols at each layer can be swapped without breaking the others. The flow goes: user intent captured at the UI layer, A2A delegates between agents, MCP executes tool calls, and responses render back up through the stack.

We are still early in this stack. A2A is at version 0.3 and WebMCP is in preview. But the architectural direction is clear, and understanding it now gives you a significant advantage in building AI-powered systems that will age well.


Security: The Part Nobody Wants to Talk About

I need to be direct about this because most MCP articles hand-wave over security with a sentence about “following best practices.” The reality is more concerning.

Research has found that 43% of MCP servers contain command injection vulnerabilities that could enable remote code execution. About 5% of open-source MCP servers have been seeded with tool poisoning attacks where manipulated tool descriptions trick the AI into unsafe actions. And roughly 22% of servers allow file system access outside their intended boundaries.

Here are the specific attack categories you need to understand.

Tool poisoning is when an attacker crafts a malicious MCP server (or compromises an existing one) with tool descriptions designed to manipulate the AI’s behavior. The tool looks legitimate but its metadata contains instructions that redirect the AI’s actions.

Supply chain attacks target the MCP server distribution chain. The most notable example was the 2025 Postmark MCP breach where hackers backdoored an npm package so that compromised MCP servers would BCC every outgoing email to the attackers. This is real, not theoretical.

Rug pull attacks are when a server changes its behavior after initial trust is established. The tool definitions shift between calls, and actions you approved earlier now do something different.

The CVE to know about: CVE-2025-6514 was a critical command injection bug in mcp-remote where malicious servers could send crafted authorization endpoints that achieved remote code execution on the client machine.

What to Do About It

The MCP spec has evolved its security model significantly. OAuth 2.1 with mandatory PKCE was added in March 2025. The June 2025 update formally separated MCP servers from authorization servers and required Protected Resource Metadata (RFC 9728).

But spec-level security only helps if you practice it.

Audit before you install. Check the source code, star count, maintenance history, and author reputation of any community MCP server. The same caution you apply to npm packages applies here, arguably more because MCP servers have access to your AI agent’s execution context.

Enforce least privilege. Give each MCP server access to only what it needs. The filesystem server should only see the directories you specify. The database server should use a read-only connection unless writes are genuinely needed.

Use human-in-the-loop for sensitive operations. Tools that can send emails, create issues, modify files, or execute code should require your approval before the AI acts. Most MCP clients support approval workflows. Use them.

Watch for unexpected behavior. If an MCP server starts suggesting actions you did not ask about, asking for permissions it should not need, or behaving differently than it did before, disconnect it and investigate.

The security landscape for MCP is improving fast, but right now it resembles the early npm ecosystem. Useful, powerful, and requiring a healthy dose of skepticism about what you install.


Building Your Own MCP Server

If you want to expose your own tools or services to AI agents, building an MCP server is surprisingly straightforward.

Here is a minimal example in TypeScript.

import { McpServer, ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'my-project-server',
  version: '1.0.0',
});

// Expose a tool
server.tool(
  'get_project_stats',
  'Returns current project statistics',
  { project: z.string().describe('Project name') },
  async ({ project }) => {
    const stats = await fetchProjectStats(project);
    return {
      content: [{ type: 'text', text: JSON.stringify(stats, null, 2) }],
    };
  }
);

// Expose a resource
server.resource(
  'project-readme',
  new ResourceTemplate('project://{name}/readme', { list: undefined }),
  async (uri, { name }) => ({
    contents: [{ uri: uri.href, text: await readReadme(name) }],
  })
);

// Connect via stdio
const transport = new StdioServerTransport();
await server.connect(transport);

The SDK handles all the protocol negotiation, capability advertisement, and message routing. You define your tools and resources, and the SDK does the rest.

SDKs are available in TypeScript, Python, Java, Kotlin, C#, Go, Rust, Ruby, PHP, and Swift. The TypeScript and Python SDKs are the most mature and best documented.


Where This Is Going

The MCP specification follows a regular release cycle. The November 2025 release added async operations, statelessness support, and server identity. The next major release is planned for around June 2026, focusing on transport scalability, richer agent communication primitives, and enterprise readiness features.

Gartner predicts that 40% of enterprise applications will include task-specific AI agents by end of 2026, up from under 5% in 2025. MCP is the connective tissue that makes those agents useful. An AI agent without tool access is a chatbot. An AI agent with MCP connections to your actual infrastructure is a teammate.

The developers who understand this protocol now will have a significant head start. Not because MCP is hard to learn (it is not), but because the architectural patterns you build around it compound over time. The MCP servers you set up, the workflows you design, and the custom integrations you create all make the AI more capable in your specific environment.

If you have been using AI tools by pasting context into chat windows, MCP is the upgrade that changes everything about that workflow. Set up a couple of servers, see the difference, and you will not go back.


Getting Started Today

If you want to try MCP right now, here is the path of least resistance.

  1. Pick one tool you already use (Claude Code, Cursor, or VS Code)
  2. Add the GitHub MCP server with a personal access token
  3. Start a conversation and ask the AI to look at your recent issues or PRs
  4. Notice how different the experience feels when the AI has direct access versus when you paste a link

That is the moment it clicks. Not reading about MCP, but feeling the difference between an AI that can see your tools and one that cannot.

From there, add servers for the services you use most. Database, Slack, filesystem, whatever fits your workflow. Each one makes the AI more capable and your conversations more productive.

The MCP specification is the authoritative reference. The MCP server registry has the official and community servers. And the SDKs have solid getting-started guides for building your own.