Every AI agent that needs to do something useful, like search the web, send an email, or query a database, has to connect to external tools. Until recently, every connection was a custom integration. Each tool had its own API format, its own authentication flow, and its own way of describing what it could do.
The Model Context Protocol (MCP) changes this. It defines a standard interface for how AI agents discover and use tools. Think of it as USB for AI: a single protocol that lets any agent talk to any tool, without writing bespoke integration code for each one.
Where MCP Came From
Anthropic released MCP as an open specification in November 2024. The goal was to solve a problem their own teams kept running into: every time they wanted Claude to use a new tool, someone had to build a connector from scratch. The protocol was designed to be model-agnostic from the start, and that bet paid off. OpenAI announced MCP support in early 2025. Google and Microsoft followed. By mid-2025, MCP had become the de facto standard for connecting AI agents to external capabilities.
The spec is open source and maintained by Anthropic, with contributions from across the industry. As of early 2026, the MCP SDK for TypeScript sees roughly 97 million monthly downloads on npm.
How MCP Works
The protocol has two sides: servers and clients.
An MCP server is any service that exposes tools for AI agents to use. It publishes a list of tools, each with a name, description, and JSON Schema describing the expected input and output. When an agent wants to use a tool, it sends a request to the server with the tool name and input data, and gets back a structured response.
An MCP client is the AI agent (or its runtime). The client connects to one or more MCP servers, discovers what tools are available, and calls them as needed during a conversation or task.
The flow looks like this:
- Connection. The client connects to an MCP server over stdio or HTTP with server-sent events (SSE).
- Discovery. The client calls tools/list to get a catalog of available tools with their schemas.
- Selection. The LLM reads the tool descriptions and decides which tool (if any) to call based on the current task.
- Invocation. The client sends a tools/call request with the tool name and input parameters.
- Response. The server executes the tool and returns a structured result.
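On the wire, MCP messages are JSON-RPC 2.0. The two literals below mirror the discovery and invocation steps above; the request ids and the web_search tool name are illustrative, not fixed by the spec.

```typescript
// Discovery: ask the server for its tool catalog.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Invocation: call one tool by name with structured arguments.
// "web_search" is a hypothetical tool for this sketch.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "web_search",
    arguments: { query: "model context protocol" },
  },
};
```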
This cycle can repeat multiple times within a single agent task. The agent might search the web, read the results, then call a different tool to summarize or act on what it found.
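The repeat cycle can be sketched as a simple loop: ask the model for the next step, execute any tool it requests, and feed the result back. The model and tools below are stand-in stubs, not a real LLM or MCP server.

```typescript
// One model "step" is either a tool request or a final answer.
type ModelStep =
  | { kind: "tool"; name: string; args: Record<string, unknown> }
  | { kind: "done"; answer: string };

function runAgent(
  model: (history: string[]) => ModelStep,
  tools: Record<string, (args: Record<string, unknown>) => string>,
): string {
  const history: string[] = [];
  for (let i = 0; i < 10; i++) { // cap iterations defensively
    const step = model(history);
    if (step.kind === "done") return step.answer;
    const result = tools[step.name](step.args); // the tools/call step
    history.push(`${step.name} -> ${result}`);  // feed the result back
  }
  return "gave up";
}

// Stub model: search first, then summarize what it saw.
const answer = runAgent(
  (history) =>
    history.length === 0
      ? { kind: "tool", name: "search", args: { query: "mcp" } }
      : { kind: "done", answer: `summary of: ${history[0]}` },
  { search: () => "3 results" },
);
```

In a real agent, the model call is an LLM completion and each tool call goes out over an MCP connection, but the control flow is the same.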
What Makes MCP Different from Just Using APIs
You could build all of this with REST APIs. Many people did, before MCP existed. The difference is standardization.
Without MCP, every tool provider defines its own way of listing capabilities, accepting input, and returning output. The agent (or its developer) needs custom code for each tool. Add a new tool, write a new adapter.
With MCP, the contract is fixed. Any tool that implements the MCP server spec can be used by any agent that implements the MCP client spec. No adapter code. No per-tool integration work.
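The fixed contract means a server is, at its core, one dispatcher over two methods. This sketch shows that shape; error handling, input validation, and the echo tool are all simplified or invented for illustration.

```typescript
type Handler = (args: Record<string, unknown>) => unknown;

// Registry of tools this server exposes. "echo" is a toy example.
const registry = new Map<string, { description: string; handler: Handler }>();
registry.set("echo", {
  description: "Return the input text unchanged.",
  handler: (args) => args.text,
});

// One dispatcher covers every tool: no per-tool protocol code.
function handle(request: { method: string; params?: any }): unknown {
  switch (request.method) {
    case "tools/list":
      return [...registry.entries()].map(([name, t]) => ({
        name,
        description: t.description,
      }));
    case "tools/call": {
      const tool = registry.get(request.params.name);
      if (!tool) throw new Error(`unknown tool: ${request.params.name}`);
      return tool.handler(request.params.arguments);
    }
    default:
      throw new Error(`unsupported method: ${request.method}`);
  }
}
```

Adding a tool is one registry.set call; the protocol surface never changes, which is exactly the property that lets any compliant client use any compliant server.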
This is the same pattern that made USB successful. Before USB, every peripheral had its own connector. After USB, you just plugged things in. MCP does the same thing for AI tool access.
Examples of MCP Servers
MCP servers cover a wide range of capabilities:
- File system access. Read and write local files, navigate directories.
- Database queries. Connect to PostgreSQL, SQLite, or other databases and run queries.
- Web search. Search Google, Bing, or other engines and return structured results.
- Code execution. Run Python, JavaScript, or shell commands in a sandboxed environment.
- Communication. Send emails, Slack messages, or other notifications.
Some MCP servers expose a single tool. Others, like AgentPatch, act as aggregators: one MCP connection gives your agent access to dozens of tools (web search, image generation, email, data APIs) through a single server. This reduces the number of MCP connections an agent needs to manage and simplifies authentication, since the agent only needs one API key.
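As a flavor of the first category, here is what a file-system tool's handler might look like. The read_file name and schema are illustrative, not taken from a real server.

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical file-system tool: descriptor plus the handler that
// actually does the work when tools/call arrives.
const readFileTool = {
  name: "read_file",
  description: "Read a UTF-8 text file and return its contents.",
  inputSchema: {
    type: "object",
    properties: { path: { type: "string" } },
    required: ["path"],
  },
  handler: (args: { path: string }): string => readFileSync(args.path, "utf8"),
};

// Demonstration against a temp file.
const p = join(tmpdir(), "mcp-demo.txt");
writeFileSync(p, "hello from a tool");
const contents = readFileTool.handler({ path: p });
```

A production server would also sandbox paths and limit file sizes; the point here is only how thin the layer between the protocol and the underlying capability can be.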
What MCP Doesn’t Solve (Yet)
MCP handles tool discovery and invocation well. A few areas are still maturing:
- Authentication. The spec doesn’t prescribe a single auth model. Some servers use API keys, others use OAuth, and the experience varies.
- Discovery across servers. Finding which MCP servers exist and what they offer is still informal. There’s no central registry that agents can query programmatically.
- Pricing and billing. MCP has no built-in concept of cost. If a tool call costs money, that’s handled outside the protocol.
These gaps are being addressed by the community and by platforms building on top of MCP. But they’re worth knowing about if you’re evaluating MCP for production use.
When to Use MCP
MCP is a good fit when you’re building an AI agent that needs to call external tools, and you want to avoid writing and maintaining custom integrations for each one. It’s especially useful when:
- You want your agent to discover tools at runtime rather than hardcoding them.
- You’re using multiple tools from different providers and want a consistent interface.
- You want to swap tools in and out without changing your agent’s code.
If your agent only ever calls one API, MCP might be more structure than you need. But for agents that use multiple tools, or where the tool set might change over time, the standardization is worth it.
Getting Started
The MCP specification and SDKs are open source. The TypeScript SDK is on npm (@modelcontextprotocol/sdk), and there are implementations in Python, Go, and other languages. Anthropic maintains a list of known MCP servers at modelcontextprotocol.io.
If you want to try connecting an agent to tools via MCP without setting up individual servers, platforms like AgentPatch let you connect once and access 50+ tools through a single MCP endpoint. That’s a quick way to see the protocol in action before building your own servers.