In November 2024, Anthropic released the Model Context Protocol as an open specification. The pitch was simple: a standard way for AI agents to discover and use tools, so developers wouldn’t have to build custom integrations for each one.

Eighteen months later, MCP is the dominant protocol for AI tool connectivity. Here’s how it got here, what works, what’s still hard, and where it’s heading.

The Timeline

November 2024. Anthropic publishes the MCP specification and open-sources the TypeScript SDK. The protocol supports tool discovery, invocation, and resource access over stdio and HTTP with server-sent events. Early adoption is mostly within the Anthropic ecosystem: Claude Desktop, Claude Code, and a handful of third-party tools.

March 2025. OpenAI announces MCP support in the Agents SDK. This is the turning point. With both major commercial LLM providers backing the same protocol, the “which standard will win?” question is settled before it starts.

Mid-2025. Google adds MCP support to the Gemini platform. Microsoft integrates it into Copilot Studio. The major IDE tools (Cursor, Windsurf, VS Code with Copilot) add MCP client capabilities. MCP server count on the community registry passes 10,000.

Late 2025. The MCP TypeScript SDK crosses 50 million monthly npm downloads. The protocol gets a formal governance structure with contributors from Anthropic, OpenAI, Google, and independent developers.

Early 2026. Monthly SDK downloads reach roughly 97 million. The registry lists over 20,000 MCP servers. MCP is the de facto standard for AI tool connectivity.

What Works Well

Universal Client Support

Every major AI coding tool supports MCP: Claude Code, Cursor, Windsurf, VS Code Copilot, OpenAI Codex CLI, OpenClaw. If you build an MCP server, it works everywhere. This was the core promise, and it delivered.

Simple Server Development

Building an MCP server is not hard. The SDK handles the protocol layer. You define your tools (name, description, input schema), implement the handlers, and the SDK manages discovery, validation, and transport. A basic MCP server can be built in an afternoon.
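To make that concrete, here is a hand-rolled sketch of the core machinery the SDK manages for you: a registry of tool definitions and a dispatcher for the two tool methods the protocol defines, `tools/list` and `tools/call`. The `echo` tool and all handler code are invented for illustration; the real SDK additionally handles transport, schema validation, and protocol negotiation.

```typescript
// Sketch of the tool registry and dispatch an MCP SDK handles for you.
// The "echo" tool is hypothetical; method names come from the MCP spec.

type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

interface ToolDef {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema for the tool's parameters
  handler: ToolHandler;
}

const tools = new Map<string, ToolDef>();

function registerTool(def: ToolDef): void {
  tools.set(def.name, def);
}

// Dispatch the two core tool methods from the MCP spec.
async function dispatch(method: string, params: any): Promise<unknown> {
  if (method === "tools/list") {
    // Advertise name, description, and schema -- but not the handler.
    return Array.from(tools.values()).map(
      ({ name, description, inputSchema }) => ({ name, description, inputSchema })
    );
  }
  if (method === "tools/call") {
    const tool = tools.get(params.name);
    if (!tool) throw new Error(`unknown tool: ${params.name}`);
    const text = await tool.handler(params.arguments ?? {});
    return { content: [{ type: "text", text }] };
  }
  throw new Error(`unsupported method: ${method}`);
}

// Example registration: a hypothetical echo tool.
registerTool({
  name: "echo",
  description: "Echo the input back to the caller",
  inputSchema: {
    type: "object",
    properties: { text: { type: "string" } },
    required: ["text"],
  },
  handler: async (args) => String(args.text),
});
```

With the SDK, the registration call is all you write; everything above the example is what "the SDK handles the protocol layer" means in practice.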

Standardized Tool Descriptions

The JSON Schema-based tool descriptions are good enough for LLMs to work with. Models can read the tool name, description, and parameter schema and make reasonable decisions about when and how to call the tool. This was the hardest part to get right, and it works.
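As an illustration (the `send_email` tool is made up), here is the kind of description a client receives, plus a toy check of the schema's `required` fields. Real clients run a full JSON Schema validator; this only shows why the format is enough for a model to reason about a call.

```typescript
// A tool description in the JSON Schema style MCP uses.
// The "send_email" tool itself is hypothetical.
const sendEmailTool = {
  name: "send_email",
  description: "Send an email to a single recipient",
  inputSchema: {
    type: "object",
    properties: {
      to: { type: "string", description: "Recipient address" },
      subject: { type: "string" },
      body: { type: "string" },
    },
    required: ["to", "body"],
  },
};

// Toy check: which required parameters is a candidate call missing?
// (Real clients use a complete JSON Schema validator instead.)
function missingRequired(
  schema: { required?: string[] },
  args: Record<string, unknown>
): string[] {
  return (schema.required ?? []).filter((key) => !(key in args));
}
```

The name and description tell the model when to call the tool; the schema tells it how.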

Growing Ecosystem

The ecosystem is large and diverse. MCP servers exist for databases (PostgreSQL, MongoDB, SQLite), file systems, web browsers, communication tools (Slack, email), development tools (GitHub, Jira, Linear), data APIs, and more. The long tail is filling in fast.

What’s Still Hard

Authentication

MCP has no built-in authentication standard. Each server handles auth its own way. Some use API keys in environment variables. Others use OAuth. Some require no auth at all.

For developers connecting to one or two servers, this is manageable. For agents that need to connect to many servers, the auth fragmentation is a real problem. You end up managing a different credential for each server, with different rotation policies, different scoping models, and different failure modes.

The MCP community has proposed an auth extension, but it’s not yet part of the core spec.
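A sketch of what that fragmentation looks like from the agent-host side, with invented server names and environment variables: each server gets its own credential-resolution code path.

```typescript
// Hypothetical per-server credential juggling an agent host does today.
// Server names, env vars, and endpoints are all invented.

type AuthConfig =
  | { kind: "none" }
  | { kind: "api_key"; envVar: string }
  | { kind: "oauth"; tokenEndpoint: string };

const serverAuth: Record<string, AuthConfig> = {
  filesystem: { kind: "none" },
  github: { kind: "api_key", envVar: "GITHUB_TOKEN" },
  crm: { kind: "oauth", tokenEndpoint: "https://example.com/oauth/token" },
};

// Build the Authorization header (if any) for one server.
function authHeader(
  server: string,
  env: Record<string, string | undefined>
): string | null {
  const cfg = serverAuth[server];
  if (!cfg || cfg.kind === "none") return null;
  if (cfg.kind === "api_key") {
    const key = env[cfg.envVar];
    if (!key) throw new Error(`missing ${cfg.envVar} for ${server}`);
    return `Bearer ${key}`;
  }
  // OAuth needs token exchange, refresh, and scope handling -- a whole
  // separate code path per server. That is the fragmentation.
  throw new Error(`OAuth flow for ${server} not implemented in this sketch`);
}
```

Three servers, three failure modes: one needs nothing, one breaks when an env var is unset, and one needs a full OAuth flow.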

Discovery

How does an agent find the right MCP server for a task? Right now, the answer is mostly manual: the developer configures which servers the agent can use. There’s no standard way for an agent to search for MCP servers at runtime, compare options, or choose the best one for a given task.

Community registries (like the one at modelcontextprotocol.io) help humans find servers, but they aren’t designed for programmatic agent use. True runtime discovery, where an agent searches for “I need a tool that sends email” and gets back a ranked list of options, is still an unsolved problem.
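A toy version of what runtime discovery might look like: naive keyword ranking over a local snapshot of a registry. The entries and scoring are invented; real federated discovery would search live registries and weigh trust and reliability signals, not just keyword overlap.

```typescript
// Naive keyword ranking over an invented registry snapshot -- a stand-in
// for the runtime discovery MCP does not yet standardize.

interface RegistryEntry {
  name: string;
  description: string;
}

const registry: RegistryEntry[] = [
  { name: "email-server", description: "Send and search email via SMTP and IMAP" },
  { name: "postgres-server", description: "Query a PostgreSQL database" },
  { name: "slack-server", description: "Send messages to Slack channels" },
];

// Score each server by how many query words its description contains,
// then return matches ranked best-first.
function findServers(query: string): RegistryEntry[] {
  const words = query.toLowerCase().split(/\s+/);
  return registry
    .map((e) => ({
      entry: e,
      score: words.filter((w) => e.description.toLowerCase().includes(w)).length,
    }))
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((s) => s.entry);
}
```

A query like "send email" ranks the email server above the Slack server (which only matches "send"), which is the kind of ranked-list answer true discovery would need to return.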

Hosting and Operations

Running MCP servers in production requires infrastructure. You need to deploy, monitor, scale, and maintain each server. For organizations using many MCP tools, this operational overhead adds up.

This is where hosted platforms add value. Rather than running 15 MCP servers, you connect to a single platform like AgentPatch that runs the tools for you and exposes them through one MCP endpoint. The trade-off is dependency on the platform, but the reduction in operational complexity is significant.

Pricing and Billing

MCP has no concept of cost. A tool call might be free, or it might cost money. The protocol doesn’t include pricing information in tool descriptions, and there’s no standard way to handle billing.

In practice, most free MCP servers are backed by free APIs or local resources. Paid tools tend to live behind platforms that handle billing separately. But the lack of pricing in the protocol means agents can’t reason about cost when choosing between tools.
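If tool descriptions did carry pricing, cost-aware selection would be straightforward. The `pricing` field and both tools below are hypothetical; nothing like this exists in the MCP spec today.

```typescript
// Hypothetical cost-aware tool choice, assuming an invented "pricing"
// metadata field that the MCP spec does not currently define.

interface PricedTool {
  name: string;
  pricing: { perCallUsd: number }; // invented metadata field
}

const candidates: PricedTool[] = [
  { name: "geocode-free", pricing: { perCallUsd: 0 } },
  { name: "geocode-premium", pricing: { perCallUsd: 0.002 } },
];

// Pick the cheapest tool that fits a per-call budget, or null if none does.
function cheapestWithinBudget(
  tools: PricedTool[],
  budgetUsd: number
): PricedTool | null {
  const affordable = tools.filter((t) => t.pricing.perCallUsd <= budgetUsd);
  if (affordable.length === 0) return null;
  return affordable.reduce((a, b) =>
    a.pricing.perCallUsd <= b.pricing.perCallUsd ? a : b
  );
}
```

Without pricing in the protocol, an agent cannot run even this trivial comparison; the information simply is not there.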

Quality and Trust

Anyone can publish an MCP server. There’s no review process, no quality standards, and no trust framework. An agent connecting to an unknown MCP server has no way to verify that it’s safe, reliable, or honest about what it does.

This is a real security concern. A malicious MCP server could return crafted responses designed to manipulate the agent’s behavior. The community is discussing trust and verification mechanisms, but nothing is standardized yet.

Where MCP Is Heading

Auth Standardization

The most requested feature. Expect a standardized auth layer to land during 2026, likely supporting both API keys and OAuth as first-class patterns in the protocol.

Richer Tool Metadata

Tool descriptions will get richer: pricing information, reliability metrics, latency estimates, and trust scores. That extra metadata gives agents a basis for better-informed tool selection.

Streamable HTTP Transport

The current HTTP+SSE transport is being updated to a more flexible streamable HTTP model. This simplifies deployment (no persistent connections required) and makes it easier to run MCP servers behind standard load balancers and CDNs.

Federated Discovery

The ability for agents to discover MCP servers at runtime, search across registries, and evaluate options programmatically. This is the piece that turns MCP from “tools I configured” into “tools the agent finds on its own.”

The Big Picture

MCP solved the fragmentation problem. Before MCP, every AI tool integration was a one-off. Now, there’s a standard. The remaining challenges (auth, discovery, pricing, trust) are real, but they’re the kind of problems that get solved once a protocol has critical mass. MCP has that mass.

If you’re building tools for AI agents, MCP is the protocol to target. If you’re building agents, MCP gives you access to the broadest ecosystem of tools available. The details will keep evolving, but the standard itself is settled.