MCP
The Model Context Protocol is the open standard that lets AI models connect to external tools and services. Instead of building a custom integration for every tool, you build against one protocol — and every tool that speaks MCP becomes available to every model that speaks MCP. It's the USB port for AI.
What MCP Is
MCP is a protocol specification, originally developed by Anthropic, that standardises how AI models interact with external tools, data sources, and services. It turns the N-times-M integration problem into an N-plus-M one.
Before MCP, every combination of AI model and external tool required its own integration code. If you had five models and ten tools, you needed fifty integrations. MCP changes that equation. Each model implements the MCP client spec once. Each tool implements the MCP server spec once. Now any model can talk to any tool through the same protocol.
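The arithmetic behind that claim is worth making concrete. A quick sketch (the model and tool counts are illustrative):

```python
def integrations_without_mcp(models: int, tools: int) -> int:
    # Every model-tool pair needs its own bespoke integration: N x M.
    return models * tools

def integrations_with_mcp(models: int, tools: int) -> int:
    # Each model implements the MCP client spec once, each tool
    # implements the MCP server spec once: N + M.
    return models + tools

print(integrations_without_mcp(5, 10))  # 50 bespoke integrations
print(integrations_with_mcp(5, 10))     # 15 protocol implementations
```

The gap widens as the ecosystem grows: at twenty models and a hundred tools, it's 2,000 integrations versus 120.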
I think of MCP as the practical layer that makes agentic AI actually useful. A model on its own can reason and generate text. But the moment you want it to check a database, send an email, query an API, or read a file — it needs tools. MCP is how those tools get connected in a way that's standardised, discoverable, and portable across different AI systems.
The protocol defines three core primitives: tools (actions the model can invoke), resources (data the model can read), and prompts (reusable prompt templates the server can offer). Together, these cover the vast majority of what an AI system needs from the outside world.
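To make the three primitives tangible, here is a toy sketch in plain Python. This is deliberately not the official MCP SDK (real servers use an SDK such as the `mcp` Python package and speak JSON-RPC); it just shows the shape of what a server exposes:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToyMCPServer:
    """Illustrative stand-in for an MCP server exposing the three primitives."""
    tools: dict[str, Callable] = field(default_factory=dict)  # actions the model can invoke
    resources: dict[str, str] = field(default_factory=dict)   # data the model can read
    prompts: dict[str, str] = field(default_factory=dict)     # reusable prompt templates

    def tool(self, name: str):
        # Decorator that registers a function as an invokable tool.
        def register(fn: Callable) -> Callable:
            self.tools[name] = fn
            return fn
        return register

server = ToyMCPServer()

@server.tool("get_weather")
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # a real tool would call an external API

server.resources["file://notes.txt"] = "Quarterly planning notes"
server.prompts["summarise"] = "Summarise the following in three bullets: {text}"

print(server.tools["get_weather"]("Oslo"))  # Sunny in Oslo
```

The division of labour mirrors the prose above: tools act, resources inform, prompts template.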
Why MCP Matters
The alternative to a standard protocol is a tangle of bespoke integrations. I've seen that tangle, and it doesn't scale.
Interoperability
Build a tool once and it works with any MCP-compatible model or agent framework. Switch from one AI provider to another without rewriting your integrations. The protocol abstracts away the model layer so your tool investments stay portable.
Composability
MCP servers are modular. You can mix and match them — a Google Workspace server alongside a database server alongside a custom internal tool server. Each one is independent, and the agent discovers what's available at runtime through the protocol's built-in capability negotiation.
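The discovery step can be sketched in a few lines. This is illustrative rather than the real wire protocol (actual MCP clients send JSON-RPC `tools/list` requests to each connected server), but the merge logic is the same idea:

```python
# Each "server" advertises its tools; the agent merges them at runtime
# instead of hard-coding what is available. Server and tool names are
# hypothetical examples.
servers = {
    "workspace": ["send_email", "create_event"],
    "database":  ["run_query"],
    "internal":  ["lookup_customer"],
}

def discover_tools(servers: dict[str, list[str]]) -> dict[str, str]:
    # Build a catalogue mapping each tool name to the server that provides it.
    catalogue: dict[str, str] = {}
    for server_name, tools in servers.items():
        for tool in tools:
            catalogue[tool] = server_name
    return catalogue

catalogue = discover_tools(servers)
print(catalogue["run_query"])  # database
```

Adding a fourth server means adding one entry, not rewriting the agent.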
Ecosystem Momentum
The MCP ecosystem is growing fast. There are already hundreds of open-source MCP servers covering everything from cloud storage to CRMs to developer tools. That means less custom code for you and faster time to a working system.
Security Boundaries
MCP formalises the boundary between the AI model and external systems. Every tool call goes through the protocol, which means you have a single point where you can enforce approval gates, audit logging, rate limiting, and access controls. One protocol, one enforcement layer.
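Because every call funnels through one layer, the controls can live in one wrapper. A minimal sketch of that enforcement point — the policy set and the `dispatch` stub are assumptions for illustration, standing in for your real policy store and MCP client:

```python
import time

AUDIT_LOG: list[dict] = []
REQUIRES_APPROVAL = {"delete_records", "send_email"}  # assumed policy

def dispatch(tool_name: str, args: dict) -> str:
    # Stub standing in for the real MCP server call.
    return f"executed {tool_name}"

def gated_call(tool_name: str, args: dict, approved: bool = False) -> str:
    """Single enforcement point: approval gate plus audit log for every call."""
    if tool_name in REQUIRES_APPROVAL and not approved:
        AUDIT_LOG.append({"tool": tool_name, "outcome": "blocked", "ts": time.time()})
        raise PermissionError(f"{tool_name} requires human approval")
    AUDIT_LOG.append({"tool": tool_name, "outcome": "allowed", "ts": time.time()})
    return dispatch(tool_name, args)

print(gated_call("run_query", {"sql": "SELECT 1"}))  # executed run_query
```

Rate limiting and scope checks slot into the same function — that is the point of having one choke point.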
Three Types of MCP
Not all MCP servers are the same. Where they run and who they serve determines their security profile, performance characteristics, and governance requirements.
Administrative MCP
Internal admin operations — file management, database maintenance, system configuration. High privilege, tightly controlled, typically operator-facing.
Local MCP
Servers running on your machine or local network. Low latency, no data leaves your environment. Development tools, local files, on-premises databases.
Remote MCP
Servers accessed over the network — third-party APIs, cloud services, SaaS integrations. Broader reach, but with authentication, latency, and trust considerations.
Pairs With
MCP is the connectivity layer. Here's how it connects to the other building blocks in a production AI system.
Agents
Agents decide what to do. MCP gives them the tools to do it. An agent without MCP is a brain without hands — it can reason and plan but can't act on anything outside its own context window. MCP is what turns a language model into a capable actor in your environment.
Safety
Every MCP tool call is a potential side effect — a database write, an email sent, a file modified. Safety layers wrap around MCP to enforce approval gates, scope constraints, and audit trails. The more tools you expose, the more important these controls become.
Context
MCP servers can serve as context providers. A RAG system might be exposed as an MCP resource. A knowledge base might surface through MCP's resource primitive. The line between "tools the agent calls" and "context the agent reads" often runs through MCP.
Observability
Because all external interactions flow through the MCP protocol, you get a natural instrumentation point. Every tool call, every resource read, every parameter — all available for logging, monitoring, and debugging without invasive code changes.
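Instrumenting that boundary can be as light as a decorator around your call dispatcher. A sketch, assuming a `call_tool` function you control (the stub body stands in for a real MCP client call):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp.telemetry")

def instrumented(fn):
    """Log every tool call's name, parameters, duration, and outcome."""
    @functools.wraps(fn)
    def wrapper(tool_name, **params):
        start = time.perf_counter()
        try:
            result = fn(tool_name, **params)
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("tool=%s params=%s ms=%.1f ok", tool_name, params, elapsed_ms)
            return result
        except Exception:
            log.exception("tool=%s params=%s failed", tool_name, params)
            raise
    return wrapper

@instrumented
def call_tool(tool_name, **params):
    return {"tool": tool_name, "params": params}  # stub for the real MCP client call

call_tool("read_file", path="notes.txt")
```

No tool code changes; the telemetry lives entirely at the protocol boundary.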
Need help with MCP integration?
I design and build MCP-based tool ecosystems — from selecting the right servers to building custom ones for your internal systems. If you're figuring out how to connect your AI to the rest of your stack, I can help.