If you’ve spent any time on AI Twitter or in developer circles lately, you’ve probably noticed the buzz around MCP (Model Context Protocol). It’s being positioned as the next big thing: a universal adapter for agents, a new integration paradigm, a breakthrough in interoperability.
While those ideas are exciting, they might also cause some confusion. So here’s a grounded take on what MCP actually is, what it isn’t, and when it genuinely makes sense to use it.
What MCP is: a universal connector
The best mental model I’ve found is simple: MCP is the USB-C of the AI ecosystem.
It gives you a standardised way to expose capabilities to different AI clients without forcing those clients to rebuild every integration from scratch. Plug-and-play. Predictable. Convenient.
For end users, this feels incredibly tangible:
click to activate a server → instantly expand what your agent can do.
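To make that tangible: with the official Python SDK (the `mcp` package), a client can attach to any server and discover its capabilities through one generic call. A minimal sketch, assuming a hypothetical `my_server.py` as the server command:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch an MCP server as a subprocess and talk to it over stdio.
    # "my_server.py" is a placeholder for any MCP server entry point.
    params = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One generic call and the client knows every capability
            # the server exposes: no bespoke integration code needed.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```

That discovery step is the whole “plug-and-play” promise in a nutshell: the client never needs server-specific glue code.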
What MCP isn’t: the obvious default for every AI product
A misconception we often see is that MCP should replace custom tool integrations in every agentic system. That’s not true.
If you:
- own the chat interface,
- own the agent orchestration, and
- own the tools or APIs behind it,
… then MCP adds little to no value.
The real strength of MCP only appears when many external parties need to consume your capabilities. And the reality is: most products aren’t in that situation (yet).
So why is everyone so hyped?
People are introduced to the value of MCP by plugging widely adopted tools such as Notion, Google Calendar, and Jira into their workflows: tools that already expose MCP servers.
In that universe, MCP feels obvious.
But in the world of building AI-powered products, the picture is very different.
Where MCP does shine
There is one scenario where MCP becomes genuinely powerful:
Companies that don’t want to build AI agents, but do want to offer their capabilities to agents.
Think of:
- A travel platform exposing hotel-search capabilities
- A logistics company exposing their services and options
These companies don’t need to build full AI products.
All they need is a capability layer that any agent can consume.
In other words: don’t build the AI, fuel the AI.
For many organisations, this could be a smarter strategic play than building yet another custom chatbot. Meet users where they already are (ChatGPT, Claude, Cursor, …) and stick to what your company has proven to be its core strength.
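To make the travel example concrete, here’s a minimal sketch of such a capability layer using the official Python SDK’s FastMCP helper. The tool name, parameters, and inventory are invented for illustration; a real server would wrap your existing search backend:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hotel-search")

@mcp.tool()
def search_hotels(city: str, max_price: float) -> list[dict]:
    """Return hotels in a city under a nightly price ceiling."""
    # Hard-coded stand-in for your real search backend.
    inventory = [
        {"name": "Hotel Astoria", "city": "Ghent", "price": 120.0},
        {"name": "Canal View", "city": "Ghent", "price": 95.0},
    ]
    return [h for h in inventory if h["city"] == city and h["price"] <= max_price]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; any MCP client can connect
```

Notice what’s absent: no chat UI, no orchestration, no agent. Just one well-described capability that any agent can discover and call.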
The real limitations you don’t see in the hype
Let’s talk about the parts of MCP that keep it from becoming the default integration paradigm for agents.
1. Authentication & authorisation are extremely shallow
Today, an MCP client authenticates once and then gains access to everything the server exposes. There are no user-level permissions, no granularity, no fine-grained control.
For enterprise scenarios, that’s a non-starter.
2. Enormous context overhead
This is arguably the biggest practical issue.
Connecting an MCP server means injecting all its tools into your system prompt.
Twenty servers with ten tools each? That’s 200 tools and their documentation dumped into context.
This leads to:
- unnecessary token usage
- slower inference
- degraded answer quality
The more context you inject, the worse most models perform, yet MCP forces an all-or-nothing context load.
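A back-of-envelope calculation shows why this hurts. The per-tool figure below is an assumption (schemas vary a lot in practice), but the order of magnitude is the point:

```python
# Back-of-envelope only: ~150 tokens per tool schema is an assumed
# average (name + description + JSON schema), not a measured number.
servers = 20
tools_per_server = 10
tokens_per_tool_schema = 150

overhead = servers * tools_per_server * tokens_per_tool_schema
print(f"{overhead:,} tokens of tool definitions before the user says a word")
# -> 30,000 tokens injected into every single conversation
```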
3. No functional separation of concerns
At Nimble, we learned that the best way to separate an AI agent’s concerns and scope its tool access is not by technical domain, but by specific intents and problems-to-be-solved.
This is not how MCP operates: instead, it gives full, general-purpose access to your platform, with no guidance on how your specific use case should actually consume those resources and tools.
What about Agent-to-Agent?
Some discussions go beyond MCP and touch on the proposed Agent-to-Agent (A2A) protocols as a solution to some of these problems: instead of loading all MCP servers upfront, have an agent dynamically select the right one at runtime.
In theory, this allows for full interoperability and would reduce upfront context bloat.
In practice, this concept is still very much experimental.
You’re adding another abstraction layer, another failure point, and another agent whose reasoning you need to trust. We’re far from seeing stable, production-grade implementations of this technology.
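For intuition only, here’s a deliberately naive sketch of the idea: route each request to at most one MCP server instead of loading them all. None of these names are a standard API, and real A2A proposals involve far more than keyword matching:

```python
# Purely illustrative: a toy router that picks one server config per
# request. Server commands and keyword lists are invented examples.
SERVER_REGISTRY = {
    "travel":    {"command": "python", "args": ["hotel_server.py"]},
    "logistics": {"command": "python", "args": ["shipping_server.py"]},
}

KEYWORDS = {
    "travel": ["hotel", "flight", "trip"],
    "logistics": ["shipment", "parcel", "delivery"],
}

def pick_server(user_message: str) -> dict | None:
    """Select at most one server based on crude intent matching."""
    text = user_message.lower()
    for domain, words in KEYWORDS.items():
        if any(w in text for w in words):
            return SERVER_REGISTRY[domain]
    return None  # no match -> no server loaded, no context overhead

print(pick_server("Find me a hotel in Ghent under 100 euros"))
```

Even this toy version exposes the problem: the router itself is now a component whose judgement you have to trust.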
Where MCP will likely evolve
Despite the limitations, MCP has a promising future. The pieces that need to mature are clear:
- granular access control
- context pruning
- selective tool exposure
- smarter server configuration patterns
We expect the ecosystem to move toward a distributed landscape of domain-specific capability servers, accessible to any major model or client.
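Some of that pruning is already possible on the client side today. A sketch, assuming a hypothetical `hotel_server.py` and an allowlist of one tool; the point is that filtering happens before anything reaches the system prompt:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

ALLOWED = {"search_hotels"}  # the one intent this use case needs

async def load_tools() -> list:
    # "hotel_server.py" is a placeholder for your MCP server.
    params = StdioServerParameters(command="python", args=["hotel_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            # Prune before anything is injected into the model's context.
            return [t for t in result.tools if t.name in ALLOWED]

print([t.name for t in asyncio.run(load_tools())])
```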
What we’re exploring at Nimble
We’ve already been experimenting with ChatGPT apps, which are effectively a thin layer on top of MCP. It’s a compelling model: instead of building a UI, you design an interaction model that lives directly inside ChatGPT, Claude, Cursor, or Claude Desktop.
For some products, that may become the more natural distribution channel than a standalone interface.
Check out our work on ChatGPT apps and get started with our Figma UI Kit!
Final thoughts
MCP is a meaningful step toward interoperability in the agentic era.
But it’s not a universal integration layer, and certainly not something every AI product needs.
Used in the right context, particularly by companies that want to expose capabilities rather than build agents, MCP can be transformative. But adopting it blindly because it’s the hype of the moment is a fast track to unnecessary complexity.
The challenge now is knowing when not to use it.
