
Every decade or so, a new standard quietly resets how we build software. We’re at one of those moments again, and most organizations are asking the wrong question about it.
Over the last several months, I’ve had a lot of conversations about the Model Context Protocol (MCP). Most of them start the same way: “Is this just another API standard?”
It’s a fair question, but I think it misses the point. If MCP were just another API pattern, it wouldn’t matter much. We’ve seen plenty of those come and go. What makes it genuinely significant is more fundamental: it changes how systems decide what to do next.
We’ve seen this pattern before. Consider HTTP, REST, and MCP. Each shift changed not just the technology, but what teams optimize for and how systems are designed from the ground up.
| HTTP | REST | MCP |
| --- | --- | --- |
| Access to information | Composable services | Context and capability |
## HTTP
HTTP made distributed systems usable. It provided the basic architecture required to move information across a network, introducing the power of connection through hyperlinks.
While HTTP solved the initial hurdle of moving packets between points, it left the language of those interactions entirely up to the developer, leading to a fragmented web of one-off connections.
## REST
This lack of structure eventually forced a shift toward REST, as the industry realized that universal access was useless without a predictable way for services to actually talk to one another.
REST made them composable and gave us the API economy. It’s also worth remembering that the organizations that were slow to adopt REST didn’t collapse overnight. They just spent years in catch-up mode, paying a compounding tax on every new integration while others moved faster with less effort.
## MCP
MCP belongs in that same category of shifts, another fork in the road. This time, the shift impacts data infrastructure.
But it’s not about documents the way HTTP was, or resources the way REST was. This time, it’s about context and capability. Specifically, it’s about how intelligent systems discover what’s available and decide how to act on it. That’s a meaningfully different design center.
Again, the cost of waiting isn’t visible on day one. Teams that deferred the hard architectural decisions ended up rebuilding from scratch under pressure.
## The subtle but important change
Underneath all of this, there is an important shift taking place. In a REST world, everything is pre-wired. You define endpoints, contracts, and flows. Developers decide in advance how systems interact, and those decisions get baked in.
MCP shifts that. Instead of telling a system “here’s the API, call it like this,” you’re enabling something closer to: “here’s my objective, what capabilities are available?”
We’re no longer just building software. We’re building systems that figure things out as they go.
That assumes dynamic environments, discoverable capabilities, and decisions made at runtime. It’s the architectural foundation for anything we’d call genuinely agentic, and it requires a different way of thinking about what your systems need to expose.
## The integration problem we’ve normalized
Most enterprises don’t think of integration as the bottleneck. But it is.
Every AI use case ends up following the same pattern. You stitch together a few APIs, write glue code, handle edge cases, and then do it all over again for the next use case. It works, but it’s slow, expensive, and it doesn’t scale.
I’ve seen teams spend months building integrations that should have been reusable from day one. That’s the real issue MCP addresses. It standardizes how systems expose what they can do. Once that layer is consistent, you stop rebuilding the same connections over and over.
That’s where the value comes from. Not new capabilities. Just less friction.
And in practice, that kind of simplification is what actually drives velocity. That’s the business case, and it’s more compelling than any capability argument.
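A minimal sketch of why a standard capability layer reduces that friction: each system is exposed once, and every new use case consumes the shared registry instead of fresh glue code. All the names here (`crm.lookup_account`, `billing.fetch_invoices`, the registry itself) are hypothetical, not any real API:

```python
# Sketch: one shared capability registry instead of per-use-case glue code.
# Every name below is illustrative; real calls would go to real systems.
from typing import Callable

REGISTRY: dict[str, Callable[..., object]] = {}

def capability(name: str):
    """Register a function once as a named, discoverable capability."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@capability("crm.lookup_account")
def lookup_account(account_id: str) -> dict:
    return {"id": account_id, "tier": "enterprise"}   # stand-in for a real call

@capability("billing.fetch_invoices")
def fetch_invoices(account_id: str) -> list[dict]:
    return [{"account": account_id, "amount": 1200}]  # stand-in for a real call

# Every new use case is a composition over the same registry, not new wiring.
def run_use_case(steps: list[tuple[str, dict]]) -> list[object]:
    return [REGISTRY[name](**args) for name, args in steps]

churn_report = run_use_case([("crm.lookup_account", {"account_id": "a-17"}),
                             ("billing.fetch_invoices", {"account_id": "a-17"})])
```

The second use case, and the tenth, cost a list of steps rather than a new integration project. That is the friction argument in miniature.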
## The part most teams are missing
There’s significant momentum around MCP right now, and that’s a good thing. Standardization at this layer is long overdue. But most of the conversation is focused on tools:
- How agents call them
- How capabilities are exposed
- How workflows get orchestrated
That’s only one side of the equation. The harder problem, the one that actually determines whether any of this works, is data. Not just access to data, but access to the right data, with the right context, under the right governance. Most enterprises don’t have that in place today. No protocol fixes it by itself.
MCP without a governed data layer doesn’t produce intelligence. It produces a faster path to bad decisions.
Connect MCP to a fragmented data environment, and you don’t get intelligence. You get partial answers, inconsistent results, and a system that looks more capable than it actually is. That’s the real risk.
When the underlying data is incomplete or poorly governed, the system doesn’t fail loudly. It still produces answers. They just aren’t grounded in reality.
At a small scale, that’s manageable. At enterprise scale, it becomes dangerous.
Because now you have a system that appears confident, is easy to use, and is embedded in decision-making, but is operating on incomplete context. That’s worse than not deploying AI at all.
## This is really an architecture problem
Once you see it that way, MCP stops being a protocol discussion. It becomes an architecture discussion. The way I think about it, three layers need to work together:
| Interaction layer | Reasoning layer | Data plane |
| --- | --- | --- |
| MCP | Models & agents | Governed context |
| Allows standardized discovery & capability exposure | Allows inference, orchestration, and decision-making | Allows federated access, semantics, and trust |
MCP sits at the interaction layer. It defines how systems discover capabilities and communicate.
Models and agents sit at the reasoning layer. They decide what to do and how to do it.
But neither of those matters without the third piece: the data layer.
That’s where context comes from. That’s where meaning is resolved. That’s where trust is established. And it’s the part most teams underestimate. It’s also the hardest to get right.
You need consistent access across cloud, data lake, and on-prem systems. You need a way to resolve meaning, not just retrieve raw data. And you need governance that holds up under scale and works in real time.
Without that, MCP doesn’t simplify your architecture. It just moves the complexity somewhere else.
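A toy sketch of the three layers working together. The roles, tables, and policy below are invented; the point is structural: the interaction layer is just a call boundary, and it never returns data the governance check hasn’t cleared, so a confident answer is at least a *permitted* answer:

```python
# Sketch: interaction layer -> reasoning layer -> governed data plane.
# All roles, tables, and policies are toy values for illustration.

ROWS = [
    {"table": "sales",   "region": "EMEA", "revenue": 10},
    {"table": "payroll", "region": "EMEA", "salary": 9000},
]

POLICY = {"analyst": {"sales"}}  # role -> tables that role may read

def governed_fetch(role: str, table: str) -> list[dict]:
    """Data plane: federated access sits behind a policy check."""
    if table not in POLICY.get(role, set()):
        raise PermissionError(f"{role} may not read {table}")
    return [r for r in ROWS if r["table"] == table]

def agent_answer(role: str, objective: str) -> list[dict]:
    """Reasoning layer: decides which capability to use (trivially, here)."""
    table = "payroll" if "salary" in objective else "sales"
    return governed_fetch(role, table)   # interaction layer = the call boundary

print(agent_answer("analyst", "EMEA revenue"))   # permitted by policy
try:
    agent_answer("analyst", "average salary")    # blocked by the data plane
except PermissionError as e:
    print("refused:", e)
```

Note where the refusal happens: in the data plane, not in the agent. If governance lives anywhere above that layer, a clever prompt or a new agent routes around it.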
## What leaders should actually care about
This is where the fork in the road becomes real.
You can treat MCP as another developer tool, something to experiment with on the side. Or you can recognize it as a structural inflection point in how enterprise systems get built, and position your architecture accordingly.
Organizations that get this right move faster. The cost of building AI systems comes down. Experimentation becomes easier. And over time, the architecture gets simpler instead of more complex. They also keep flexibility. No hard dependency on a single model or vendor.
Organizations that don’t make that shift end up in a familiar place. Another layer. More moving parts. The same underlying constraints. We’ve seen this before. It’s the REST story. One cycle later.
## A practical starting point
The mistake is trying to turn MCP into a big initiative. That’s how you end up with a lot of effort and very little progress. A better place to start is with the friction you already have.
- Where is integration complexity slowing teams down?
- Where are the same connections being rebuilt over and over?
- Where does lack of context lead to results you don’t trust?
Introduce MCP where the value is obvious, the risk is manageable, and the data is at least somewhat reliable. Then build from what works.
The important part is this: don’t separate MCP from your data and governance strategy. If those aren’t part of the design from the beginning, you’ll end up retrofitting them later.
## The bottom line
HTTP made information accessible. REST made systems modular. MCP is beginning to make systems adaptive.
But the protocol itself isn’t what creates advantage. The advantage comes from how you build around it. The clarity of your architecture. The quality of the data you connect to it. And whether the system is reliable enough to trust at scale.
The companies that win with AI won’t be the ones with the best models. They’ll be the ones who figured out how to put those models to work.
## Starburst Galaxy MCP is now available in Claude Desktop
Starburst is embracing this shift to MCP, which is why we’ve partnered with Anthropic. You can now connect Claude Desktop directly to your Starburst Galaxy account and query your governed data lake in natural language, with no SQL required and no credentials stored locally.
### Setting up Starburst Galaxy + Claude
Authentication is handled via OAuth, access is scoped to what each user is already permitted to see, and every query is audited through Galaxy’s existing compliance pipeline.
It takes about five minutes to set up. If you’re already a Galaxy customer, you have everything you need.
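For orientation only, this is roughly what a Claude Desktop MCP entry looks like in its `claude_desktop_config.json`. The server name and URL below are placeholders, and remote OAuth-protected servers are often bridged via the community `mcp-remote` package rather than run locally; follow Starburst’s documentation for the actual connection steps:

```json
{
  "mcpServers": {
    "starburst-galaxy": {
      "command": "npx",
      "args": ["mcp-remote", "https://<your-galaxy-domain>/mcp"]
    }
  }
}
```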
Connect Claude Desktop to Galaxy →



