
The way we interact with data is changing from passive consumption of dashboards to a continuous conversation with intelligent agents. For this shift to work at scale, however, the data foundation must evolve too, providing more than raw rows and columns: it must supply AI models with governed, contextual data.
This is why the Starburst Model Context Protocol (MCP) Server is a critical component of our AI platform. It acts as the primary agent data gateway, connecting an agent's reasoning to the enterprise's context and data while meeting the performance demands of this new interaction model.
Starburst is uniquely suited to meet this need, and our platform is expanding to meet it.
Starburst announces immediate support for NVIDIA Vera
To this end, today we are announcing immediate support for NVIDIA Vera. In doing so, we become the first AI platform optimized for NVIDIA’s new inference compute platform, enabling customers to run real-time AI-based analytics on governed, federated data at unprecedented speed.
This pairing combines the best of both worlds: access, governance, and performance. It is part of a larger platform alignment between Starburst and NVIDIA, announced today during CEO Jensen Huang's keynote at NVIDIA's GTC AI conference.

Let’s look at how Starburst MCP Server leverages NVIDIA Vera in practice.
How Starburst MCP Server provides business context and structured access for agents
What is Starburst MCP Server? At its heart, it enables the transition from traditional BI technologies to AI-powered insights. This transition is a structural shift in how people interact with information: traditional BI relied on static dashboards and centralized warehouses, while agentic AI requires a continuous, conversational model.
The Starburst MCP Server acts as the secure interface that allows agents to skip the operational overhead of the BI layer. By supporting MCP, the server gives agents a standardized way to retrieve both business context and data more effectively.
This server enables capabilities that move beyond simple text-to-SQL.
- By exposing a standardized endpoint, the Starburst MCP Server enables external agents such as Claude, ChatGPT, and others to navigate catalogs and sample data products. This approach grounds the SQL generated within the shared business context and governed definitions needed to ensure accuracy.
- Lightweight Data Sampling Agents can inspect columns and sample rows to build the necessary context for more accurate reasoning.
- Our Smart Routing Endpoint for AI Workloads automatically selects the most efficient Galaxy execution path, ensuring agents receive answers at the speed required for real-time interaction.
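Under the hood, MCP clients talk to servers like this one using JSON-RPC 2.0; the protocol's `tools/call` method is how an agent invokes a server-side capability. The sketch below builds such a request in Python. The tool name `execute_sql` and its arguments are illustrative assumptions, not the documented tool names exposed by the Starburst MCP Server.

```python
import json

# Build an MCP "tools/call" request (JSON-RPC 2.0, per the MCP spec).
# The tool name "execute_sql" and its arguments are hypothetical;
# the actual tools exposed by the Starburst MCP Server may differ.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_sql",
        "arguments": {"query": "SELECT * FROM sales.orders LIMIT 5"},
    },
}
payload = json.dumps(request)
print(payload)
```

Because the interface is standardized, any MCP-compliant agent can issue requests of this shape without Starburst-specific client code.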
Turning context into knowledge
To be effective, an AI agent needs more than just a raw data feed. It needs the deep organizational context that has historically been trapped in siloed reports.
The Starburst AI platform solves this problem. It turns institutional knowledge into a shared, governed data context that AI models can ingest. By organizing data by business domain and leveraging data products, with embedded ownership and policies, we ensure that AI operates on curated assets rather than raw records.
The Starburst MCP Server is the gateway that makes this layer natively consumable by any external agent ecosystem.
How Starburst MCP Server connects with NVIDIA Vera
While the MCP Server provides the standardized software interface for agents, NVIDIA Vera provides the high-velocity inference engine that powers them. NVIDIA Vera is purpose-built for the inference era, combining optimized CPUs with licensed Groq Language Processing Unit (LPU) technology for deterministic, energy-efficient processing.
By optimizing MCP Server for this new architecture, we ensure the transition from traditional BI to AI-powered Business Intelligence is supported by both enterprise-grade governance and unprecedented compute efficiency.
What are the benefits of Starburst MCP Server and NVIDIA Vera?
This approach removes the performance constraints that typically limit AI adoption at scale, allowing the following benefits.
Federated access at inference speed
Vera enables Starburst to query federated data directly, wherever it lives across lakes, warehouses, and operational systems. This eliminates the ETL bottleneck and the need to centralize data before it can power RAG pipelines or agentic workflows.
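A federated query of this kind might look like the following Trino-style SQL sketch, where the catalog, schema, and table names are hypothetical: one catalog could point at a data lake and another at an operational database, joined in a single statement without moving data first.

```sql
-- Illustrative federated join; catalog and table names are hypothetical.
-- "hive" might point at a data lake, "postgres" at an operational store.
SELECT c.customer_name, SUM(o.total_amount) AS lifetime_value
FROM hive.sales.orders AS o
JOIN postgres.crm.customers AS c
  ON o.customer_id = c.customer_id
GROUP BY c.customer_name
ORDER BY lifetime_value DESC
LIMIT 10;
```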
Deterministic throughput for reasoning
Vera’s architecture provides predictable performance for mixed workloads. This ensures that concurrent analytical queries and AI inference tasks maintain consistent throughput, even when operating across complex and multi-source datasets.
Governance as a foundation for scaling AI
In the agentic era, governance cannot be an afterthought. The Starburst MCP Server ensures that every AI agent interacts with the platform as a governed role.
Every query executed through the gateway inherits the same fine-grained access controls and auditability that protect your human users. This allows data engineers and platform owners to safely expand AI access across the organization without widening the risk profile.
By moving governance from the dashboard to the foundation, we enable a model in which access can grow alongside the system’s intelligence.
Testing Starburst MCP Server and NVIDIA Vera in practice
The utility of the Starburst MCP Server lies in its ability to turn a complex, distributed data estate into a single, stateless HTTP endpoint for programmatic access. The integration with NVIDIA Vera extends this value further.
Let’s look at how this works in practice.
How do users make use of Starburst MCP Server?
For platform owners, enabling this gateway is a straightforward configuration within the Starburst coordinator. Once starburst.mcp.enabled=true is set, the platform exposes a standardized interface that any MCP-compliant agent can immediately utilize for discovery and analysis.
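A minimal coordinator configuration sketch, using the two property names mentioned in this post (the result-size value shown is illustrative, and any other properties your deployment needs are omitted):

```
# Enable the MCP gateway on the coordinator
starburst.mcp.enabled=true
# Cap the data returned to agents (value is illustrative)
mcp.query.max-result-size=10MB
```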
From a product perspective, we have designed the MCP Server to ensure that agentic access is both high-performance and strictly read-only.
In production, this includes the following technical guardrails.
Authenticated programmatic access
The server supports the same enterprise-grade authentication methods as the rest of the platform, including OAuth 2.0 and OpenID Connect.
By exposing an RFC 8414-compliant metadata endpoint, we enable dynamic client registration, allowing agents to automatically discover the required authorization scopes and security protocols.
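RFC 8414 defines the `/.well-known/oauth-authorization-server` metadata document that a client fetches to discover these details. The sketch below shows an example of such a document and how an agent would read it; the endpoint URLs are placeholders, not real Starburst defaults.

```python
import json

# Example RFC 8414 authorization-server metadata, as an agent would
# retrieve it from /.well-known/oauth-authorization-server.
# The URLs below are placeholders, not real Starburst defaults.
metadata_doc = """
{
  "issuer": "https://starburst.example.com",
  "authorization_endpoint": "https://starburst.example.com/oauth2/authorize",
  "token_endpoint": "https://starburst.example.com/oauth2/token",
  "scopes_supported": ["openid", "profile"],
  "response_types_supported": ["code"]
}
"""
metadata = json.loads(metadata_doc)
# An MCP client uses these fields to drive its OAuth 2.0 flow
# without any hard-coded, server-specific configuration.
print(metadata["token_endpoint"])
```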
Synchronous structured results
Agents receive query results in a structured JSON format that includes both the raw data and the associated column metadata.
This synchronous execution model ensures the agent waits for the full context before proceeding with its reasoning trace, reducing the risk of hallucinations caused by partial data.
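To illustrate why the column metadata matters, here is a sketch of what consuming such a result might look like. The field names (`columns`, `data`) are assumptions for illustration, not the documented Starburst MCP response schema.

```python
import json

# Illustrative shape of a structured query result: raw rows plus
# column metadata. The field names are assumptions, not the
# documented Starburst MCP response schema.
result_doc = """
{
  "columns": [
    {"name": "region", "type": "varchar"},
    {"name": "revenue", "type": "double"}
  ],
  "data": [["EMEA", 1250000.0], ["AMER", 1975000.0]]
}
"""
result = json.loads(result_doc)
# Pair each row value with its column name so the agent can reason
# over labeled fields rather than positional values.
names = [c["name"] for c in result["columns"]]
rows = [dict(zip(names, row)) for row in result["data"]]
print(rows)
```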
Architectural safety rails
To prevent agents from overwhelming system resources or inadvertently modifying the data lake, the MCP server natively rejects non-read-only operations such as DELETE, TRUNCATE, or DROP. Furthermore, administrators can configure mcp.query.max-result-size to cap the data returned to the agent, ensuring that the context remains high-signal and low-noise.
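The read-only guardrail described above can be pictured as a policy check on the statement's leading keyword. The sketch below is only an illustration of that policy, not the server's actual enforcement logic, which is internal.

```python
# Sketch of a read-only guardrail: reject statements whose leading
# keyword mutates data. This illustrates the policy described above,
# not the Starburst MCP Server's internal implementation.
WRITE_KEYWORDS = {"DELETE", "TRUNCATE", "DROP", "INSERT", "UPDATE", "ALTER"}

def is_read_only(sql: str) -> bool:
    stripped = sql.strip()
    if not stripped:
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word not in WRITE_KEYWORDS

print(is_read_only("SELECT * FROM sales.orders"))  # True
print(is_read_only("DROP TABLE sales.orders"))     # False
```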
By providing these native configuration properties, we allow teams to move beyond experimental low-context AI setups. Instead, they can deploy a monitored, governed, and highly tuned gateway that treats the AI agent as a sophisticated yet controlled consumer of enterprise intelligence.
Starburst powers the foundations of enterprise AI
The shift toward AI-driven decision-making is fundamentally about how we interact with information. It requires an architecture that is conversational, contextual, and highly performant.
With the Starburst MCP Server and NVIDIA Vera, we are building that foundation. In doing so, we are moving beyond the constraints of static reporting and providing the roadmap to translate AI ambition into production results.



