I’ve never seen technology move as fast as agentic AI. The term “unprecedented” gets thrown around a lot in tech, but this truly is unprecedented. Every couple of weeks, fundamental shifts force teams to rethink their approach, leaving core questions around architectural design, authorization, and security best practices in flux. It’s making it nearly impossible to build on solid ground. It’s as if we’re all just laying the track in front of a moving train.

Organizations are well aware of the risks in deploying AI: Box recently found that 74% of enterprises surveyed identified privacy and security as their primary concerns. At the same time, fewer than a quarter of respondents said they had well-established governance frameworks in place. And even organizations with strong governance will struggle to keep up with the changes in agentic AI.

In this article, we’ll explore the challenges organizations face as they balance the need to innovate and experiment with the security risks inherent to AI. Then we’ll discuss how partnering with a vendor that focuses exclusively on agentic AI governance can help you move quickly while keeping your enterprise—and your sensitive information—safe.

A Moving Target

Take the Model Context Protocol (MCP) as an example of how rapid change in agentic AI poses risks to enterprises. Open-sourced by Anthropic last November, MCP has quickly become the standard for how AI agents interact with external systems. But in the eight months since its release, there have already been several major changes to the specification.

It’s truly a moving target for enterprises developing AI infrastructure and agentic apps. Earlier this year, the SSE (server-sent events) transport mechanism was deprecated in the MCP spec in favor of Streamable HTTP. JSON-RPC batching, which was only added to the spec in March, was removed from the most recent version in June. And elicitation, a standard way for MCP servers to request additional information from users through the client, is a brand-new addition.

For context: transport mechanisms like Streamable HTTP and SSE handle the underlying mechanics of how messages are sent to and received from MCP servers; JSON-RPC batching allowed multiple requests to be bundled together for efficiency; and elicitation enables more dynamic, conversational interactions between agents and users. These aren’t minor tweaks—they represent architectural changes that affect how developers build and deploy agentic systems.
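
To make the batching change concrete, here’s a rough sketch (in TypeScript) of the JSON-RPC messages involved. The method names follow MCP’s conventions, but treat the exact payloads as illustrative rather than excerpts from the spec.

```typescript
// Illustrative JSON-RPC 2.0 messages of the kind MCP clients and servers exchange.
// Payload details are a sketch; consult the current spec for exact shapes.

// A single request, sent today over the Streamable HTTP transport.
const singleRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "search_issues", arguments: { query: "open bugs" } },
};

// JSON-RPC batching bundled several requests into one array. It was added to
// the spec in March and removed again in June, so code written against the
// batched form now has to fall back to sending requests individually.
const batchedRequests = [
  { jsonrpc: "2.0", id: 2, method: "tools/list" },
  { jsonrpc: "2.0", id: 3, method: "resources/list" },
];

console.log(JSON.stringify([singleRequest, ...batchedRequests], null, 2));
```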

The latest spec did include a guide for security best practices, as well as new requirements for MCP clients to implement resource indicators (a way for a client to declare exactly which server an access token is intended for, so tokens can’t be misused against other services). These security improvements are ultimately positive moves forward, but they underscore just how much things are still evolving.
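
To illustrate what that resource-indicator requirement means in practice, here’s a minimal sketch of an OAuth token request that includes one. The endpoint, client ID, and server URL are placeholders; the point is the resource parameter, which ties the issued token to a single MCP server.

```typescript
// Minimal sketch: an OAuth 2.0 token request carrying an RFC 8707 resource
// indicator. All URLs and identifiers below are placeholders.
const authorizationCode = process.env.AUTH_CODE ?? ""; // from the earlier auth step

const tokenResponse = await fetch("https://auth.example.com/oauth/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "authorization_code",
    code: authorizationCode,
    client_id: "example-mcp-client",
    // The resource indicator: this token is only meant for this MCP server.
    resource: "https://mcp.example.com",
  }),
});
console.log(tokenResponse.status);
```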

Meanwhile, other protocols are emerging to compete with MCP, including Google’s Agent2Agent (A2A) Protocol and IBM’s Agent Communication Protocol. These alternatives aren’t in the spotlight today, but they may well play a larger role down the road. Organizations that standardize on one approach may soon find the landscape shifting beneath their feet.

Even Trusted Vendors Expose You to Agentic Risk

What makes this situation particularly challenging is that organizations face risk from unexpected corners. Familiar, trusted vendors are increasingly weaving agentic AI into their products. At first glance, this feels like a safe path to take on your agentic AI journey—surely these established companies have the resources and expertise to implement these technologies safely, right?

But these companies face the same struggles every organization faces in trying to understand and keep up with changes to agentic AI. And in the rush to incorporate agentic capabilities, your vendors may be exposing you to more risk than you (or they) know.

Take Docker’s recent release of the Docker MCP Catalog and Toolkit. This new feature lets you easily run more than a hundred different MCP servers. But here’s the problem: many of these MCP servers are open-source projects built by independent developers, not by Docker or by the platforms they connect to. With a couple of mouse clicks, you can spin up an MCP server and potentially grant access or pass data to an unvetted destination. Docker may be a well-established, deeply trusted vendor, but even they can’t guarantee the security of every MCP server in their catalog.

Even first-party implementations can pose risks. Asana announced the launch of their official remote MCP server earlier this year. Just over a month later, Asana disclosed that a bug in their MCP server could have allowed users to access data from other organizations’ projects. For over a month, customer data could have been exposed to unauthorized users—not because of a sophisticated attack, but because of a fundamental implementation flaw in a feature that anyone could easily have mistaken for enterprise-ready.

Inconsistencies in how vendors provide MCP servers present additional challenges. For example, companies like GitHub and Sentry offer both hosted MCP servers (remote, over HTTP) and official open source versions that you can host yourself (local, over stdio). Others, like Asana and Atlassian, only provide hosted (remote) MCP servers. Meanwhile, Salesforce only provides open source versions that support local transport, useful for local integrations and command-line tools. This fragmentation can force you into building and maintaining a complex patchwork of disparate, hard-to-secure integrations.
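
To see why that patchwork is painful, here’s a sketch of what wiring a client to both flavors can look like. The configuration shape below mirrors the pattern many MCP clients use, but the exact keys vary by client, and the package names and URLs here are placeholders rather than any vendor’s actual values.

```typescript
// Illustrative only: one locally hosted (stdio) server and one vendor-hosted
// (remote HTTP) server, configured side by side. Names and URLs are placeholders.
const mcpServers = {
  // Local flavor: the client launches the open source server as a subprocess
  // and talks to it over stdio.
  issueTrackerLocal: {
    command: "npx",
    args: ["-y", "@example/issue-tracker-mcp"], // hypothetical package
    env: { ISSUE_TRACKER_TOKEN: process.env.ISSUE_TRACKER_TOKEN ?? "" },
  },
  // Hosted flavor: the client connects to the vendor's remote endpoint over
  // HTTP and authenticates with a bearer token.
  projectToolRemote: {
    url: "https://mcp.project-tool.example.com/",
    headers: { Authorization: `Bearer ${process.env.PROJECT_TOOL_TOKEN ?? ""}` },
  },
};
console.log(Object.keys(mcpServers));
```

Each entry comes with its own credentials, transport quirks, and update cadence; multiply that by every tool your teams use and the maintenance burden adds up quickly.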

The takeaway isn’t that these vendors are incompetent or malicious. It’s that even the largest and most trusted companies can’t keep up with the pace of change well enough to protect your enterprise from the risks.

Must Enterprises Choose Between Risk and Stagnation?

For organizations that want to reap the benefits of agentic AI and stay competitive, it seems like there are only two bad choices:

❌ Option 1: Risky Without Safeguards

Allowing employees to use MCP and agents freely may seem like the fastest path forward—but the risks are real, and they can carry serious financial, legal, and reputational consequences. For example, in May, agentic AI vendor Serviceaide disclosed that a data breach affected more than 483,000 patients of a New York hospital system.

❌ Option 2: Slow and Overwhelming

Painstakingly building your own security framework can feel like the responsible choice—but it’s slow, complex, and resource-intensive. After the Asana incident, many thought leaders offered recommendations on how to safely integrate agentic AI into enterprise environments. Their advice included implementing strict access controls, conducting regular security audits, and establishing comprehensive monitoring systems.

This is sound advice, but the work required can easily overwhelm even the best-equipped IT teams. You’re not just implementing security once—you’re committing to constantly monitoring and adapting to changes in MCP specifications, new security vulnerabilities, and emerging protocols.
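
As a rough illustration of what “strict access controls” and “comprehensive monitoring” translate to, here’s a minimal sketch of the per-request gate a homegrown framework would need. The types and policy shape are hypothetical; a production system would also need token handling, rate limits, data-loss controls, and much more.

```typescript
// Minimal sketch of a do-it-yourself authorization gate for agent traffic.
// Types and policy structure are illustrative, not a production design.
interface AgentRequest {
  agentId: string;               // which agent is calling
  server: string;                // which MCP server it targets
  tool: string;                  // which tool it wants to invoke
  args: Record<string, unknown>; // the arguments it is passing
}

interface Policy {
  allowedTools: Map<string, Set<string>>; // server -> tools this agent may use
}

function authorize(req: AgentRequest, policy: Policy): boolean {
  const allowed = policy.allowedTools.get(req.server)?.has(req.tool) ?? false;
  // Every decision is logged, so the audits and monitoring the advice calls
  // for have something to inspect.
  console.log(
    `${new Date().toISOString()} agent=${req.agentId} ` +
      `${req.server}:${req.tool} -> ${allowed ? "ALLOW" : "DENY"}`
  );
  return allowed;
}

// Example: this agent may search issues, and nothing else.
const policy: Policy = {
  allowedTools: new Map([["issue-tracker", new Set(["search_issues"])]]),
};
authorize(
  { agentId: "support-bot", server: "issue-tracker", tool: "delete_project", args: {} },
  policy
); // -> DENY
```

Writing this once is easy. Keeping it correct as specs, servers, and agents change every few weeks is the part that overwhelms teams.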

✅ Option 3: Scalable, Secure, and Centrally Governed

There’s actually a third option: entrust agentic governance to a vendor that is fully dedicated to enabling enterprises to deploy agents safely—accelerating secure adoption of agentic AI across the enterprise.

At Barndoor, agentic AI governance is our sole focus and core competency. We monitor every change to MCP specifications, evaluate emerging protocols, and continuously adapt our security controls so you don’t have to.

Our platform acts as a centralized governance layer that manages granular access, inspecting and authorizing every AI agent request before it reaches your systems. Whether you’re using first-party MCP servers from vendors like Atlassian or self-hosting open source servers, all traffic flows through Barndoor’s governance layer first.

As specifications change, we handle the complexity of adapting to new security requirements and authentication protocols. When new vulnerabilities are discovered, we deploy protections across our control plane, without requiring you to patch individual integrations.

The pace of change in agentic AI isn’t going to slow down. If anything, it’s accelerating as more organizations adopt these technologies and more vendors bring new and exciting (but experimental) capabilities to market.

The question isn’t whether you’ll adopt agentic AI—it’s whether you’ll do it safely. We’re here to make sure you can. Join our waitlist to learn how Barndoor can help your organization adopt agents safely.