The Australian Signals Directorate, CISA, NSA, the Canadian Centre for Cyber Security, NCSC-NZ, and NCSC-UK recently released joint guidance on securing agentic AI systems, a sign that autonomous AI agents are now a national security priority and that the controls most organizations have in place today are not enough. Barndoor was built because we saw this gap coming. The guidance outlines the very problems Barndoor solves: over-privileged agents, identity gaps, MCP sprawl, weak human oversight, and unclear decision chains across agents.
First, let’s explore why the new guidance matters. The agencies treat agentic AI as systems with autonomous decision-making, tool use, and memory, and they call out the shift this creates: agents can take the wrong actions, outside human intent.
The authors state that organizations should “never grant [agentic AI] broad or unrestricted access, especially to sensitive data or critical systems,” and should “only use agentic AI for low-risk and non-sensitive tasks” until controls mature. The gap between what enterprises are deploying and what they can actually secure is wide. Barndoor research backs this up: roughly half of knowledge workers are already giving unsanctioned AI tools access to work systems.
What this means for enterprises
The guidance organizes risk into four categories: privilege, behavioral, structural, and accountability. Each category maps to what Barndoor hears in customer conversations and deployments.
Human identity isn’t enough
Most agents today inherit broad, human-level permissions carried over from IdPs. If a single tool, MCP server, or upstream agent is compromised, it inherits everything. For enterprises, this means least privilege has to be enforced per request, per agent, per action, with context that goes beyond traditional identity management.
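To make that concrete, here is a minimal sketch of what per-request least privilege can look like. Everything in it (the AgentRequest type, the authorize function, the policy table) is a hypothetical illustration, not Barndoor’s actual API:

```python
from dataclasses import dataclass

# Illustrative sketch only: these names are hypothetical, not Barndoor's API.

@dataclass(frozen=True)
class AgentRequest:
    agent_id: str      # which agent is acting
    action: str        # e.g. "crm.contacts.read"
    resource: str      # the specific record or system being touched
    on_behalf_of: str  # the human whose task triggered this request

# Instead of one broad token per agent, each (agent, action) pair is
# checked against an explicit allowlist at request time.
POLICY: dict[str, set[str]] = {
    "support-bot": {"crm.contacts.read", "tickets.update"},
    "finance-bot": {"invoices.read"},
}

def authorize(req: AgentRequest) -> bool:
    """Deny by default; grant only what this agent needs for this action."""
    return req.action in POLICY.get(req.agent_id, set())

req = AgentRequest("support-bot", "invoices.read", "inv-1042", "alice")
assert not authorize(req)  # inherited human-level access would have allowed this
```

The point of the sketch: access is decided per agent and per action at request time, so a compromised agent or tool only gets what that narrow scope allows.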
Behavioral controls are needed for agents
Agents behave in ways their designers did not anticipate. They sometimes deceive evaluations. They chain tools together in sequences no human reviewed. For an enterprise, that means traditional quality assurance and playbooks, both of which assume deterministic software, are insufficient. You need runtime authorization and behavioral controls.
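One way to picture a runtime behavioral control: instead of trusting the agent’s plan, an enforcement layer watches the sequence of tool calls and blocks patterns no human approved. A minimal hypothetical sketch, not Barndoor’s implementation:

```python
# Hypothetical sketch of a runtime behavioral guard: it watches the
# sequence of tool calls an agent makes and blocks patterns no human
# approved, e.g. reading sensitive data and then calling an external tool.

FORBIDDEN_SEQUENCES = [
    ("hr.records.read", "email.send_external"),
    ("source_code.read", "web.post"),
]

class BehaviorGuard:
    def __init__(self) -> None:
        self.history: list[str] = []

    def check(self, tool_call: str) -> bool:
        """Return False if this call would complete a forbidden sequence."""
        for earlier, later in FORBIDDEN_SEQUENCES:
            if later == tool_call and earlier in self.history:
                return False
        self.history.append(tool_call)
        return True

guard = BehaviorGuard()
assert guard.check("hr.records.read")          # allowed on its own
assert not guard.check("email.send_external")  # blocked: exfiltration pattern
```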
Accountability is an audit issue
According to the authors, “agentic system architecture can obscure what caused a particular action, making accountability hard to trace.” When an agentic workflow goes wrong, who is responsible? Whose logs do you read, and in what order? Today, you often cannot tell. For enterprises in highly regulated industries such as financial services, healthcare, and defense, these are critical audit and compliance gaps.
How Barndoor solves for these risks
Barndoor is the first control plane built to govern agentic AI access, policy, and visibility across the enterprise. Rather than retrofitting human-era IAM tools onto autonomous agents, we sit between every AI agent and every system or MCP-connected tool it tries to touch, inspecting and authorizing every request before it reaches your data or alters your systems.
Context-aware access, not broad human-level permissions. Barndoor controls what each agent can see and do based on the specific context of every request. This is the operational answer to the guidance’s repeated calls for least privilege: agents get precise access rather than broad, inherited permissions that turn a single compromise into a catastrophe.
A policy enforcement layer for every agent action. Every AI agent request, across SaaS apps, internal systems, and MCP-connected tools, is inspected and authorized by Barndoor before it executes. That is the “secure by default” stance the guidance calls for, applied at runtime.
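Architecturally, this is an inline gateway: the agent never talks to the target system directly, and every call passes through a decision point first. A minimal sketch under assumed names (decide, gateway, the context fields), not Barndoor’s actual API:

```python
# Minimal sketch of an inline enforcement gateway. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    allow: bool
    rule_id: str  # which policy rule produced this decision (for audit)

def decide(agent_id: str, action: str, context: dict) -> Decision:
    # Context-aware check: the same action can be allowed or denied
    # depending on request context, e.g. the sensitivity of the data.
    if context.get("data_classification") == "restricted":
        return Decision(allow=False, rule_id="deny-restricted-data")
    return Decision(allow=True, rule_id="default-allow-low-risk")

def gateway(agent_id: str, action: str, context: dict, forward) -> str:
    decision = decide(agent_id, action, context)
    if not decision.allow:
        return f"DENIED by {decision.rule_id}"
    return forward(action)  # only now does the call reach the real system

result = gateway("support-bot", "crm.contacts.read",
                 {"data_classification": "restricted"},
                 forward=lambda a: f"executed {a}")
print(result)  # DENIED by deny-restricted-data
```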
An access control center built for multi-agent environments. IT and security admins manage all policies in one place: group multiple AI agents under a shared policy, test policies, and trace every allow or deny decision back to the specific rule that triggered it. Policies move through a clear lifecycle (draft, active, inactive, archived) so changes are reviewable. This directly addresses the guidance’s emphasis on governance, declarative policy, and accountability for autonomous systems.
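The lifecycle itself is simple to reason about once only legal transitions are permitted. A hypothetical sketch of the four states named above (the transition map is our assumption, not Barndoor’s documented behavior):

```python
# Sketch of the policy lifecycle described above, allowing only legal
# transitions so every state change is deliberate and reviewable.

LIFECYCLE = {
    "draft":    {"active", "archived"},
    "active":   {"inactive"},
    "inactive": {"active", "archived"},
    "archived": set(),  # terminal: archived policies are never re-enforced
}

def transition(current: str, target: str) -> str:
    if target not in LIFECYCLE[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "draft"
state = transition(state, "active")    # policy goes live
state = transition(state, "inactive")  # rolled back without deletion
```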
Native MCP governance. Barndoor extends the same access controls, role-based policies, and real-time enforcement to the MCP-connected tools enterprises use every day. The guidance highlights risks in third-party tools, such as tool squatting, malicious tool descriptions, and dynamic package loading; risks like these can be minimized by a solution like Barndoor that enforces enterprise policy on every call.
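These MCP risks share a shape: the tool an agent sees may not be the tool that was approved. One common mitigation pattern, sketched hypothetically here rather than taken from Barndoor’s product, is to pin a hash of each approved tool description and deny anything that drifts:

```python
import hashlib

# Hypothetical sketch: pin a hash of each approved MCP tool description so
# a server cannot silently swap in a malicious variant (squatting, rewritten
# tool descriptions). Names and flow are illustrative assumptions.

def fingerprint(description: str) -> str:
    return hashlib.sha256(description.encode("utf-8")).hexdigest()

APPROVED_DESCRIPTION = "Search support tickets by keyword. Read-only."
APPROVED_TOOLS = {"search_tickets": fingerprint(APPROVED_DESCRIPTION)}

def verify_tool(name: str, description: str) -> bool:
    """Deny unknown tools and any tool whose description has drifted."""
    expected = APPROVED_TOOLS.get(name)
    return expected is not None and fingerprint(description) == expected

assert verify_tool("search_tickets", APPROVED_DESCRIPTION)
assert not verify_tool("search_tickets", APPROVED_DESCRIPTION + " Also email results.")
assert not verify_tool("delete_tickets", "anything")  # never approved
```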
Audit trails and real-time visibility. Barndoor tracks every AI action, policy applied, and outcome produced across your systems. Audit trails give security teams a single record of what each agent did, on whose behalf, against which system, and with which decision. Real-time alerts surface anomalies as they happen. This covers the operational backbone the guidance calls out under monitoring, auditing, and accountability.
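The record the guidance asks for is essentially one structured event per agent action. A hypothetical sketch of what a single entry might capture, with field names that mirror the questions above (who, on whose behalf, which system, which decision), not Barndoor’s actual schema:

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of a per-action audit record. Field names are
# assumptions chosen to answer: what did each agent do, on whose behalf,
# against which system, and with which decision?

def audit_record(agent_id: str, on_behalf_of: str, system: str,
                 action: str, decision: str, rule_id: str) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "on_behalf_of": on_behalf_of,
        "system": system,
        "action": action,
        "decision": decision,   # "allow" or "deny"
        "rule_id": rule_id,     # traces the decision back to a specific rule
    })

print(audit_record("support-bot", "alice", "crm",
                   "contacts.read", "allow", "support-read-only"))
```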
Humans retain control. Centralized policy means security teams can expand agent autonomy progressively, roll it back when behavior degrades, and adapt rules as new threats emerge, without rewriting agent code or rebuilding integrations. This is the “graduated autonomy to incrementally increase agent independence whilst maintaining human oversight and understanding” the guidance recommends.
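Graduated autonomy can be modeled as a per-agent dial that policy, not agent code, controls. A hypothetical sketch, with risk tiers and autonomy levels that are our own assumptions:

```python
# Hypothetical sketch of graduated autonomy: a per-agent level, set by
# policy rather than agent code, decides which actions run autonomously,
# which need human approval, and which are denied outright.

AUTONOMY_LEVELS = {
    "support-bot": 2,  # raised gradually as the agent proves trustworthy
}

ACTION_RISK = {
    "tickets.read": 1,
    "tickets.update": 2,
    "refunds.issue": 3,
}

def route(agent_id: str, action: str) -> str:
    level = AUTONOMY_LEVELS.get(agent_id, 0)  # unknown agents get no autonomy
    risk = ACTION_RISK.get(action, 99)        # unknown actions are max risk
    if risk <= level:
        return "execute"                      # within granted autonomy
    if risk == level + 1:
        return "require_human_approval"       # one step above: human in loop
    return "deny"

assert route("support-bot", "tickets.update") == "execute"
assert route("support-bot", "refunds.issue") == "require_human_approval"
assert route("unknown-bot", "tickets.read") == "require_human_approval"
```

Raising an agent’s level widens what runs without review; lowering it rolls autonomy back instantly, with no change to the agent itself.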
The bottom line
The guidance is a public acknowledgment that enterprises adopting agentic AI without access, policy, and visibility infrastructure are accepting a level of risk that national security agencies consider unacceptable for their own use.
Enterprises do not have to choose between agentic AI’s productivity gains and the security posture this guidance demands. They can have both, but only with a control plane designed for autonomous, MCP-connected, identity-bearing agents.
That is the gap Barndoor closes. If your organization is deploying AI agents, or planning to, we would welcome a conversation about how to align your AI security plans with this guidance.
Read the full report: Careful adoption of agentic AI services