Your Security Tools Were Built for People. Agents Are Not People.
Cisco spent the last 18 months acquiring its way into AI security. The April 2026 analyst briefing, led by Akshay Bhargava, VP of Product Management for AI Software and Platform, and Lars Urbaniak, Director of Product Management for Duo Security, was the first time the full picture came together coherently. Cisco is now presenting a credible end-to-end story for a question most enterprises have not yet considered: What happens when the entity accessing your systems is not a person?
The Agent Threat
Agents are non-deterministic. A traditional application does the same thing every time you run it. An agent does not. That variability is what makes agents useful. It is also what makes them hard to govern.
Agents also do not stay contained. A developer builds one to handle a task and connects it to email, a file system, a database, and a few external services. The agent gets the job done. Nobody revisits its access. Six months later it is still running, still connected, and the security team has no record of it.
This is not a hypothetical scenario. A large multinational semiconductor manufacturer found four unknown agents running in Amazon Web Services and 33 AI models it had not logged, all surfaced when it first turned on Cisco AI Defense. The company did not know it had agents running in its own cloud environment. This situation is becoming commonplace as organizations accelerate the deployment of agents.
Cisco’s State of AI Security 2026 report states that 85% of enterprises are experimenting with agents, but only around 5% have moved them into production. The constraint is around governance, not technology. Security teams will not approve what they cannot control.
Predeployment Testing Is Not Sufficient Protection
Cisco’s research on eight widely used open-weight AI models found that single-session attacks were blocked reasonably well but multi-session attacks were not. When adversaries probed the same guardrail from different angles across a longer conversation, they broke through more than 90% of the time. Agents operate across long sessions by design, which means multiple sessions are the relevant attack pattern.
What Cisco Shipped
Several products announced at RSA 2026 were covered in detail at the briefing.
DefenseClaw is a secure framework for OpenClaw deployments, sitting on top of NVIDIA OpenShell and compatible with any cloud environment. It reached nearly 500 GitHub stars in its first three weeks. Cisco is using open-source traction as a developer acquisition strategy, and the early signal is credible.
AI Defense Explorer Edition is a self-serve, free-tier product built on AI Defense’s multi-agent red-teaming engine. It targets AI engineers, developers, and application security teams who need to red-team agents without going through an enterprise procurement cycle. The path from self-serve to enterprise is intentional: Cisco is using Explorer Edition to seed pipeline.
The LLM Security Leaderboard is a publicly available dashboard that ranks models by safety rather than capability. The question it answers is not which model is fastest, but which one is safe enough to deploy. It is built on Cisco’s own security taxonomy and cross-referenced against the Open Web Application Security Project, MITRE, and NIST frameworks.
The open-source toolkit now includes a skills scanner, an agent-to-agent protocol scanner, an MCP scanner, and an AI bill of materials tool for inventorying framework components. Algorithmic red-teaming runs extended attack sequences against agents before and after deployment, continuously and across multiple languages, simulating a persistent adversary rather than a one-time probe.
Zero Trust for Agents Is the Missing Governance Layer
Cisco’s Agentic IAM product, built on Cisco’s Zero Trust Access Platform, applies the zero-trust model to AI agents. The framing maps directly to what CIOs already understand: Know every agent, authorize every action, and adapt to risk in real time, enforced consistently at the access boundary. Every agent gets registered, assigned a human owner, and scoped to a specific task. Access is granted just in time and revoked when the task ends. Agents do not retire or resign. Without active lifecycle management, they accumulate. The model is correct. Getting organizations to execute lifecycle management at scale is the harder problem.
Our Take
The multi-session breach rate is over 90%. That number is immediately actionable and specific enough to tell you where the problem lives. Persistent adversaries do not give up after one blocked attempt. They come back across sessions. A guardrail that holds for a single exchange is not the same thing as a guardrail that holds across ten. Predeployment review is where a governance program starts, not where it finishes.
The customer evidence Cisco cites is harder to dismiss than a pilot reference: A semiconductor manufacturer running agents in its own cloud it could not account for. A healthcare company whose governance review could not keep pace with production demand, with Cisco’s automated testing set up and running in under 24 hours. These are operational constraints, not proofs of concept.
Cisco now covers the full agent security lifecycle: Find the agents, test them before deployment, watch them at runtime, control their identity and access, and give developers tools to build securely from the start. Whether that holds up against Palo Alto Networks, CrowdStrike, and the point-solution vendors is worth testing directly. Ask each one where their coverage stops.
Start with an inventory of agents. You cannot secure agents you have not found. Once you know what is running, decide how far to go. A full multi-layer deployment is the right answer eventually. For most security teams right now, locking down the highest-risk connections first is the more realistic starting point. Either is defensible. Doing nothing is not.