As organizations move beyond simple chatbots and deploy autonomous AI agents, the security ground is shifting fast. The defenses many teams built around prompt injection and jailbreaks are no longer enough. The real risk now sits outside the model, in the agent's ability to act.
When an AI starts browsing the web, running code, accessing internal systems, and coordinating with other agents, the model stops being the primary concern. Risk accumulates in the handoffs: tools, permissions, data sources, and decisions interacting at machine speed.
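To make that handoff concrete, here is a minimal sketch of a deny-by-default permission gate sitting between an agent and its tools. Every name in it (ToolRequest, POLICY, dispatch, the agent and tool identifiers) is hypothetical, chosen for illustration rather than drawn from any particular framework.

```python
from dataclasses import dataclass

@dataclass
class ToolRequest:
    agent_id: str
    tool: str       # e.g. "web_browse", "run_code", "query_crm"
    argument: str

# Allow-list policy: which tools each agent may invoke.
POLICY = {
    "support-agent": {"web_browse", "query_crm"},
    "analyst-agent": {"run_code"},
}

def dispatch(req: ToolRequest) -> str:
    """Check the request against policy before any tool runs."""
    allowed = POLICY.get(req.agent_id, set())
    if req.tool not in allowed:
        # Deny by default: the handoff, not the model, is the control point.
        return f"DENIED: {req.agent_id} may not call {req.tool}"
    return f"EXECUTING {req.tool}({req.argument!r}) for {req.agent_id}"

if __name__ == "__main__":
    print(dispatch(ToolRequest("support-agent", "run_code", "rm -rf /")))
    print(dispatch(ToolRequest("support-agent", "query_crm", "account 42")))
```

The point of the sketch is architectural: even if the model is manipulated into requesting a dangerous tool call, a policy check at the handoff can refuse it.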
In this deep-dive session, Rahul Parwani, Head of Product for AI Security, explains why traditional LLM red teaming breaks down in an agent-driven world. The discussion moves past input-output testing and focuses on the systemic weaknesses that emerge when AI systems can act on their own.
What you’ll learn
- Why attackers now target an AI system's ability to act, not just its outputs
- How security must expand beyond the model to the full system around it as autonomy grows
- Where innovation meets exposure, and how to secure AI in real-world environments