Webinar
Thu, Feb 12, 5:00 PM - 5:30 PM (UTC)

AI Security: From a Threat Researcher’s Perspective

About this event

AI security has become a catchall term - models, prompts, data, infrastructure, agents - but the real risk driving every AI incident comes down to one thing: access.

Today’s AI agents don’t just generate text. They run workflows, execute commands, make API calls, and operate inside your environment using real credentials. And here’s the uncomfortable truth: most organizations cannot answer the three questions that matter most. What can your agents access? What can they do? And who’s governing any of it?

In this session, a BeyondTrust Phantom Labs™ threat researcher breaks down why agentic AI isn’t a brand-new security domain at all - it’s an accelerant poured onto a longstanding problem: identity sprawl and uncontrolled access. 

Drawing on active research and real-world testing, we’ll explore: 

How agentic AI explodes the attack surface by adding autonomous actors to already fragile identity systems 

Why most AI deployments inherit access by default - with no visibility, boundaries, or safety checks 

Why “best practices” for agent security don’t exist yet, and what’s realistically enforceable right now 

You’ll walk away with a clear, practical framework for assessing agentic AI risk, an understanding of why identity must become the control plane, and guidance on how to apply emerging standards - even while the market plays catch-up. 


Event details
Online event