In this episode of Detection at Scale, Steven Gubenia, Head of Detection Engineering for Threat Response at Cisco Meraki, shares his practical framework for implementing AI agents in security operations. With experience ranging from one-man security teams to leading detection engineering at scale, Steven brings a refreshingly pragmatic perspective on how organizations can thoughtfully integrate AI into their security workflows. He now leads initiatives that bridge traditional SOAR capabilities with modern agentic workflows, emphasizing that AI enhancement requires solid foundational processes to avoid the "garbage in, garbage out" trap.
The conversation centers on Steven's "crawl, walk, run" methodology for implementing AI in security operations, moving from simple data enrichment to autonomous decision-making with appropriate human oversight. He discusses the evolution of human-in-the-loop strategies, explaining how teams can build trust in AI agents over time while maintaining proper audit trails and governance. The discussion explores practical implementation details around enrichment agents, triage automation, and containment workflows, highlighting the importance of scoped permissions and security considerations when deploying AI agents with real operational impact.
Steven also addresses the organizational side of AI adoption, emphasizing the critical need for top-down buy-in, comprehensive training programs, and messaging that focuses on individual productivity benefits rather than cost-cutting narratives. Throughout the discussion, he reinforces that while AI won't replace security professionals, those who learn to use it effectively will significantly outcompete those who don't.
Key Takeaways
The Crawl, Walk, Run Framework Works: Steven's three-phase approach provides a practical roadmap. Start with data enrichment agents, progress to reasoning models for triage, then move to action-taking agents for containment (a minimal sketch follows these takeaways). This graduated approach builds organizational trust while delivering immediate productivity gains.
Human-in-the-Loop Evolves Over Time: Rather than reviewing every agent decision forever, successful implementations start with intensive human oversight and gradually shift to audit-based review as confidence builds.
Detection Engineering Becomes Mission-Critical: As AI enables more granular, environment-specific detection logic, detection engineering skills become more valuable, not less. Organizations will shift from generic rule sets to highly customized detection logic tailored to their threat landscape and infrastructure.
Organizational Change Requires an Individual Value Proposition: Top-down AI mandates fail without proper training and clear individual benefits. Successful adoption emphasizes how AI eliminates tedious work and frees analysts to focus on high-value activities that advance their careers.
Security Considerations Are Engineering Problems: Concerns about AI agent security, MCP server trust, and permission scoping are solvable through proper engineering practices, vendor management processes, and incremental deployment strategies; they are engineering problems to work through, not barriers to adoption (see the scoping sketch below).
The Productivity Multiplier Is Real: Steven's prediction that AI-proficient security professionals will outcompete their peers isn't hyperbole; it's already happening. Entry-level positions are evolving, but professionals who master AI augmentation will have a significant competitive advantage in the job market.
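To make the crawl/walk/run progression concrete, here is a minimal, self-contained Python sketch of the three phases and of a human-in-the-loop gate that starts as per-action approval and later relaxes to audited auto-approval. Everything in it (the Alert type, the stubbed lookups, the gate functions) is a hypothetical illustration, not code from the episode or from any particular SOAR platform.

```python
from dataclasses import dataclass, field


@dataclass
class Alert:
    host: str
    signal: str
    context: dict = field(default_factory=dict)       # crawl: enrichment output
    verdict: str | None = None                         # walk: triage output
    actions: list[str] = field(default_factory=list)   # run: containment output


def enrichment_agent(alert: Alert) -> Alert:
    """Crawl: only add context to the alert; never decide or act."""
    alert.context["asset_owner"] = "it-helpdesk"                       # stubbed lookup
    alert.context["intel_match"] = alert.signal == "cobaltstrike_beacon"
    return alert


def triage_agent(alert: Alert) -> Alert:
    """Walk: a reasoning step proposes a verdict for a human to review."""
    alert.verdict = "malicious" if alert.context.get("intel_match") else "benign"
    return alert


def containment_agent(alert: Alert, approve) -> Alert:
    """Run: take a real action only when the human-in-the-loop gate allows it."""
    if alert.verdict == "malicious" and approve(alert):
        alert.actions.append(f"isolate:{alert.host}")
    return alert


# Early on, every action goes through a person; as confidence builds, the same
# gate can auto-approve low-risk actions and record them for after-the-fact audit.
def manual_gate(alert: Alert) -> bool:
    return input(f"Isolate {alert.host}? [y/N] ").strip().lower() == "y"


def audited_gate(alert: Alert) -> bool:
    print(f"AUDIT: auto-approved isolation of {alert.host}")  # stand-in for real audit logging
    return True


if __name__ == "__main__":
    alert = Alert(host="laptop-042", signal="cobaltstrike_beacon")
    alert = containment_agent(triage_agent(enrichment_agent(alert)), audited_gate)
    print(alert.verdict, alert.actions)
```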
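Along the same lines, a small sketch of scoped permissions for an action-taking agent: the agent carries an explicit allowlist of actions and target prefixes, and anything outside that list is refused. The AgentScope type and the prefix check are illustrative assumptions; in practice the same constraint would also be enforced at the credential layer with least-privilege API tokens.

```python
from dataclasses import dataclass


class ScopeError(Exception):
    """Raised when an agent requests something outside its granted scope."""


@dataclass(frozen=True)
class AgentScope:
    allowed_actions: frozenset[str]   # e.g. isolate_host, but never delete_user
    allowed_targets: frozenset[str]   # target name prefixes, e.g. workstations only


def enforce_scope(scope: AgentScope, action: str, target: str) -> None:
    """Refuse anything the containment agent was not explicitly granted."""
    if action not in scope.allowed_actions:
        raise ScopeError(f"action not permitted for this agent: {action}")
    if not any(target.startswith(prefix) for prefix in scope.allowed_targets):
        raise ScopeError(f"target out of scope for this agent: {target}")


containment_scope = AgentScope(
    allowed_actions=frozenset({"isolate_host"}),
    allowed_targets=frozenset({"laptop-", "ws-"}),
)

enforce_scope(containment_scope, "isolate_host", "laptop-042")    # allowed
# enforce_scope(containment_scope, "disable_user", "dc-01")       # raises ScopeError
```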