This week is a special episode of the Detection at Scale podcast. I’m usually the one asking the questions, but this time I’m in the guest chair, hosted by Julian Giuca, Panther’s Chief Product Officer. Our conversation covers the journey of building Panther’s AI SOC platform: the evolution of the SOC from human-led to AI-enabled, the shifts of the last few years that took LLMs from “autocomplete-plus-plus” to genuinely useful agents, how security teams are actually adopting this technology in production, and what we’ve learned building these systems as the operating patterns keep evolving.
The conversation traces back to two early bets we made in 2018, the security data lake and detection-as-code, both aimed at solving the data scale problem before AI emerged as the next wave. Many years later, those choices turned out to be the exact foundation AI agents needed: detection logic they could read and modify, and a query layer to access huge amounts of helpful context. The work since then has been figuring out what changes when agents are the primary readers of detections, where teams actually want to land on the autonomy spectrum, and what it means for a system to close the loop rather than just close the alert.
One frame I keep coming back to from this conversation is that the risk of not adopting AI in the SOC is greater than the risk of an agent making a mistake. That has not been true for any prior generation of automation, and it changes the calculus for how aggressively teams should move. I hope you enjoy the conversation! Please leave a comment with your thoughts.
Topics Covered
Why detections written for humans fail agents: Most detections hand an analyst a step-by-step runbook — check this, then this, then make a call. When you give that structure to an agent, you’re wasting the technology. What works is describing the threat model, the evidence you’d want to see, and the judgment criteria a senior analyst would apply, then letting the agent reason from there.
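As a rough illustration of the difference, here is what an agent-oriented detection can look like in Python. The rule() and title() functions mirror a common detection-as-code convention, but the AGENT_CONTEXT block, its field names, and the AWS example are purely illustrative, not a prescribed schema:

```python
# Illustrative Python detection written for an agent reader. Instead of a
# step-by-step runbook, it captures the threat model, the evidence worth
# gathering, and the judgment a senior analyst would apply.
# The AGENT_CONTEXT structure and field names are hypothetical.
AGENT_CONTEXT = {
    "threat_model": (
        "An attacker with a stolen credential creates a new IAM access key "
        "to establish persistence outside the compromised session."
    ),
    "evidence_to_gather": [
        "Other API activity from the same principal in the last 24 hours",
        "Whether the source IP and user agent match the principal's history",
        "Whether a change ticket or IaC pipeline explains the key creation",
    ],
    "judgment_criteria": (
        "Benign if tied to an approved automation or change window; escalate "
        "if the principal is human, the source is new, and no ticket exists."
    ),
}


def rule(event) -> bool:
    # Fire on IAM access key creation; the agent applies the judgment above.
    return (
        event.get("eventSource") == "iam.amazonaws.com"
        and event.get("eventName") == "CreateAccessKey"
    )


def title(event) -> str:
    actor = event.get("userIdentity", {}).get("arn", "<unknown principal>")
    return f"IAM access key created by {actor}"
```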
Closing the alert vs. closing the loop: Closing an alert clears the queue. Closing the loop means the system gets smarter every time it runs. An agent without native access to your detection logic can triage, but it cannot improve the underlying detection that fired, which means alert volume stays flat or grows because nothing is actually learning.
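Here is a hedged sketch of what that feedback step can look like: a triage verdict becomes a reviewable change to the detection that fired, rather than just a closed ticket. The dataclasses and the propose_detection_update helper are hypothetical, not an existing API:

```python
# Hypothetical sketch of closing the loop: a triage verdict is turned into a
# reviewable change to the underlying detection, so the system improves
# instead of just clearing the queue. Nothing here is an existing API.
from dataclasses import dataclass


@dataclass
class TriageVerdict:
    alert_id: str
    detection_id: str
    outcome: str                          # "true_positive" or "false_positive"
    reason: str
    suppress_pattern: str | None = None   # e.g. a service account to allowlist


@dataclass
class DetectionUpdate:
    detection_id: str
    change: str
    requires_human_review: bool = True    # a human approves the diff before merge


def propose_detection_update(verdict: TriageVerdict) -> DetectionUpdate | None:
    """Turn a triage outcome into a proposed change to the detection itself."""
    if verdict.outcome == "false_positive" and verdict.suppress_pattern:
        return DetectionUpdate(
            detection_id=verdict.detection_id,
            change=(
                f"Add '{verdict.suppress_pattern}' to the rule's allowlist: "
                f"{verdict.reason}"
            ),
        )
    # True positives might instead raise severity or add evidence hints; omitted.
    return None
```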
The three-part inflection that moved AI from code autocomplete to agentic work: The convergence of reasoning models, tool calling, and MCP (Model Context Protocol) is what made agents capable of doing real investigative work in the SOC, not any single capability in isolation but all three arriving at roughly the same time.
Architectural prerequisites most teams don’t have yet: Python-based detections and a centralized cloud data warehouse weren’t built with agents in mind, but they created exactly the foundation agents need: detection logic they can read and modify, and a data layer they can query directly and federate out to sources like BigQuery, Elastic, or Snowflake via MCP.
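To make the federation idea concrete, here is a minimal sketch of exposing a query tool to an agent over MCP. It assumes the official MCP Python SDK's FastMCP interface; the in-memory SQLite table is only a stand-in for a real warehouse like Snowflake or BigQuery, and the tool name and schema are illustrative:

```python
# Sketch of exposing the security data layer to an agent as an MCP tool.
# Assumes the official MCP Python SDK's FastMCP interface; the in-memory
# SQLite table stands in for the real warehouse (Snowflake, BigQuery, etc.),
# and the tool name and schema are illustrative.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("security-data-lake")

# Stand-in for the warehouse: one table of normalized login events.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE logins (ts TEXT, actor TEXT, src_ip TEXT, outcome TEXT)")
conn.execute(
    "INSERT INTO logins VALUES ('2025-01-01T00:00:00Z', 'alice', '203.0.113.7', 'success')"
)


@mcp.tool()
def query_logins(sql: str) -> list[dict]:
    """Run a SQL query against the login table and return rows as dicts."""
    # A real deployment would enforce read-only access and authentication here.
    cursor = conn.execute(sql)
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]


if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio for an MCP-capable agent
```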
Agent autonomy tolerance as a function of workflow risk: There’s no single right threshold for how much independence to give an agent. High-stakes workflows need tighter guardrails; routine triage can run with more autonomy. Teams applying one blanket policy across everything are either underusing the technology or taking on more risk than they realize.
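One way to make that concrete is an explicit policy that tiers autonomy by workflow. The workflow names and tiers below are hypothetical, just a sketch of the idea:

```python
# Illustrative policy that tiers agent autonomy by workflow risk instead of
# applying one blanket setting. Workflow names and tiers are hypothetical.
from enum import Enum


class Autonomy(Enum):
    FULL = "act autonomously, log for later review"
    PROPOSE = "draft the action, require human approval before executing"
    OBSERVE = "gather evidence and summarize; humans decide and act"


# Higher-stakes workflows get tighter guardrails.
AUTONOMY_POLICY = {
    "triage_low_severity_alert": Autonomy.FULL,
    "enrich_alert_with_context": Autonomy.FULL,
    "tune_noisy_detection": Autonomy.PROPOSE,
    "disable_user_account": Autonomy.PROPOSE,
    "isolate_production_host": Autonomy.OBSERVE,
}


def allowed_autonomy(workflow: str) -> Autonomy:
    # Default to the most conservative posture for anything unrecognized.
    return AUTONOMY_POLICY.get(workflow, Autonomy.OBSERVE)
```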
From 50% to 110% alert coverage: Going from monitoring half your alerts to exceeding full coverage — with that extra capacity running as proactive threat hunting agents around the clock — isn’t theoretical. It doesn’t mean fewer security people; it means you need people who know how to work with agents, prompt them well, and encode your team’s expertise into something that scales.
If any of these themes map to how you're thinking about your own SOC, the full conversation with Julian is worth your time. And if you want to see how to build this closed-loop architecture in practice, learn more about Panther and our AI capabilities.