Welcome to Detection at Scale, a weekly newsletter for scaling and sustaining security operations teams. We focus on effectively utilizing AI agents in the SOC with the best practices on context, prompts, and tools like MCP. If you enjoy reading Detection at Scale, please share it with your friends!
Every SOC analyst knows the frustration of data gathering during their rotation: a suspicious login alert fires, but the investigation becomes a scavenger hunt across multiple log sources. Is this user typically remote? Has this IP been flagged before in an incident? Are there related alerts from the same timeframe? Every minute spent chasing answers risks a delayed escalation, or another false positive burning valuable analyst time.
AI SOC analysts promise to solve this problem by automating the tedious work of alert triage and investigation, but AI agents can only be as good as the context they are provided. Feed them isolated alerts without the proper background context, and you'll get shallow analysis. Give them the best data at the right time, and they can reason through complex security scenarios with the depth of your best analysts. The question is: What data does it take for AI agents to thrive in the SOC?
The answer lies in "context engineering"—the art and science of providing AI systems with the right source and depth of information needed to solve complex problems. In security operations, this means going beyond simple alert forwarding to building rich, contextual intelligence that helps AI agents understand what happened, why it matters, who was involved, and how it fits into your organization's unique threat model. Effective AI-driven security operations require four critical layers of contextual data across alerts, identity, asset, and enrichments, helping AI agents understand individual security events and how these events fit into your organization's broader risk landscape.
This blog post will explore the data layers that transform AI from a basic alert processor into an intelligent security analyst. We'll examine how historical alert patterns provide crucial learning opportunities, why identity and asset context separate real threats from false positives, and how enrichment data helps AI agents make the nuanced decisions that effective security operations demand.
Most security teams are already collecting this data, so let's integrate it for AI-driven security operations that work to our advantage.
The Context Challenge
As teams introduce AI SOC analysts to automate triage and investigation workflows, the question becomes: how do we ensure these agents have access to the rich contextual intelligence that makes them as informed as their human counterparts?
Consider how your analysts approach a suspicious login alert. They don't just look at the raw events; they start building context by checking the user's recent activity patterns, cross-referencing the source IP against threat intelligence feeds, examining similar alerts from the same time window, and factoring in insider knowledge about ongoing projects. This contextual reasoning separates a 30-second false positive dismissal from a 30-minute investigation that goes nowhere.
Most agent failures are context failures rather than model failures, though poor tooling integration, limited data access, and insufficient breadth in external connections contribute as well. When AI-powered security tools produce shallow analysis, miss obvious patterns, or generate recommendations that feel disconnected from your environment, the underlying AI model isn't always the limiting factor. Modern large language models excel at complex reasoning when provided with comprehensive context and enough chain of thought. The challenge lies in gathering, structuring, and delivering that context effectively.
Alert and Signals History
The fastest way to triage an alert is to check if we have a record of it in historical patterns. As much as we strive for novelty in detection development, what typically ends up happening is continuous alerts from "the usual suspects" (looking at you, Jim from Marketing – kidding). But when an alert is genuinely unique, answering "have we seen this one before?" becomes crucial for determining whether it's malicious and whether established response procedures apply. This environmental learning—understanding whether patterns represent first-time occurrences versus recurring themes specific to your infrastructure—helps AI agents distinguish between suspicious geographic access and routine activity from your distributed remote workforce, or between genuine anomalies and normal business evolution.
Signals analysis takes this context check a layer deeper: rather than looking only for identical alert matches, all alert events are proactively analyzed. This is particularly useful for examining indicator attributes across all alerts, such as checking whether an IP address has appeared in other events, whether specific user attributes correlate with multiple alert types, or whether attack techniques recur across different timeframes. AI agents can easily take this history into context by checking for the same alert across 30-, 60-, or 90-day windows, alerts from the same actor across detections, or all alerts within the same time period. It's not a perfect science (e.g., how far back do we go? how much data do we add into context?). However, these checks typically yield additional indicator clues that guide further data gathering and increase confidence.
🤖 "This suspicious login pattern has triggered 12 similar alerts over the past 6 months. Ten were false positives related to our mobile development team's remote testing environment, but two led to confirmed account compromises during our Q3 security incident."
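As a rough sketch, the history lookup behind a summary like the one above might look like the following. Everything here is hypothetical for illustration: the alert store is an in-memory list, and the field names and rule IDs are invented. In practice this would be a query against your SIEM or security data lake.

```python
from datetime import datetime, timedelta

def related_alert_history(alerts, rule_id, actor, now, lookback_days=90):
    """Gather historical context for a new alert: prior fires of the same
    rule, plus any alert involving the same actor, within the lookback window."""
    cutoff = now - timedelta(days=lookback_days)
    recent = [a for a in alerts if a["created_at"] >= cutoff]
    return {
        "same_rule": [a for a in recent if a["rule_id"] == rule_id],
        "same_actor": [a for a in recent if a["actor"] == actor],
    }

# Hypothetical alert store for illustration
now = datetime(2025, 1, 15)
alerts = [
    {"rule_id": "impossible_travel", "actor": "jim", "created_at": now - timedelta(days=10)},
    {"rule_id": "brute_force", "actor": "jim", "created_at": now - timedelta(days=45)},
    {"rule_id": "impossible_travel", "actor": "alice", "created_at": now - timedelta(days=120)},
]
ctx = related_alert_history(alerts, "impossible_travel", "jim", now)
# ctx["same_rule"] holds 1 prior fire; ctx["same_actor"] holds 2 alerts for "jim"
```

The returned buckets feed directly into the agent's context window, and the `lookback_days` knob is exactly the "how far back do we go?" trade-off mentioned above.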
Outcome and quality tracking becomes particularly valuable when analysts mark alerts as false positives, confirm genuine issues, or escalate to incident response, which creates crucial learning signals for future triage decisions. AI agents can begin to recognize the common indicators that separate benign anomalies from genuine security concerns, but only when they have access to this historical resolution data.
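A minimal sketch of how those resolution outcomes could be rolled up into a triage signal; the disposition labels here are made up for illustration, not a standard taxonomy:

```python
def disposition_summary(resolutions):
    """Summarize past analyst dispositions for a detection rule into a
    simple signal an AI agent can weigh during triage."""
    total = len(resolutions)
    false_positives = sum(1 for r in resolutions if r == "false_positive")
    return {
        "total": total,
        "false_positive_rate": false_positives / total if total else None,
    }

# Mirrors the example above: 12 prior alerts, 10 benign, 2 confirmed compromises
history = ["false_positive"] * 10 + ["confirmed_compromise"] * 2
summary = disposition_summary(history)
# summary["false_positive_rate"] is roughly 0.83
```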
Identity Intelligence
While alert history provides the foundation for pattern recognition, identities provide organizational context about the human or non-human identity (e.g., a service account) behind the alert. Understanding who is involved in an alert—their role, typical behaviors, access patterns, and organizational context—is often the difference between a real attack and an admin who ran an overly privileged command in production, or someone from HR downloading many sensitive files from Google Drive.
User profiling enables AI agents to contextualize behaviors to understand whether unusual activity is genuinely suspicious or consistent with someone's job function. A DevOps engineer making widespread production changes during scheduled maintenance represents regular operational activity, while a marketing coordinator performing the same actions should trigger immediate investigation. Without this organizational context, AI agents default to taking all behaviors at face value, either overgeneralizing or missing legitimate issues.
🤖 "This user is a Senior Site Reliability Engineer based in our Seattle office. While the 2:47 AM login time is outside normal business hours, it correlates with a P1 incident escalation in our ticketing system and matches their historical pattern during infrastructure emergencies."
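One way to encode this role-based reasoning is a baseline of expected action categories per role. The roles and action names below are purely illustrative; real baselines would be derived from your IdP, HR system, or observed behavior:

```python
# Hypothetical role baselines; real ones would come from your IdP or HR system
ROLE_BASELINES = {
    "devops_engineer": {"prod_change", "ssh_access", "iam_read"},
    "marketing_coordinator": {"crm_access", "email_send"},
}

def is_expected_for_role(role, action):
    """Return True when the observed action falls within the role's baseline.
    Unknown roles get an empty baseline, so everything looks anomalous."""
    return action in ROLE_BASELINES.get(role, set())

is_expected_for_role("devops_engineer", "prod_change")        # True: routine
is_expected_for_role("marketing_coordinator", "prod_change")  # False: investigate
```

Defaulting unknown roles to an empty baseline is a deliberately conservative choice: unmapped identities surface for review rather than passing silently.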
Team and organizational dynamics add another layer of contextual understanding that helps AI agents reason about lateral movement, privilege escalation, and insider threat scenarios. When multiple users from the same team exhibit similar behavioral changes simultaneously, this might indicate a targeted campaign or reflect organizational changes like new project assignments or initiatives.
AI agents need access to current organizational data that reflects these changes in real-time, rather than static user profiles that become outdated and lead to incorrect assessments.
Asset Intelligence: The Technical Context
Just as identity context helps AI agents understand who is involved in security events, asset intelligence provides crucial insight into what systems are accessed and their relative importance to business operations. This technical context transforms generic security alerts into risk-prioritized investigations that align with business impact.
Asset classification and business criticality enable AI agents to understand the difference between a suspicious login to a development sandbox and identical activity targeting user data in production databases. A brute-force attack against a decommissioned test server might represent low-priority cleanup work, while the same attack pattern against customer-facing payment systems demands immediate escalation. AI agents with proper asset context can automatically adjust investigation priority and escalation procedures based on the criticality of affected systems.
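A sketch of criticality-aware prioritization under those assumptions; the asset names, tiers, and severity bumps are invented for illustration, and in practice the tiers would come from asset tags or a CMDB:

```python
# Hypothetical criticality tiers sourced from asset tags or a CMDB
ASSET_CRITICALITY = {"prod-payments-db": "critical", "dev-sandbox-3": "low"}
SEVERITY_BUMP = {"critical": 2, "high": 1, "medium": 0, "low": -1}

def adjusted_severity(base_severity, asset):
    """Shift an alert's severity (1=info .. 5=critical) by asset criticality,
    defaulting unknown assets to the medium tier."""
    tier = ASSET_CRITICALITY.get(asset, "medium")
    return max(1, min(5, base_severity + SEVERITY_BUMP[tier]))

adjusted_severity(3, "prod-payments-db")  # 5: escalate immediately
adjusted_severity(3, "dev-sandbox-3")     # 2: de-prioritize
```

The same detection fires in both cases; only the asset layer changes the outcome, which is exactly the separation between signal and priority described above.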
Infrastructure and deployment context help AI agents distinguish between legitimate cloud-native behaviors and potential security concerns. Auto-scaling events, serverless function executions, and container orchestration activities generate numerous security events that appear suspicious without proper infrastructure context. An AI agent that understands your Kubernetes deployment patterns can differentiate between regular pod lifecycle events and genuine lateral movement attempts while recognizing when cloud resource creation deviates from established automation patterns.
🤖 "This EC2 instance shows unusual outbound network connections to external IP addresses. However, the instance is tagged as 'ml-training-prod' and the connections align with our standard machine learning data pipeline that pulls from public datasets. The concerning factor is the timing—these connections typically occur during scheduled batch processing windows, but this activity is happening outside the defined maintenance schedule."
Vulnerability and patch context provide AI agents with essential risk assessment capabilities that help prioritize security events based on exploitability. A network scan targeting systems with known unpatched vulnerabilities represents a more urgent threat than identical activity against fully updated infrastructure. AI agents with access to vulnerability management data can correlate attack patterns with specific CVEs, helping security teams understand whether observed activity represents opportunistic scanning or targeted exploitation of known weaknesses.
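As a sketch, correlating scan targets with unpatched CVEs might look like this; the patch state is illustrative, and a real implementation would pull both lists from your vulnerability management platform:

```python
def scan_urgency(targeted_cves, patched_cves):
    """Rank scanning activity as urgent when any targeted CVE remains
    unpatched on the affected systems; otherwise treat it as routine."""
    unpatched = set(targeted_cves) - set(patched_cves)
    return ("urgent", sorted(unpatched)) if unpatched else ("routine", [])

scan_urgency(["CVE-2024-3094"], [])                  # urgent: still exploitable
scan_urgency(["CVE-2024-3094"], ["CVE-2024-3094"])   # routine: fully patched
```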
Enrichment Intelligence: External Context That Matters
The final layer of contextual intelligence comes from external data sources that provide AI agents with broader threat landscape awareness. This enrichment context helps transform isolated security events into comprehensive threat assessments incorporating global intelligence and external indicators.
IP and domain reputation give AI agents the external perspective to assess whether network connections represent legitimate business activity or potential threats. Geographic location data, hosting provider information, and reputation scores help AI agents understand the difference between routine CDN connections and suspicious command-and-control communications. However, adequate enrichment goes beyond simple reputation scores, including contextual factors like recent registration dates, certificate anomalies, and infrastructure patterns that indicate potential threat actor infrastructure.
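A minimal sketch of blending a base reputation score with the contextual factors mentioned above; the weights and the 30-day registration threshold are arbitrary placeholders, not tuned values:

```python
from datetime import date

def domain_risk(reputation_score, registered_on, today, cert_anomaly=False):
    """Blend a base reputation score (0=benign .. 100=malicious) with
    contextual signals: newly registered domains and certificate anomalies."""
    risk = reputation_score
    if (today - registered_on).days < 30:  # freshly registered infrastructure
        risk += 20
    if cert_anomaly:
        risk += 15
    return min(risk, 100)

domain_risk(40, date(2025, 1, 10), date(2025, 1, 15), cert_anomaly=True)  # 75
domain_risk(40, date(2023, 6, 1), date(2025, 1, 15))                      # 40
```

The point is the shape, not the numbers: a middling reputation score alone is ambiguous, while the same score on a five-day-old domain with a certificate anomaly is a much stronger signal.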
File and hash intelligence enables AI agents to quickly assess the risk level of unknown binaries, documents, and other artifacts discovered during investigations. Rather than treating every unknown file as equally suspicious, AI agents with access to comprehensive threat intelligence can prioritize investigation efforts based on known malware families, campaign attribution, and behavioral analysis from sandbox environments. This context is particularly valuable for prioritizing incident response efforts when multiple potential threats require simultaneous attention.
Campaign and technique correlation helps AI agents understand how individual security events fit into broader attack patterns and threat actor behaviors. When suspicious PowerShell execution correlates with techniques commonly used by specific threat groups, AI agents can provide analysts with relevant context about likely attack progression, typical dwell time, and effective containment strategies. This strategic context transforms reactive alert response into proactive threat hunting based on anticipated attacker behaviors.
Making Context Work in Practice
The four layers of contextual intelligence (historical alert patterns, identity awareness, asset classification, and external enrichment) elevate AI agents from basic alert processors into more sophisticated analysts. However, the real value emerges from the interconnections between these data layers and how they inform each other during investigations.
Consider a practical example: an AI agent receives an alert about unusual database queries from a service account. Historical context shows this is the first time this particular query pattern has been observed. Identity intelligence reveals the service account is associated with a financial reporting application that typically runs predictable batch processes. Asset context indicates the target database contains customer payment information, a high-value target requiring immediate attention. Enrichment data shows the queries originated from an IP address recently flagged in threat intelligence feeds associated with a financially motivated threat actor group.
Each context layer provides valuable information, but the combination creates a comprehensive assessment that enables rapid and informed decision-making. The AI agent can immediately escalate this incident as a likely attack against high-value financial data, providing analysts with the context needed for effective response rather than generic "unusual database activity" alerts.
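The combined assessment above can be sketched as a simple scoring function over the four layers. The weights and thresholds are illustrative only; a production system would tune them against historical dispositions rather than hard-code them:

```python
def triage_verdict(first_seen, identity_expected, asset_critical, intel_hit):
    """Fold the four context layers into an escalation decision."""
    score = (
        (2 if first_seen else 0)               # historical: novel pattern
        + (2 if not identity_expected else 0)  # identity: out of baseline
        + (3 if asset_critical else 0)         # asset: high-value target
        + (3 if intel_hit else 0)              # enrichment: threat intel match
    )
    if score >= 6:
        return "escalate"
    return "analyst-review" if score >= 3 else "auto-triage"

# The database-query scenario: novel, out of baseline, critical asset, intel hit
triage_verdict(True, False, True, True)    # "escalate"
triage_verdict(False, True, False, False)  # "auto-triage"
```

Note how no single layer forces escalation on its own; it is the accumulation across layers that pushes the scenario over the threshold, mirroring the reasoning in the example above.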
The key to successful AI-driven security operations is ensuring your AI agents have structured, current, and immediately accessible data when security events unfold. This requires thoughtful onboarding of the log sources and integrations that provide these angles of intelligence.
Modern security operations teams that get this right find their AI agents becoming genuine force multipliers, handling routine triage with human-level contextual awareness while freeing analysts to focus on complex investigations and strategic security initiatives. Investing in proper context engineering pays dividends through faster incident response, more accurate threat prioritization, and security operations that scale with business growth rather than becoming bottlenecks.
During the day, I’m the Founder @ Panther Labs, building intelligent AI agents and infrastructure to automate and accelerate triage and investigation times for security teams while improving accuracy and quality. If you want to learn more about how Panther incorporates these layers of intelligence into its AI SOC analyst agent, check out our homepage demo or request a demo! Panther is trusted by leading security teams like Coinbase, Asana, Discord, and more.
Related Reading