Building Custom AI SOC Agents with MCP
How security teams are orchestrating vendor capabilities with internal tooling through conversational bots, workflow automation, and enhanced developer tools
Over the past year, we’ve seen an explosion of AI capabilities built directly into security products: intelligent triage assistants, automated investigation tools, and AI-powered rule generation. These vendor-built capabilities deliver significant value by bringing sophisticated AI directly to security practitioners without requiring teams to become experts in agents or machine learning. But technical security teams have begun augmenting these products with custom agents that act as connective tissue between vendor-built AI capabilities, internal services, and organization-specific context to accomplish bespoke workflows.
MCP continues to play an essential role for technical teams that want to orchestrate multiple capabilities rather than use them in isolation. Consider a typical alert triage scenario: your agent needs to check your internal runbook database, query your SIEM for related activity, pull employee context from your HR system, and correlate findings from your threat intelligence platform, all while maintaining conversation context in your team’s Slack channel. This is where MCP’s standardized approach to tool integration creates leverage, allowing teams to combine pre-built agents from various vendors with their own internal tooling and custom MCP servers without writing extensive integration code for each connection.
The pattern emerging from organizations implementing custom security agents breaks down into three distinct approaches: conversational interfaces embedded in communication platforms like Slack, orchestrated multi-agent workflows coordinated through tools like n8n, and enhanced developer environments that bring security context directly into coding assistants like Claude Code or Cursor.
In this post, we’ll explore how these custom agents work, what makes them effective, and the practical challenges teams are encountering as they build this connective tissue between vendor capabilities and internal operations.
Hear from Practitioners About Real-World Implementations!
This Thursday, I’m hosting a webinar featuring security practitioners from companies like OpenTable and Block who run MCP in production settings. They’ll walk through their implementations, share what’s working, and pass along lessons from the workflows they’ve automated. If you’re building custom agents, this conversation will save you weeks of trial-and-error.
Register for “How MCP Helps Security Teams Move Faster”, Thursday, Nov 20th, 2025!
The Custom Agent Opportunity
Vendor-built AI capabilities excel at addressing common security workflows that span many organizations, like analyzing alerts and synthesizing investigation findings. These capabilities become even more powerful when connected to organization-specific context and internal systems that vendors can’t access or don’t know exist. Your internal runbook database, your custom asset inventory system, your team’s historical incident data, or your specific escalation logic represents the connective tissue that transforms generic AI capabilities into precisely tuned automation for your environment.
Consider a typical alert triage scenario. A vendor-provided triage agent can analyze the technical indicators of an alert and assess its severity using general threat intelligence. But it can’t (always) check whether this user recently submitted an IT ticket about unusual activity, whether this asset is scheduled for decommissioning next week, whether similar patterns triggered false positives in your environment last month, or whether this matches the specific attack scenarios your threat modeling identified as high-risk for your industry. Custom agents fill these gaps by orchestrating multiple capabilities together and combining vendor-built analysis with internal context gathering and organization-specific decision logic.
MCP’s standardized approach to tool integration is what makes this orchestration practical for technical security teams. Rather than writing custom integration code for each vendor API and maintaining separate authentication mechanisms for every tool, teams can expose capabilities through MCP servers and let agents discover and use them through a common protocol. This standardization reduces the integration burden from an N×M problem (where every agent needs custom code for every tool) to an N+M one: each capability is built once as an MCP server and reused across different agents and workflows. The result is connective tissue that’s maintainable by security engineers rather than requiring dedicated integration development teams.
Common Patterns of Custom Agents
Custom agents manifest in multiple distinct forms, each optimized for different security workflows and team interaction patterns. Understanding which pattern fits your use case determines both implementation complexity and operational effectiveness.
Pattern 1: Conversational Interfaces (Slackbots and Chatbots)
Conversational agents embedded in communication platforms like Slack or Microsoft Teams bring AI capabilities directly into the flow of security operations. These agents respond to natural language queries in channels or threads, maintaining conversation context while orchestrating multiple MCP servers to gather information and execute actions. The interface is familiar: just @-mention the bot and ask a question.
The power of conversational agents lies in their ability to meet analysts where they already work. During alert triage, an analyst can ask “what’s the context around this AWS console login from Romania?” and the agent orchestrates multiple lookups: checking if the user has traveled recently (HR system MCP server), reviewing their normal login patterns (SIEM MCP server), examining recent access modifications (IAM MCP server), and correlating with threat intelligence (vendor MCP server). All of this happens in a Slack thread where the entire team can see the investigation, ask follow-up questions, and collaborate on the response. The agent becomes a force multiplier that eliminates context switching while maintaining natural team communication patterns.
Implementation typically involves connecting a Slackbot application server (for sending and receiving messages), your SIEM MCP server (for querying security data), and any custom MCP servers you build to expose internal systems. The agent uses an LLM to understand queries and determine which tools to invoke, then formats results back into conversational responses. Thread context provides automatic conversation history, and the human-in-the-loop design ensures analysts maintain control over decisions while the agent handles the mechanical work of gathering context and synthesizing findings.
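The orchestration described above can be sketched in a few lines. This is a hedged illustration, not a production bot: the tool functions are hypothetical stand-ins for MCP server calls (HR system, SIEM), and a real implementation would let an LLM choose tools and route messages through the Slack Events API rather than the hard-coded logic shown here.

```python
# Minimal sketch of a conversational triage agent's lookup-and-synthesize
# loop. All function names and return shapes are hypothetical stubs.

def check_travel_status(user: str) -> dict:
    # Stub for an HR-system MCP server lookup.
    return {"user": user, "approved_travel_countries": ["RO"]}

def get_login_history(user: str, days: int = 30) -> list[str]:
    # Stub for a SIEM MCP server query; returns login source countries.
    return ["US", "US", "RO"]

def handle_mention(user: str, country: str) -> str:
    # Orchestrate both lookups and synthesize a thread-ready reply.
    travel = check_travel_status(user)
    logins = get_login_history(user)
    if country in travel["approved_travel_countries"] and country in logins:
        return f"{user}: login from {country} matches approved travel; likely benign."
    return f"{user}: no travel context for {country}; escalating for analyst review."
```

Calling `handle_mention("alice", "RO")` yields the benign verdict because the stubbed HR data shows approved travel; swapping the stubs for real MCP client calls preserves the same control flow.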
Pattern 2: Orchestrated Workflows (n8n and Low-Code Platforms)
Multi-agent workflows coordinated through platforms like n8n represent a different approach: rather than conversational interaction, these implementations encode complex security processes as visual workflows that orchestrate multiple specialized agents together. Each node in the workflow might represent a different agent capability: one for enrichment, another for severity scoring, and a third for determining escalation paths. The workflow tool provides the orchestration logic, error handling, and state management between them.
This pattern excels at automating repetitive, multi-step processes that must occur consistently and reliably. When a detection fires, an orchestrated workflow can immediately trigger an enrichment agent that gathers context from multiple sources, pass those findings to a specialized analysis agent that assesses severity and identifies similar historical incidents, then route to either automated remediation (for known-safe scenarios) or analyst review (for ambiguous cases) based on explicit business logic. The visual workflow makes this automation auditable and maintainable, and security engineers can see exactly what happens at each step, modify the logic without touching code, and add new agents or tools as capabilities evolve.
The n8n ecosystem has embraced MCP through community-built nodes that allow workflows to connect to any MCP server as a tool. This means a single workflow can orchestrate agents powered by different LLM providers, call multiple vendor MCP servers for different data sources, and invoke custom MCP servers for internal systems, all through a standardized interface. The workflow platform handles the complexity of chaining these calls together, managing failures and retries, and providing observability into how the automation executes. Teams can start with simple linear workflows (gather context → analyze → notify) and gradually add sophistication (parallel enrichment, conditional branching, human approval steps) as they learn what works for their environment.
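The simple linear workflow mentioned above (gather context → analyze → notify/route) can be expressed as plain code to show what the visual canvas encodes. This sketch is illustrative only: the node functions, payload fields, and retry helper mimic what a workflow engine like n8n provides, but none of this is an n8n API.

```python
# Sketch of a linear security workflow with the retry handling a
# workflow platform normally supplies. All node logic is stubbed.

def with_retries(fn, attempts: int = 3):
    # Re-run a failing node, as a workflow engine's retry policy would.
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:  # broad catch is intentional for the sketch
            last_err = err
    raise last_err

def gather_context(alert: dict) -> dict:
    # Enrichment node: would fan out to MCP servers in production.
    return {**alert, "asset_owner": "team-payments"}

def analyze(enriched: dict) -> dict:
    # Scoring node: assigns severity from alert type (stubbed logic).
    score = 8 if enriched["type"] == "impossible_travel" else 3
    return {**enriched, "severity": score}

def route(analyzed: dict) -> str:
    # Explicit business logic, not the model, decides the branch.
    return "analyst_review" if analyzed["severity"] >= 7 else "auto_close"

def run_workflow(alert: dict) -> str:
    enriched = with_retries(lambda: gather_context(alert))
    analyzed = with_retries(lambda: analyze(enriched))
    return route(analyzed)
```

The value of encoding the branch condition in `route` rather than in a prompt is auditability: an engineer can read exactly why an alert auto-closed, which mirrors what the visual workflow gives non-coders.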
Pattern 3: Enhanced Developer Tools (Claude Code, Cursor)
The third pattern integrates MCP directly into AI-powered coding assistants, bringing security context and capabilities into the detection development workflow. Tools like Claude Code or Cursor can connect to MCP servers that expose your SIEM APIs, allowing the AI assistant to query your actual security data, understand your log schemas, and test detection logic without leaving the development environment. This tight integration between AI assistance and security infrastructure accelerates the entire detection engineering lifecycle.
Consider the typical detection development process: an engineer focuses on a specific threat model, researches how it manifests in logs, writes detection logic in the SIEM’s query language, deploys it to a test environment, runs it against historical data, tunes for false positives, and iterates through multiple rounds of refinement. With MCP-enhanced coding assistants, this process compresses significantly. The engineer can describe the threat scenario in natural language, and the assistant queries the SIEM MCP server to retrieve sample logs showing how this activity appears in the environment, generates detection logic tailored to the actual log schema, and validates the syntax before deploying anything to production.
This pattern particularly benefits teams practicing detection-as-code, where rules are developed in version-controlled repositories rather than directly in SIEM interfaces. The coding assistant understands both the security context (what you’re trying to detect) and the technical implementation details (your data structure, query syntax, deployment process) by combining its training with real-time access to your security infrastructure through MCP. The result is faster iteration, fewer bugs, and detection rules that account for your environment’s specific characteristics rather than being adapted from generic examples.
During the day, I’m founder & CTO at Panther, where we’re building an AI SOC platform with an open-source MCP server that provides all the benefits described above! Whether you’re orchestrating agents in n8n, building detections in Claude Code, or using our AI copilot, the same MCP interface gives you programmatic access to your security operations. We open-sourced the server because this connective tissue layer should be transparent for teams building custom automation. Check it out on GitHub!
What’s Working (and What’s Still Rough)
Early adopters report significant time savings on routine tasks. Context gathering that previously required 15 minutes of jumping between dashboards now completes in under two minutes through conversational agents. Alert enrichment that analysts manually applied to every suspicious event now runs automatically via orchestrated workflows. Detection engineers who spent hours researching log schemas and testing queries now iterate faster with AI assistants that understand their specific data structure. But the real value might be institutional knowledge capture. Senior analysts’ investigation techniques encoded into agent prompts and tool design become reproducible workflows that junior analysts can leverage immediately, and runbook knowledge transforms from documentation into executable logic that gets tested every time an agent uses it.
The most critical insight from successful implementations is that agent effectiveness depends less on model sophistication and more on thoughtful tool design. Narrow, composable tools consistently outperform kitchen-sink capabilities. Rather than building a single “query_siem” tool that accepts arbitrary queries, practical implementations create focused tools like “get_user_login_history” and “check_asset_vulnerabilities” that each do one thing well and compose naturally together. Tools that return analysis-ready context rather than raw log lines dramatically improve agent performance, and production implementations start with read-only operations before carefully expanding to write capabilities with explicit approval steps and comprehensive audit logging.
The rough edges are real and require honest acknowledgment. Agents with access to 20+ tools sometimes struggle with tool selection; context window limits can become binding constraints during lengthy investigations; and error handling remains challenging when agents misinterpret data or make incorrect assumptions. Human review stays critical for high-stakes decisions. Security considerations around credential management, audit logging, and supply chain concerns for community-built MCP servers require the same disciplined approach you’d apply to any privileged access. These aren’t showstoppers, but they require thoughtful implementation rather than naive deployment.
Moving Forward
Custom AI agents represent a fundamental shift in how technical security teams approach automation, moving from consuming standalone AI features to building connective tissue that orchestrates multiple capabilities together. MCP’s standardized approach to tool integration makes this accessible to security engineers rather than requiring dedicated development teams, and the three patterns we’ve explored (conversational, orchestrated, developer-focused) provide clear templates for different security workflows.
The teams building custom agents now are learning which tools pair well, how to design agents that scale human judgment, and which operational patterns work at scale. This experimentation is valuable precisely because security operations are inherently organization-specific. Your tools, your infrastructure, and your team structure are all unique, and generic AI features can only take you so far.
If you’re considering building custom agents for your security operations, learn from teams who are already running these implementations in production. Our webinar this week features practitioners who will walk through their architectures, share what surprised them, and demonstrate actual workflows they’ve automated. Moving forward, the security teams operating at scale won’t be the ones with the best individual tools, but the ones who orchestrate them most effectively.
Thanks for reading Detection at Scale. If you found this valuable, please share it with your colleagues who are exploring AI-powered automation in security operations!
Cover Photo by Shubham Dhage on Unsplash