In the latest episode of Detection at Scale, we sat down with Vjaceslavs (Slava) Klimovs, a security leader at CoreWeave responsible for threat modeling, detection, prevention, response, and compliance. After 13 years working on infrastructure security at Google and 18 months at Snapchat, Slava brings a hard-earned perspective on bootstrapping security programs in high-growth environments.
Our conversation explores his perspective that 40-50% of security work isn’t tied to concrete threat models, why detection observability should precede prevention controls in fast-moving environments, and how AI agents will make previously tolerable security gaps catastrophically exploitable.
Slava’s zero-to-one journey at CoreWeave reveals how security leaders must prioritize when resources are constrained and the business is moving at breakneck speed. His framework for threat-model-driven work, his mandate that the new detection platform be “AI-first from the get-go,” and his work on host integrity from firmware through userspace offer concrete examples for practitioners building similar programs. This conversation cuts through abstract security principles to focus on implementation: what to build first, how to justify security investments through threat models, and why the age of AI agents fundamentally changes the calculus on security debt.
Topics Covered
Building Security from Zero to One: Slava’s experience joining CoreWeave and the process of bootstrapping a security program at a hyper-growth AI infrastructure company.
Observability vs. Prevention: Why establishing deep security observability and forensic capabilities is often less intrusive and more critical than rolling out heavy-handed prevention controls early on in a fast-moving environment.
The “Threat Model” Problem: Slava’s hot take that 40-50% of security work is not done in relation to a concrete threat model, often driven by a culture of chasing “flashy” projects over solving complex, unglamorous problems.
Host Integrity at Scale: How CoreWeave verifies software provenance and integrity from the firmware level up to userspace, treating the boot process as a single verifiable model (a simplified sketch of the idea follows this list).
AI Agents & Technical Debt: How the introduction of AI agents into the enterprise will make historical technical debt (like over-provisioned access or exportable bearer tokens) unforgivable and immediately risky.
LLMs for Engineering Rigor: Using LLMs to strip the “fluff” from engineering design docs to force engineers to expose their true human intuition and local context, rather than just generating boilerplate content.
The AIUC-1 Standard: An overview of Slava’s contribution to the AIUC-1 standard for AI agent insurance, focusing on determining if an agent’s software provenance and environment make it “insurable”.
The Evolution of the SOC: The shift toward “AI-first” detection platforms and why the role of the traditional analyst is evolving into end-to-end detection engineering, where manual log analysis is replaced by engineering reliable detection code (a minimal example follows this list).
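To make the host-integrity topic concrete, here is a deliberately simplified sketch of the underlying idea: each stage of the stack is measured and compared against an expected value, and any mismatch breaks the chain. This is not CoreWeave’s implementation; the file paths and digests are hypothetical placeholders, and production systems typically anchor these measurements in hardware (for example, TPM-backed measured boot) rather than hashing files from a running OS.

```python
# Conceptual sketch only: verify each link of a firmware-to-userspace chain
# against expected "golden" measurements. Paths and digests are hypothetical.
import hashlib
from pathlib import Path

# Hypothetical allowlist of expected SHA-256 digests, firmware through userspace.
EXPECTED_MEASUREMENTS = {
    "firmware":   ("/boot/efi/EFI/firmware.bin", "aa11..."),
    "bootloader": ("/boot/efi/EFI/BOOT/BOOTX64.EFI", "bb22..."),
    "kernel":     ("/boot/vmlinuz", "cc33..."),
    "agent":      ("/usr/local/bin/node-agent", "dd44..."),
}

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_chain(expected: dict) -> list[str]:
    """Return the names of any stages whose measurement does not match."""
    failures = []
    for stage, (path, expected_digest) in expected.items():
        if not Path(path).exists() or sha256_of(path) != expected_digest:
            failures.append(stage)
    return failures

if __name__ == "__main__":
    bad = verify_chain(EXPECTED_MEASUREMENTS)
    print("chain verified" if not bad else f"integrity failure at: {', '.join(bad)}")
```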
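And to illustrate what “engineering reliable detection code” looks like in practice, here is a minimal sketch in the style of Python-based rule engines such as the one Panther uses: a rule is a small, testable function over a parsed log event. The event fields and the specific condition are illustrative, not a real schema or a rule discussed in the episode.

```python
# Minimal detection-as-code sketch: the rule is version-controlled, testable
# code rather than an ad-hoc query. Field names below are illustrative.

SUSPICIOUS_ACTIONS = {"CreateAccessKey", "UpdateAssumeRolePolicy"}

def rule(event) -> bool:
    """Fire when a sensitive IAM change is made without MFA (illustrative condition)."""
    if event.get("eventName") not in SUSPICIOUS_ACTIONS:
        return False
    mfa_used = event.get("additionalEventData", {}).get("MFAUsed") == "Yes"
    return not mfa_used

def title(event) -> str:
    """Human-readable alert title used for triage and deduplication."""
    actor = event.get("userIdentity", {}).get("arn", "<unknown principal>")
    return f"Sensitive IAM change by {actor} without MFA"

if __name__ == "__main__":
    # Unit-test style check against a sample event, as you would in CI.
    sample = {
        "eventName": "CreateAccessKey",
        "additionalEventData": {"MFAUsed": "No"},
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/example"},
    }
    assert rule(sample)
    print(title(sample))
```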
The transformation Slava describes aligns with Panther’s AI-powered capabilities: automatically handling the initial analysis that Slava emphasized as critical for reducing investigation time, while keeping a human in the loop for validation. By automating the pattern matching and correlation that LLMs excel at, security teams can focus on the threat modeling and strategic security decisions that require human expertise. Learn more about Panther AI and how we’re building the AI-first SIEM for the modern SOC.