There's a recurring pattern in technology markets where a genuinely transformative capability arrives, and the enthusiasm around it tips from reasonable optimism into something more dangerous: the assumption that the new thing eliminates the need for the old thing. We've watched this play out with claims that UEBA would replace SIEM and that SOAR would replace tier-1 analysts, and now we're seeing the same claim about AI SOC analysts replacing the human security operations function entirely.
A truly mature approach to AI in security operations embraces human expertise as a critical, forward-looking control layer. Some may argue that the presence of a human-staffed security operations team is evidence of an immature AI product, but that perspective fundamentally misunderstands the ideal division of labor in a modern SOC. The need for people is not a weakness; it is a strategic necessity, and it reflects an accurate understanding of how real-world security operations function at the cutting edge.
To understand why human expertise remains irreplaceable, you first need to reject a mental model that still dominates how most people think about security operations: the characterization of the SOC primarily as an alert management function, where a Level 1 analyst parses alerts, decides whether something needs action, escalates to Level 2, and the cycle repeats.
This pipeline model is precisely what automation and AI SOC tools are well-positioned to disrupt. Repetitive triage of known threat patterns, correlation of well-documented attack techniques, enrichment of alerts with contextual data — these are exactly the kinds of structured, repeatable workflows that machine intelligence handles efficiently. Nobody serious disputes that.
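To make that concrete, here is a minimal sketch of the kind of deterministic enrichment step this layer automates. The function and field names are hypothetical illustrations, not any vendor's actual API:

```python
def enrich_alert(alert: dict, threat_intel: dict, asset_db: dict) -> dict:
    """Attach routine context to a raw alert: the structured, repeatable
    work that automation and AI SOC tooling already handle well."""
    host = alert.get("hostname", "")
    return {
        **alert,
        # Correlate the source indicator against known-bad threat intel.
        "intel_match": threat_intel.get(alert.get("source_ip", "")),
        # Pull business context: asset ownership and criticality.
        "asset_owner": asset_db.get(host, {}).get("owner", "unknown"),
        "asset_criticality": asset_db.get(host, {}).get("criticality", "unknown"),
    }
```

Nothing in that loop requires human judgment once the lookups are defined, which is exactly why it automates so cleanly.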
But the modern SOC incorporates threat intelligence as a core component of its operations and uses it to drive and guide the other components: triage and investigation, detection engineering, and automated incident response. It's precisely here that the human-AI question becomes really interesting.
{{ebook-cta}}
Things get messy when new threats arrive. They come with new characteristics, new behaviors, new implications. When these new threats emerge, who defines what the detection engines will look for? What questions should an AI-driven investigation ask? This is not a rhetorical question. It has a concrete answer: humans do. Specifically, highly skilled security operations professionals who understand the threat landscape, threat actor behavior and methods, and the operational context of customer environments.
When I wrote the Gartner framework for detection engineering, it described an iterative, agile-inspired process that cycles through identifying, prioritizing, implementing, and managing security monitoring use cases, with teams continuously surfacing new monitoring requirements that drive new or modified detection content. The process described there is not the maintenance of static rule sets, but a continuous, intelligence-driven translation of emerging threat knowledge into operational detection capability. At that time, Anton Chuvakin and I also created a SOC framework that calls this out explicitly as one of the four core processes of a modern security operations center, sitting alongside monitoring, incident response, and threat intelligence production.
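As a rough sketch of that loop in code, with the stage names taken from the framework's own verbs and everything else illustrative:

```python
from enum import Enum, auto

class UseCaseStage(Enum):
    """Stages of the iterative detection engineering loop described above."""
    IDENTIFY = auto()    # a new monitoring requirement surfaces, often via threat intel
    PRIORITIZE = auto()  # rank it against risk and existing detection coverage
    IMPLEMENT = auto()   # author new or modified detection content
    MANAGE = auto()      # tune, measure, retire; findings feed the next IDENTIFY pass
```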

The screenshot above illustrates this dynamic. It's a recent case taken directly from a real-world environment. When Microsoft Defender for Endpoint flagged the presence of an OpenClaw agent on an endpoint, Prophet's AI investigation didn't just process a generic alert. It asked a structured set of targeted questions: What AI tool activity is present on this endpoint? What network activity is associated with AI tools here? Are there known vulnerabilities in this tool's dependencies?
Those questions didn't emerge from a foundation model's training data. OpenClaw is recent enough that its security implications (the concerns around autonomous agent behavior, the credential exposure risks, the dependency chain vulnerabilities) are not yet encoded in the major commercial models. GPT, Gemini, Claude: none of them carry sufficient operational knowledge about this specific threat to drive a high-quality investigation autonomously.
So where did those investigation questions come from? Prophet's security operations team. Human analysts with backgrounds at companies such as Mandiant, Red Canary, Expel, and Rapid7, firms known for bleeding-edge incident response expertise, identified the relevant concerns, structured the investigative logic, and encoded it into the platform. The AI then took those building blocks and used them as part of a broader investigation plan, combining what humans and machines each do best. That's not a workaround for an immature AI; it's the architecture working as intended, and another reminder of why depth of investigation matters more than alert volume.
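To make the human-to-machine handoff concrete, here is a minimal sketch of what analyst-authored investigative logic might look like once encoded for an AI investigation engine. The structure, field names, and escalation conditions are hypothetical illustrations, not Prophet's actual implementation; only the three questions themselves come from the example above:

```python
from dataclasses import dataclass

@dataclass
class InvestigativeQuestion:
    """One analyst-authored question the AI must answer during an investigation."""
    question: str            # natural-language question the AI agent pursues
    data_sources: list[str]  # telemetry the answer should draw on
    escalate_if: str         # analyst-defined condition that raises severity

# Hypothetical playbook for the OpenClaw detection described above.
# Humans define WHAT to ask; the AI executes the plan at machine speed.
OPENCLAW_PLAYBOOK = [
    InvestigativeQuestion(
        question="What AI tool activity is present on this endpoint?",
        data_sources=["edr_process_events", "installed_software_inventory"],
        escalate_if="agent runs with autonomous execution enabled",
    ),
    InvestigativeQuestion(
        question="What network activity is associated with AI tools here?",
        data_sources=["dns_logs", "proxy_logs"],
        escalate_if="outbound traffic to unrecognized model or plugin endpoints",
    ),
    InvestigativeQuestion(
        question="Are there known vulnerabilities in this tool's dependencies?",
        data_sources=["software_bill_of_materials", "vulnerability_feed"],
        escalate_if="dependency with a known exploited vulnerability",
    ),
]
```

The division of labor sits right there in the data structure: the questions and escalation conditions are human knowledge; the retrieval, correlation, and execution are machine work.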
There's a broader principle operating here that deserves more attention than it typically receives in discussions about AI in security.
Foundation models are trained on historical data. Their knowledge of threats is, by definition, backward-looking. For well-documented attack patterns — MITRE ATT&CK techniques that have been written about extensively, malware families with years of analysis behind them — the models can be genuinely useful. But the threat landscape doesn't wait for training data to accumulate.
Threat-oriented use cases, those implemented to detect specific threats, require finding activity tied to specific sources and destinations, or behaviors that map to particular tactics, techniques, and procedures. Many sources should feed this work, including industry reports and strategic threat intelligence resources, but the most operationally relevant source is always skilled human analysis.
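As an illustration of what such a threat-oriented use case can reduce to in practice, here is a hypothetical human-authored rule encoding one well-known TTP (an Office application spawning a shell, in the spirit of MITRE ATT&CK T1059); the event field names are assumptions:

```python
# Hypothetical threat-oriented detection: a human-authored rule that encodes
# knowledge of a specific TTP rather than a generic anomaly score.
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def matches_office_spawns_shell(event: dict) -> bool:
    """Return True when process telemetry matches the encoded TTP."""
    parent = event.get("parent_process_name", "").lower()
    child = event.get("process_name", "").lower()
    return parent in OFFICE_APPS and child in SHELLS
```

The rule is trivial to execute at scale; the hard part was knowing this parent-child pattern matters, and that knowledge came from a person.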
An AI SOC provider without a strong human expertise component faces a genuine structural problem: it either has to wait for third-party threat intelligence to mature enough to influence model behavior, or it has to ask customers to accept slower response to emerging threats. Neither is a satisfying answer when the adversary isn't waiting. This is the same reality we explored in dispelling the hype around what AI SOC analysts can and can't do on their own.
From a competitive assessment point of view, the presence of a world-class security operations team isn't evidence that Prophet's AI is limited. It's evidence that Prophet understands what AI is actually good at and what it isn't, and that it has built a system that plays to the strengths of both. We've made a similar argument about the state of AI in the SOC more broadly: force multiplier, not replacement.
Over time, it has been clear that successful SOCs put a premium on people over process and technology. People overshadow technology as a predictor of SOC success or failure. That observation hasn't been invalidated by AI. If anything, the arrival of capable AI investigation tools has made the quality of the human knowledge that shapes those tools more consequential, not less.
An important point to realize: a solution like Prophet AI not only makes an organization's own people more effective, it also brings along the value of the work done by the professionals behind the technology. The accumulated knowledge of our security analysts, their ability to look at a new threat, map its likely behaviors, identify the questions an investigator needs answered, and translate all of that into deterministic investigative logic, is not something you replicate by scaling compute.
The competitors implying that human security operations teams represent AI immaturity are, perhaps unintentionally, revealing something about their own architecture: a bet that foundation models will eventually catch up to the threat landscape on their own, without structured human knowledge injection. The deeper question — whether AI will simply replace cybersecurity jobs — has a more nuanced answer than either camp wants to admit.
That's a bet worth watching carefully. The OpenClaw example is instructive precisely because it's timely. It's not a retrospective case study about a threat from 2019. It happened recently, the concerns are still developing, and the investigative knowledge required to handle it properly isn't sitting in any model's weights.
The organizations that will win in AI-powered security operations are those that understand the division of labor clearly. AI handles scale, consistency, and execution speed. Humans handle knowledge creation, threat interpretation, and the definition of what questions actually matter. Trying to replace the second category with more of the first is a category error. Category errors have consequences that show up in breach disclosures, not product reviews.
Our human team is not a concession to the AI's limitations. It's the reason why Prophet's AI SOC analyst works so well.
This guide breaks down how AI SOC agents work and how to build an agile security operation around agentic AI.

