Consider the state of your Tier 1 analyst at 2:00 PM on a Tuesday. They have been on shift for six hours. The queue is moving, yet the backlog is holding steady. An alert for "Suspicious PowerShell Execution" lands in the triage bucket.
The analyst knows the drill. They open the EDR console to grab the command line arguments. They pivot to the directory service to check the user’s group membership. They open a threat intelligence portal to check the hash. They check the CMDB to see if this is a developer workstation where this behavior might be benign.
This takes fifteen minutes. The work is not intellectually difficult. It is a mechanical data fetch and assembly.
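To make the mechanical nature of this work concrete, here is a minimal sketch of that same enrichment expressed as code. The client objects (`edr`, `directory`, `ti_portal`, `cmdb`) and their methods are hypothetical stand-ins for the four consoles, not any real vendor API.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    command_line: str
    user_groups: list[str]
    hash_verdict: str
    asset_role: str

def gather_evidence(alert, edr, directory, ti_portal, cmdb) -> Evidence:
    """The fifteen minutes of tab switching, expressed as four lookups."""
    return Evidence(
        command_line=edr.get_process(alert.process_id).command_line,
        user_groups=directory.get_user(alert.username).groups,
        hash_verdict=ti_portal.lookup(alert.file_hash).verdict,
        asset_role=cmdb.get_asset(alert.hostname).role,  # e.g. "developer workstation"
    )
```

Nothing in that function requires a decision. It is pure retrieval.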
The friction lies in the context switching.
Every new browser tab is a micro-interruption. By the time the analyst gathers the evidence required to make a decision, they have expended more energy on the process than on the judgment.
If the decision is a false positive, that fifteen minutes feels like wasted time. If it is a true positive, the analyst is already fatigued before the remediation begins.
This is the structural flaw in the modern SOC. We have solved the problem of collecting logs. We have solved the problem of detecting signals. We have not solved the problem of investigating those signals at scale without burning out human talent.
Current alert handling rests on three models: in-house staffing, MDR, and SOAR. Each is fundamentally flawed.
In-house staffing offers context but is costly and difficult to scale. Managed Detection and Response (MDR) provides 24/7 scale but lacks internal context, punting complex alerts back to the in-house team. Security Orchestration, Automation, and Response (SOAR) automates rote tasks, but its complex, brittle playbooks require constant maintenance, shifting the team's focus from threat hunting to playbook engineering.
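To see why those playbooks are brittle, consider this caricature of a single SOAR enrichment step. The field paths and the heuristic are hypothetical; the point is that the logic is welded to one vendor's schema, so any upstream change quietly breaks it.

```python
def enrich_powershell_alert(alert: dict) -> dict:
    # Hard-coded schema path: breaks the moment the EDR vendor renames a field
    cmdline = alert["raw"]["edr"]["process"]["cmdline"]
    # Hard-coded heuristic: misses every obfuscation trick not listed here
    if "-encodedcommand" in cmdline.lower():
        alert["severity"] = "high"
    return alert
```

Multiply this by hundreds of steps across dozens of playbooks, and the maintenance burden becomes a full-time engineering job.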
The industry is shifting toward a hybrid model that integrates generative AI agents into the workflow. This is distinct from the previous models because it changes the division of labor.
In the old models, humans were responsible for both rote execution and judgment. The human had to query the data (execution) and then decide whether it was malicious (judgment). SOAR attempted to automate the execution, but its rigid playbooks could not live up to that promise.
The hybrid human-AI model assigns the execution of the investigation to the AI. The AI acts as a virtual SOC analyst. It observes the alert, understands the required investigation steps, queries the necessary tools, and synthesizes the findings into a cohesive narrative.
The human retains ownership of the judgment. The analyst reviews the investigation package prepared by the AI, validates the findings, and makes the risk decision.
This is a fundamental shift because it decouples the volume of alerts from the volume of human effort. An AI agent can investigate one hundred alerts in parallel with the same consistency as it investigates one. It provides the scale of MDR with the context-aware execution of an in-house analyst, without the brittleness of SOAR playbooks.
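Here is a conceptual sketch of that division of labor, assuming a generic agent framework rather than any specific product: `investigate` is the AI's territory (execution), and `disposition` is the human's (judgment). The tool interfaces, the verdict logic, and the toy confidence scoring are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class InvestigationPackage:
    alert_id: str
    verdict: str        # e.g. "benign" or "malicious"
    narrative: str      # the cohesive story the AI assembled
    confidence: float   # how sure the AI is of that verdict

def investigate(alert, tools) -> InvestigationPackage:
    """AI side: observe the alert, query every relevant tool, synthesize."""
    findings = {name: tool.query(alert) for name, tool in tools.items()}
    narrative = "\n".join(f"{name}: {result}" for name, result in findings.items())
    verdict = "malicious" if any(f == "bad" for f in findings.values()) else "benign"
    confidence = 0.95 if all(findings.values()) else 0.50  # toy scoring
    return InvestigationPackage(alert.id, verdict, narrative, confidence)

def disposition(package: InvestigationPackage, analyst) -> str:
    """Human side: review the package and make the risk decision."""
    return analyst.review(package)
```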
To understand the impact, we must look at the specific steps of the alert lifecycle. Consider a typical "Impossible Travel" alert.
For high-confidence determinations, the alert can be auto-closed, and the workflow effectively ends there.
Where needed, a human-in-the-loop configuration requires human judgment before the alert is closed.
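A sketch of that routing decision for the Impossible Travel case follows. The 0.90 threshold and the checks named in the comments (VPN egress points, MFA outcome, device familiarity) are illustrative assumptions, not a recommended configuration.

```python
AUTO_CLOSE_THRESHOLD = 0.90  # illustrative; tune per alert type and risk appetite

def triage_impossible_travel(alert, agent, review_queue):
    # The agent runs the full investigation: e.g., was the "travel" really a
    # known VPN egress point? Did MFA succeed? Has this device been seen before?
    package = agent.investigate(alert)
    if package.verdict == "benign" and package.confidence >= AUTO_CLOSE_THRESHOLD:
        alert.close(reason=package.narrative)  # high confidence: auto-close
    else:
        review_queue.put(package)              # route to a human for judgment
```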
The primary change is that the analyst starts at the "Review" stage rather than the "Gathering" stage. This eliminates the fifteen minutes of tab switching. The consistency improves because the AI never gets tired and never skips a check because it "assumes" it knows the answer.
When you remove the mechanical burden of data gathering, the responsibilities of your team members evolve, and their focus shifts toward higher-value work.
Your focus shifts from capacity planning to capability planning. You spend less time calculating how many analysts you need to hire to cover the next shift rotation. You spend more time analyzing the quality of the investigations and the coverage of your detection logic. You become the architect of the system rather than the manager of the queue.
The manager stops acting as the "ticket police." Instead of harrying analysts to close tickets faster, the manager focuses on quality assurance. They review the decisions made by the analysts and the performance of the AI agents. They are responsible for tuning the thresholds where the AI is allowed to auto-close alerts versus where human review is mandatory.
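One plausible shape for those tuning decisions is a per-alert-type policy. The alert names and thresholds here are invented for illustration; choosing the real values is exactly what the manager is now accountable for.

```python
AUTO_CLOSE_POLICY = {
    "impossible_travel":             {"auto_close": True,  "min_confidence": 0.95},
    "suspicious_powershell":         {"auto_close": True,  "min_confidence": 0.90},
    "privileged_group_modification": {"auto_close": False},  # always human review
}

def may_auto_close(alert_type: str, confidence: float) -> bool:
    policy = AUTO_CLOSE_POLICY.get(alert_type, {"auto_close": False})
    return policy["auto_close"] and confidence >= policy["min_confidence"]
```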
The senior analyst becomes the "Teacher." They handle the complex, novel threats that the AI has not seen before. When the AI fails to reach a high-confidence conclusion, the senior analyst steps in. Crucially, they analyze why the AI struggled and provide feedback to the engineering team to improve the system.
The junior analyst's role undergoes the most significant change. Historically, junior analysts were "grunts" who learned by doing repetitive triage. In the hybrid model, the junior analyst acts as a "Reviewer." They read the investigations produced by the AI. This accelerates their learning curve. Instead of spending their day learning how to query a log file, they spend their day reading complete investigation narratives. They learn to identify malicious patterns faster because they are exposed to the logic of the investigation rather than the mechanics of the query.
The detection engineer moves closer to the investigation process. In addition to writing the rules that trigger alerts, they also define the data sources the AI needs to investigate each alert. More interesting is the feedback loop the AI creates for the detection engineer, surfacing insights into the effectiveness of the detections and their gaps.
The hybrid model is powerful, but it is not magic. It excels in specific domains and struggles in others. Below are common scenarios where an AI is likely to struggle and heavier human involvement is required.
The common denominator for failure is a lack of digital context. If the clue resides in a conversation at the water cooler, the AI will miss it.
The SOC of the near future will not be a room full of people staring at dashboards waiting for a bell to ring. That model is breaking under the weight of data volume.
In two to three years, the Tier 1 analyst role as we know it—the human router of tickets—will largely disappear. It will be replaced by the Investigation Reviewer and the Threat Hunter. The volume of alerts will no longer dictate the size of the team. Instead, the complexity of the threat landscape will dictate the skill level of the team.
The hybrid model allows us to return to the original mission of the SOC: finding and stopping adversaries. We lost that mission somewhere along the way, buried under a mountain of false positives and administrative overhead. This shift is our best chance to dig ourselves out.
Learn about Gartner's guidance on improving SOC capacity with AI agents

