Rethinking SOC Capacity: How AI Changes the Human Cost Curve

Jon Hencinski
December 12, 2025

For a modern CISO or SOC Director, security operations come down to one uncomfortable truth: the number one constraint in your SOC is analyst time.

If you have ever run a SOC, you have felt this in your bones. The people doing triage and investigations do not scale the way your cloud and SaaS environment does. Their time is finite. It is fixed. It is specialized. There is no built-in elasticity. Every new tool, every new log source, every new mandate from the business eventually runs into that wall.

At its core, this is a capacity modeling problem:

  • how many alerts show up
  • how long they take to handle
  • how many effective analyst hours you actually have

Once you do the math, the constraint becomes clear.
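Those three inputs are enough for a back-of-the-envelope model. Here is a minimal sketch; the function names are my own, and the 70 percent focus factor matches the heads-down assumption used later in this post:

```python
def effective_hours(analysts_on, shift_hours=8.0, focus=0.70):
    """Heads-down analyst hours actually available per day."""
    return analysts_on * shift_hours * focus

def demand_hours(alerts_per_day, avg_minutes_per_alert):
    """Hours of work the day's alert volume generates."""
    return alerts_per_day * avg_minutes_per_alert / 60.0

def utilization(alerts_per_day, avg_minutes_per_alert, analysts_on):
    """Fraction of capacity the queue consumes; > 1.0 means a growing backlog."""
    return demand_hours(alerts_per_day, avg_minutes_per_alert) / effective_hours(analysts_on)

# 200 alerts/day at a ~12-minute blended handle time, 5 analysts on shift
print(f"{utilization(200, 12, 5):.2f}")  # -> 1.43: demand already exceeds capacity
```

Three numbers, one ratio. Everything else in this post is a strategy for moving one of those three numbers.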

For a long time, the response has followed a familiar playbook. When alert volume went up because you moved more to the cloud, added tech for visibility, or took on new scope through acquisition, you did what almost everyone does: you hired more people, pushed all your data to the SIEM, leaned harder on automation, and handed the overflow, and a lot of the problem, to MDR.

And for a while, that worked. Coverage went up. More alerts were touched. Response got faster. Outcomes improved.

But as your environment sprawled across cloud, SaaS, and endpoints, and you poured more budget into detection engineering to better protect the business, the model started to strain. The human cost curve began to bend the wrong way. Every extra bit of coverage demanded a bigger chunk of human time and money. The ability to keep handing more overflow and more customized alerting to MDR started to break down. Either it became too expensive, or the level of service you needed was not what you actually got.

None of that is a failure on your part. You were playing the game with the tools that existed.

AI simply gives us another path. Instead of trying to scale humans linearly to match the problem, you change the shape of the human cost curve entirely. AI handles most of the triage and investigation work. Human time gets pulled up the stack and focused on the smaller set of decisions where judgment, context, and experience really matter.

If you want to close the gap between what should be investigated and what actually gets looked at, you eventually have to step back and ask a different question: how are we running this thing?

In the rest of this post, I walk through three ways I see teams tackling SOC capacity today:

  • an in-house SOC
  • an outsourced model, shaped by contracts and budget
  • a Human plus AI hybrid, where AI SOC agents take on most of the work so humans do not have to

None of these approaches are wrong. They are all rational answers to the same constraint. The point here is not to judge, but to show how each model behaves in the real world, and how AI can offer a different way forward.

Model 1: In-house SOC – the human-only cost curve

In our in-house SOC, we have a fictional team of 10 SOC analysts. They are working out of a SIEM, a handful of “good enough” tools for pivoting, and some basic SOAR playbooks to clear out the worst of the toil and repetitive tasks.

On paper, it looks fine. In practice, the ceiling is obvious: you can only cover as much as your humans have time to look at.

Automation helps. Tuning and filtering keep alert volume in a “manageable” range. But the limit is still there, every day. Detection and Response Engineering asks:

  • Can we ship that new detection?
  • Can we expand into insider trust and do more around DLP this quarter?
  • Can we finally get more visibility into suspicious job applicants and DPRK IT worker activity?

The honest answer is often: “Maybe, but we do not have the analyst hours.” The need to add more human time is always looming. If that sounds familiar, you are not alone.

How the math works

Let’s keep the example simple. We have 10 SOC analysts providing 24x7 coverage. Across shifts, that works out to roughly 5 analysts “on” during a typical day.

Now we ask a basic capacity question: what is the maximum number of alerts this team can realistically handle in a day?

Assumptions:

  • One SOC analyst works an 8 hour day.
  • You only get about 70 percent of that as true heads-down time once you factor in meetings, breaks, handoffs, and noise.
  • So each analyst gives you about 5.6 hours of actual triage and investigation time per day.

With 5 analysts on, that is about 28 hours of human time available for alerts in a given day.

Now look at a typical day:

  • The SOC sees 200 alerts across EDR, identity, network, and cloud.
  • Not every alert needs deep work. Say 80 percent are quick looks with a service time of about 10 minutes each.

That is:

  • 160 alerts × 10 minutes = 1,600 minutes
  • 1,600 minutes ÷ 60 ≈ 26.7 hours of work

You just spent 26.7 of your 28 available hours on quick triage.

From a capacity standpoint, that is roughly 95 percent utilization just on “fast” work. In queuing terms, anything above about 70 to 80 percent sustained utilization means you are one spike away from a backlog that never really clears.

That leaves roughly 1.3 hours for the remaining 20 percent of alerts that actually need deeper investigation.

Those 20 alerts are the interesting ones:

  • the odd AWS GuardDuty finding for unusual API calls
  • the EDR alert on remote management tooling that feels off
  • the network event someone bookmarked earlier to “dig into when things are quieter”

The problem is, it never really gets quieter. The capacity to go deep on those cases just is not there. Over-tuning creeps in. The pressure to “just close the queue” is always there. Automation helps, but you never quite unlock consistently higher-level work for your team.

If we assume a modest 2x time increase to investigate those 20 alerts, that is:

  • 20 alerts × 20 minutes = 400 minutes
  • 400 minutes ÷ 60 ≈ 6.7 hours of work

So you have about 6.7 hours of deeper investigative work trying to fit into roughly 1.3 hours of available capacity. That gap is where missed detections, shallow investigations, and analyst burnout live. Even more critically, that gap is where adversaries find success.
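Putting the whole day in one place, here is a quick sanity check of the numbers above, using the same assumptions (28 effective hours; 160 quick alerts at 10 minutes each; 20 deep alerts at 20 minutes each):

```python
CAPACITY = 5 * 8 * 0.70              # 28.0 effective analyst hours per day

quick_work = 160 * 10 / 60           # fast triage: ~26.7 hours
deep_work = 20 * 20 / 60             # deeper investigations: ~6.7 hours

leftover = CAPACITY - quick_work     # ~1.3 hours left for deep work
gap = deep_work - leftover           # ~5.3 hours that simply never fit

print(f"quick-triage utilization: {quick_work / CAPACITY:.0%}")  # -> 95%
print(f"daily deep-work shortfall: {gap:.1f} hours")             # -> 5.3 hours
```

A shortfall of roughly five hours a day is not a motivation problem or a talent problem. It is a structural deficit that compounds every single day.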

Your knobs in this model are pretty simple:

  • reduce arrival rate (fewer alerts)
  • reduce service time (handle each faster)
  • increase capacity (more effective analyst hours)
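Each knob is one term in the utilization ratio. A quick sketch of how far each lever moves the needle, using the same illustrative numbers (200 alerts, ~12-minute blended handle time, 5 analysts on shift):

```python
def utilization(alerts, avg_minutes, analysts_on, shift_hours=8, focus=0.70):
    """Demand hours divided by effective capacity hours."""
    return (alerts * avg_minutes / 60) / (analysts_on * shift_hours * focus)

baseline = utilization(200, 12, 5)   # ~1.43: over capacity every day
knob_1 = utilization(150, 12, 5)     # reduce arrival rate: ~1.07
knob_2 = utilization(200, 9, 5)      # reduce service time: ~1.07
knob_3 = utilization(200, 12, 7)     # add effective capacity: ~1.02
```

Even sizable moves on a single knob leave you near or above 100 percent utilization, which is why most teams end up pulling all three at once.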

Most teams try to do a little bit of all three. You add people when you can. You tune harder. You throw some SOAR at the problem. None of that is wrong. It is exactly what I would have done in the same position.

But it does raise a fair question: is there a different way to design this system?

Model 2: The outsourced approach (MDR/MSSP) – trading time for dollars

In this next scenario, you hit the in-house capacity wall and turn to an MDR or MSSP partner.

On paper, it is a reasonable trade. You swap the constraint of internal analyst time for budget. Instead of asking “how many more analysts do we need?” the question becomes “how much are we willing to spend?” The new constraint in the system is dollars, plus whatever incentives your provider is operating under.

Again, that is not wrong. It is a rational move.

MDRs are also capacity-constrained. They are running the same race you are. That pressure is part of what drove big improvements in orchestration, case management, and “decision support” in the first place. But the underlying constraint of human analyst time never went away.

So MDRs are naturally incentivized to become more efficient with the alerts they already have, not to keep expanding into new areas without charging more. They optimize the work, compress handle times, and keep things inside the bounds of the contract.

How to think about MDR in your model

When you bring an MDR into your capacity model, you change the equation, but you do not get to ignore it. You just move it around:

  • You still need to estimate how many alerts will be sent to them.
  • You still need to estimate how many of their notifications will bounce back to your team.
  • You still need to budget internal hours for:
    • case review
    • tuning and service reviews
    • “can you validate this?” escalations

Your internal equation becomes something like:

Internal hours = time spent coordinating with MDR
+ time reviewing their escalations
+ time chasing activity they could not fully validate
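That equation drops straight into the same capacity model. The per-item minutes below are hypothetical placeholders; you would substitute numbers from your own service reviews:

```python
def mdr_internal_hours(notifications_per_day, bounce_rate,
                       review_min=15, chase_min=30, coordination_hours=1.0):
    """Internal analyst hours per day an MDR relationship still consumes.

    notifications_per_day: escalations the MDR sends back to your team
    bounce_rate: fraction they could not fully validate and you must chase
    """
    review_hours = notifications_per_day * review_min / 60
    chase_hours = notifications_per_day * bounce_rate * chase_min / 60
    return coordination_hours + review_hours + chase_hours

# e.g. 30 notifications/day with 20 percent needing a chase
print(mdr_internal_hours(30, 0.20))  # -> 11.5
```

The point of the sketch is that the internal cost never drops to zero: even a well-run MDR relationship consumes a meaningful slice of the capacity you bought it to free up.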

The friction shows up when you want to grow with your own business and the attack surface. You want to:

  • turn on more detections
  • add new data sources
  • go deeper on SaaS, identity, or cloud native threats

And you want to do that without feeling like every incremental step requires a new round of budget justification.

In a lot of environments, MDR absolutely helps get alert management under control. But gaps in context about your environment can leave you and your team chasing their notifications for activity they were not fully confident in. You found budget for MDR. Now you are wondering if you need more budget and more internal time to run down their notifications.

If you are in that spot, it does not mean you made a bad call. It just means you might be running into the edge of that model.

So the same quiet question shows up again: is there a better way to structure this?

Model 3: The Human + AI hybrid SOC – changing the cost curve

Now imagine we take that same 10 person in-house SOC and pair it with specialized AI SOC agents.

This is not “one more tool in the stack.” This is about changing the nature of the work and the shape of your capacity model.

In this model, SOC analysts are not doing first pass triage on every alert that shows up in the queue. Instead, AI agents handle that front line. They review every alert, run the pivots, pull context, and then only escalate when a human should get involved:

  • activity that looks truly malicious
  • behavior that is unusual for the identity, host, or environment
  • situations where the agent is uncertain and wants a human decision
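The three escalation criteria above boil down to a simple routing predicate. A minimal, purely illustrative sketch (the field names are mine, not any product’s schema):

```python
from dataclasses import dataclass

@dataclass
class AgentFinding:
    verdict: str    # "malicious", "benign", or "uncertain"
    unusual: bool   # anomalous for this identity, host, or environment

def escalate_to_human(finding: AgentFinding) -> bool:
    """Only surface alerts a human should actually decide on."""
    # Malicious and uncertain verdicts go to a human, as does anything
    # unusual for the entity involved; clean benign findings do not.
    return finding.verdict != "benign" or finding.unusual

print(escalate_to_human(AgentFinding("benign", False)))    # -> False
print(escalate_to_human(AgentFinding("uncertain", True)))  # -> True
```

The interesting design question is the default: this sketch escalates anything that is not confidently benign, which trades a few extra escalations for not silently closing an uncertain case.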

In other words, the AI filters for the kind of problems analysts actually want to work on. You strip out the grind and orient the team around the interesting, high impact work. Less toil. More investigation. More progress for the business.

Rewriting the capacity equation

Back to our fictional 200 alerts per day.

Let’s say the true positive rate is 10 percent (which is generous for most orgs). That means, in reality, about 20 alerts on a given day truly need a human to look at them.

In a Human plus AI model, here is what changes:

  • All 200 alerts are still processed, but now they are handled in minutes by AI agents.
  • The agents investigate, enrich, and make an initial determination.
  • Only the 20 meaningful alerts are escalated to your 5 on shift SOC analysts.

Your human capacity equation becomes:

Human hours = time spent on escalated cases
+ time spent validating and tuning the AI

Let’s keep the numbers consistent:

  • 20 escalated alerts per day
  • 20 to 30 minutes of deeper work per alert

That is:

  • 20 alerts × 25 minutes (split the difference) = 500 minutes
  • 500 minutes ÷ 60 ≈ 8.3 hours of human work per day

Recall that we started out with 28 hours of effective analyst time available.

In utilization terms:

  • 8.3 ÷ 28 ≈ 30 percent utilization for escalated investigations

Even if you layer on:

  • 4 to 6 hours per day for tuning, threat hunting, and collaboration with detection engineering

You are still comfortably below 70 percent utilization. That is where you want to be if you care about:

  • handling spikes without panic
  • having time for project work
  • giving analysts space to think instead of just react

If alert volume spikes overnight and doubles, the agents flex. They do not complain. They do not sleep. Even if the true positive count doubles from 20 to 40 alerts that need human review, you are now at:

  • 40 alerts × 25 minutes = 1,000 minutes
  • 1,000 ÷ 60 ≈ 16.6 hours of human work

16.6 hours out of 28 is still under 60 percent utilization. You still have room to breathe.
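The hybrid numbers, including the spike scenario, fit in a few lines (10 percent true-positive rate and 25 minutes per escalated case, as above):

```python
CAPACITY = 5 * 8 * 0.70                    # 28 effective analyst hours/day

def hybrid_utilization(alerts_per_day, tp_rate=0.10, minutes_per_case=25):
    """Human utilization when AI agents absorb first-pass triage."""
    escalated = alerts_per_day * tp_rate   # only true positives reach humans
    return escalated * minutes_per_case / 60 / CAPACITY

print(f"{hybrid_utilization(200):.0%}")    # normal day -> 30%
print(f"{hybrid_utilization(400):.0%}")    # overnight doubling -> 60%
```

Compare that with the human-only model, where the same 200 alerts pushed the team past 95 percent on quick triage alone: the doubling that would have buried the old queue barely dents the hybrid one.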

The SOC analyst role shifts from “never ending triage” to “reviewer and decision maker.” Capacity modeling shifts from:

“How many analysts per X alerts per day?”

to:

“How many analysts per Y escalated investigations per day, plus hunts and projects?”

That is a very different conversation with your team and with your leadership.

What is the ROI?

You just got back roughly 70 percent of your available analyst capacity every day. Not as “free time,” but as redirected time.

Instead of spending their day on repetitive triage, your analysts are now:

  • going deeper on real investigations
  • threat hunting in the areas that matter most to your business
  • partnering with detection engineers on higher value detections
  • finding ways to measurably improve the security of the organization

From a capacity modeling standpoint, you:

  • flattened the human cost curve
  • lowered average utilization to a healthy range
  • built in real headroom for spikes and project work

Parting Thoughts 

By moving to a Human plus AI hybrid model, you are not admitting the old approaches were wrong. You are acknowledging that the environment changed, and your operating model has to catch up.

Go back to the simple math: in the human-only model, your 10-person team has about 28 effective hours a day and burns roughly 26.7 of those just on quick triage of 200 alerts. You are left with barely an hour to go deep on the 20 cases that actually matter. In a Human + AI model, those same 200 alerts still get processed, but agents handle the front-end toil and only the meaningful ~10 percent reach your team. Suddenly you are using that same 28 hours on high-value investigations, hunts, and detection work instead of queue grinding.

That is real ROI: you flatten the human cost curve, reclaim most of your analyst capacity, and point it at reducing risk instead of just surviving the day.

You are not replacing your analysts. You are giving them the space to do the work only they can do, and letting AI carry the parts of the job that were never a good use of human time in the first place.

And for me, that is the real promise of the AI SOC: not just fewer alerts, but a fundamentally better way to use the most constrained, and most valuable, resource you have.

Gartner Report: Innovation Insights - AI SOC Agents

Learn about Gartner's guidance on improving SOC capacity with AI agents
