SOC Capacity Modeling: How Many Alerts Can Your Team Really Handle?

Jon Hencinski
December 16, 2025

Let’s try something simple.

How much of your SOC capacity did you actually use last week or last month just to keep the alert queue under control?

Is there plenty of room left to turn on more detections, loosen some over-tuned logic, expand into insider threat and DLP, and keep up with employee-reported phishing?

Or is the team barely keeping up?

If you are not entirely sure, but you are seeing early warning signs of alert fatigue and burnout in 1:1s or performance reviews, you are not alone. You may not be able to see the capacity issues on a dashboard, but you can feel them. And it is totally normal to not be sure what to do next.

We are here to help. Try this.

Step 1: Estimate your available capacity

Start with a simple model.

Say you have 5 analysts working per day (swap this number for your own team).

Each analyst works an 8-hour shift, but you only get about 70% of that as productive time once you account for breaks, meetings, 1:1s, handoffs, and general noise. That gives you roughly 5.6 hours of productive time per analyst per day.


You do not have just one analyst; you have 5.

So: 5.6 hours × 5 analysts = 28 hours of total SOC capacity per day.

Now ask: how much of that analyst capacity do we want to direct to alert triage?

100%? 75%?

Let’s say we dedicate 60% (i.e., more than half) of available capacity to alert triage. That gives you about 17 hours per day for handling alerts, and leaves the remaining 40% for threat hunting, tuning and filtering, detection engineering, and research.

Write that number down. This is your alert triage capacity.
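If it helps to keep the math in one place, here is a minimal Python sketch of the same model. The constants are just the example numbers above, so swap in your own team's figures.

```python
# Back-of-the-napkin SOC capacity model. All constants are the example
# values from this post; replace them with your own team's numbers.

ANALYSTS_PER_DAY = 5        # analysts working per day
SHIFT_HOURS = 8             # hours per shift
PRODUCTIVE_FRACTION = 0.70  # breaks, meetings, 1:1s, and handoffs eat the rest
TRIAGE_FRACTION = 0.60      # share of capacity you choose to spend on triage

effective_hours = ANALYSTS_PER_DAY * SHIFT_HOURS * PRODUCTIVE_FRACTION
triage_hours = effective_hours * TRIAGE_FRACTION

print(f"Effective SOC capacity: {effective_hours:.1f} hours/day")  # ~28.0
print(f"Alert triage capacity:  {triage_hours:.1f} hours/day")     # ~16.8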

Step 2: Look at your arrival rate

Next, look at how often work shows up. Rough numbers are OK here.

How many alerts land in the queue and actually need handling by your SOC each day?

For our example, we will use 100 alerts per day. This is not a made-up number; it is in line with what we see from Prophet AI customers. Across all customers, a typical (around the 75th percentile) organization generates about 100 alerts per day.

So, our arrival rate is: 100 alerts per day.

Step 3: Approximate your service time

Now we need one more piece: service time, or how long it takes to work each alert.

We are going to use back-of-the-napkin math, but if you have the data, you can do this precisely by looking at alert open and close timestamps, calculating the deltas, and then looking at medians and 75th percentiles. There is a ton you can learn about your alert handling just from those distributions.
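If you do have that data, a few lines of Python will get you those distributions. This is a sketch only: the file name and the "opened_at" and "closed_at" column names are placeholders for whatever your SIEM or case management tool actually exports, and open-to-close deltas will overstate hands-on time for alerts that sat in the queue.

```python
import pandas as pd

# Placeholder export of alerts with open/close timestamps; adjust the file
# and column names to match what your tooling actually produces.
alerts = pd.read_csv("alerts.csv", parse_dates=["opened_at", "closed_at"])

# Minutes from open to close for each alert.
handle_minutes = (alerts["closed_at"] - alerts["opened_at"]).dt.total_seconds() / 60

print(f"Median handle time:  {handle_minutes.median():.1f} min")
print(f"75th percentile:     {handle_minutes.quantile(0.75):.1f} min")
print(f"Mean (service time): {handle_minutes.mean():.1f} min")
```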

To keep things simple, let’s assume the average alert takes 7 minutes of analyst time.

Of course, not all alerts are created equal. In reality, most will be quick triage, and a small percentage will need deeper investigation. For now, imagine something like:

  • 90% of alerts triaged and closed in about 5 minutes
  • 10% that turn into 20–25 minute investigations

On average, that blends out to roughly 7 minutes (0.9 × 5 + 0.1 × 22.5 ≈ 6.75 minutes).

For our simple model:

  • Arrival rate = 100 alerts per day
  • Service time ≈ 7 minutes per alert

So: 100 alerts × 7 minutes = 700 minutes of work per day.

700 minutes ÷ 60 ≈ 12 hours of alert triage work showing up per day.

Earlier, we said we had 17 hours of capacity earmarked for triage.

That means in this fictional setup, utilization is around 70%.
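In code, the same arithmetic looks like this. Again, these are the example numbers, not a prescription.

```python
# Continuing the example: how much of the triage budget does the daily
# alert volume consume?

ALERTS_PER_DAY = 100   # arrival rate
SERVICE_MINUTES = 7    # average minutes of analyst time per alert
TRIAGE_HOURS = 17      # triage capacity from Step 1

demand_hours = ALERTS_PER_DAY * SERVICE_MINUTES / 60
utilization = demand_hours / TRIAGE_HOURS

print(f"Triage work arriving: {demand_hours:.1f} hours/day")  # ~11.7
print(f"Utilization:          {utilization:.0%}")             # ~69%, i.e., around 70%
```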

Step 4: Sit with what 70% really feels like

Seventy percent loaded is right around the point where the team starts to feel the strain. There is enough work to keep everyone busy all the time, but not enough slack to absorb spikes, go deep on interesting leads, or confidently expand detection coverage.

The opportunity to add more detections starts to feel tight. Detection engineering creativity quietly becomes limited by human capacity.

First off: if you got this far and built even a rough view of your capacity vs. utilization, that is a big win. Most teams never put numbers to what they feel.

Armed with this data, it is natural to think:

  • maybe we add some headcount
  • tune the SIEM harder
  • throw more automation at the problem

All of those are fair, reasonable steps.

But there is another question we can ask:

If the bottleneck in this system is clearly human capacity, what would it look like to exploit that constraint using AI agents to handle most of the alert triage?

That is where things start to get interesting.

Rewriting the capacity equation

Now take that same setup: 5 analysts, 28 effective hours per day, about 17 of those hours earmarked for alert triage, and roughly 100 alerts a day coming in.

Let’s say your true positive rate is around 10% (which is actually pretty good). That means on a typical day, only about 10 of those alerts truly need a human to dig in.

In a Human + AI model, AI agents sit in front of the queue. They see all 100 alerts, run the pivots, pull the context, and make an initial determination. Only the ~10 alerts that really need human judgment get escalated to your team.

If those 10 escalations each take around 20–25 minutes of deeper work, that is roughly 4 hours of analyst time per day. Against your 17 hours of triage capacity, you are now using about 25% of that slice on high-value investigations instead of ~70% on grinding through everything.

You can see it at a glance:

Metric                        Human-only SOC    Human + AI SOC
Analysts on shift             5                 5
Effective hours per day       28                28
Hours for alert triage        17                17
Alerts per day                100               100
Human hours on alerts         ~12               ~4
Utilization of triage hours   ~70%              ~25%

Same team. Same 100 alerts per day. Completely different load on your analysts.

And if volume spikes and you see 200 alerts instead of 100, the agents still absorb the front end. Maybe 20 alerts get escalated. That is about 8 hours of human work, still comfortably inside your 17-hour triage budget, with room left for hunts, tuning, and project work.
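Here is the same comparison as a small sketch you can tweak. The 10% escalation rate and the 25-minute deep-dive time are the assumptions from the example above, not measurements; plug in your own true positive rate and investigation times.

```python
# Compare analyst load with and without AI agents in front of the queue.
# Example numbers from this post; the escalation rate and deep-dive time
# are assumptions to replace with your own data.

TRIAGE_HOURS = 17  # daily triage budget from Step 1

def human_only_hours(alerts_per_day, service_minutes=7):
    # Analysts touch every alert.
    return alerts_per_day * service_minutes / 60

def human_plus_ai_hours(alerts_per_day, escalation_rate=0.10, deep_dive_minutes=25):
    # Agents triage everything; only escalations reach analysts.
    escalated = alerts_per_day * escalation_rate
    return escalated * deep_dive_minutes / 60

for volume in (100, 200):
    baseline = human_only_hours(volume)
    with_ai = human_plus_ai_hours(volume)
    print(f"{volume} alerts/day: "
          f"human-only {baseline:.1f} h ({baseline / TRIAGE_HOURS:.0%} of triage budget), "
          f"human + AI {with_ai:.1f} h ({with_ai / TRIAGE_HOURS:.0%})")
```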

The SOC analyst role shifts from “touch every alert” to “reviewer and decision-maker on the ones that matter.” You keep the same team, but you reclaim most of their time from repetitive triage and redirect it into investigations, threat hunting, and improving detections, without having to double headcount or spend more just to stand still.

You do not have to overtune the system or hold back that new detection because you are worried about volume. You have space for it now. Your analysts are spending more time on higher-value work. Morale is up. Creativity is up. That uneasy activity that “just does not feel right” finally gets the attention it deserves.

Your team is not fighting with the alert queue anymore. The queue is working for them.

Parting Thoughts

Let’s bring this home.

All of this really comes down to doing a little bit of honest math and being willing to look at what it tells you: how many effective analyst hours you have in a day, how many alerts show up, and how long they really take to handle. Once you see those numbers, you can stop guessing about why the team feels stretched and start naming it as a capacity problem.

In a human-only model, that math usually says most of your time is going to just keeping the queue under control, with very little left for deep work, hunts, or new detections. In a Human + AI model, the same team and the same 100 alerts look very different. Agents absorb the front end and only escalate what truly needs a human, so your limited hours get pointed at the 10 or 20 cases that matter instead of the entire firehose.

That is how you change the ball game. Not by working harder or tuning until everything is silent, but by redesigning how work flows through the SOC so human time is treated as the scarce, high value resource it is. Run the numbers for your own SOC, see where you are, and then decide how you want AI to help you tilt that equation in your favor.
