Mean Time to Detect (MTTD): Definition, Formula, and Why the Metric Fails in Practice

April 23, 2026

MTTD is one of the most-tracked SOC metrics and one of the least precisely defined. Ask three SOC leaders how they measure mean time to detect and you’ll get three different answers — sometimes measured from the earliest telemetry signal, sometimes from alert generation, sometimes from analyst confirmation. The variance matters because MTTD isn’t just a reporting metric. It drives investment decisions, staffing arguments, and vendor evaluations. When the definition shifts, so does the credibility of everything those decisions were based on.

This post walks through what MTTD actually measures, the formula used in most SOCs, where the traditional calculation produces misleading results, and how AI-driven investigation changes both the math and the operational meaning of the metric.

What Is Mean Time to Detect (MTTD)?

Mean Time to Detect is a security operations metric that measures the average time between when malicious activity begins and when the SOC detects it. It’s one of the core metrics in the broader SOC metrics framework, alongside MTTR (mean time to respond) and MTTI (mean time to investigate). MTTD tracks the attacker’s detection-free dwell window, which is the period during which an attacker can move laterally, escalate privileges, or wipe logs before your team knows they’re there.

Cut the lag and you shorten the attacker’s window. Let it grow and you increase the probability of a significant incident. That’s the operational reason MTTD matters. The reason MTTD is hard to measure is more subtle: the definition of “detected” varies, and most organizations haven’t settled on a consistent one.

How to Calculate MTTD (and Why the Formula Is the Easy Part)

The standard MTTD formula is straightforward. Subtract the time the malicious activity began from the time the alert fired for each incident, sum across incidents, and divide by total incidents.

MTTD = (Σ (Alert Time − Activity Start Time)) ÷ Number of Incidents

The problem isn’t the arithmetic. It’s the inputs. “Activity start time” requires retrospective analysis to determine, and most SOCs only have this information for incidents that were investigated deeply. “Alert time” is straightforward when you define detection as “the first alert fired,” but less straightforward when you define it as “the alert that led to investigation” or “the alert that led to the incident declaration.” Different definitions produce different numbers, and organizations reporting MTTD often don’t document which definition they’re using.
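Once the inputs exist, the arithmetic really is trivial. A minimal Python sketch, using hypothetical incident timestamps for illustration (in practice, establishing each activity start time is the hard part):

```python
from datetime import datetime

# Hypothetical per-incident timestamps: (activity_start, alert_time).
# Activity start times normally come from retrospective investigation.
incidents = [
    (datetime(2026, 4, 1, 9, 0),  datetime(2026, 4, 1, 9, 45)),
    (datetime(2026, 4, 2, 14, 0), datetime(2026, 4, 2, 16, 30)),
    (datetime(2026, 4, 3, 11, 0), datetime(2026, 4, 3, 11, 15)),
]

# MTTD = Σ (alert_time − activity_start) ÷ number of incidents
total_seconds = sum((alert - start).total_seconds() for start, alert in incidents)
mttd_minutes = total_seconds / len(incidents) / 60
print(f"MTTD: {mttd_minutes:.0f} minutes")
```

Swap in a different definition of "alert time" (first alert vs. the alert that triggered investigation) and the same three incidents can produce a very different number.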

{{ebook-cta}}

MTTD vs. MTTR: What Each One Actually Measures

MTTD and MTTR are often reported together and often confused with each other. The distinction is operationally important.

MTTD measures the time from the start of malicious activity to the point of detection. It’s about the SOC’s ability to see what’s happening, and it’s bounded by telemetry coverage, detection quality, and the time analysts take to acknowledge alerts. MTTR measures the time from detection to containment or resolution — the response phase, not the detection phase. A SOC can have excellent MTTD and poor MTTR, or vice versa, and the two metrics typically trace to different root causes. Shortening MTTD requires better detection logic, broader telemetry, and faster triage. Shortening MTTR requires better response workflows, automated containment, and clearer incident handoffs.
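The two phases separate cleanly on a single incident timeline. A minimal Python sketch, with hypothetical timestamps:

```python
from datetime import datetime

# One incident timeline (hypothetical timestamps for illustration).
activity_start = datetime(2026, 4, 1, 9, 0)    # attacker activity begins
detected       = datetime(2026, 4, 1, 11, 0)   # SOC detects the threat
contained      = datetime(2026, 4, 1, 11, 20)  # threat contained/resolved

# MTTD covers the detection phase; MTTR covers the response phase.
mttd_hours = (detected - activity_start).total_seconds() / 3600
mttr_hours = (contained - detected).total_seconds() / 3600
print(f"time to detect:  {mttd_hours:.1f} h")
print(f"time to respond: {mttr_hours:.1f} h")
```

Here detection took two hours and response twenty minutes: the fix for this incident lives on the detection side, which only the separated metrics reveal.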

Reporting them as a combined metric — “our mean time to detect and respond is X hours” — obscures which phase is the bottleneck. That matters because the fix for each is different. For teams pushing investigation time down to minutes, the MTTR reduction guide covers the response-side steps in detail.

Why Traditional MTTD Measurements Can Be Misleading

Traditional MTTD metrics stop at alert generation. An alert fires; the clock stops. That works as a reporting convenience but produces a number that doesn’t reflect whether the SOC actually detected anything in an operationally meaningful sense.

Consider the failure mode: an alert fires within seconds of malicious activity, and the alert sits unreviewed in a queue for six hours before an analyst looks at it. Traditional MTTD reports this as a near-zero detection time. Operationally, the attacker had a six-hour window to act before the organization had any awareness. The metric looks good. The SOC’s actual detection performance was poor. Detection happened when an analyst confirmed the threat, not when the alert fired.

The same distortion appears in reverse. A SOC with aggressive alert tuning and high-fidelity detections fires fewer alerts, so its traditional MTTD is averaged over a small sample and can look cleaner than the actual detection posture warrants. A SOC that fires on everything has more alerts, a better raw MTTD on paper because something triggers on the earliest weak signal, and worse real-world performance because analysts can't triage the volume.

The metric’s usefulness depends on whether the definition captures operational reality or just the moment a rule fired.
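The six-hour queue scenario above can be put in numbers. A Python sketch with hypothetical timestamps, comparing the two definitions of "detected":

```python
from datetime import datetime

# Hypothetical timeline: the alert fires seconds after activity
# starts, then sits unreviewed for six hours before confirmation.
activity_start = datetime(2026, 4, 1, 9, 0, 0)
alert_fired    = datetime(2026, 4, 1, 9, 0, 30)
confirmed      = datetime(2026, 4, 1, 15, 0, 30)

# Definition 1 — "detected when the alert fired": looks excellent.
paper_mttd_s = (alert_fired - activity_start).total_seconds()
# Definition 2 — "detected when an analyst confirmed": the
# attacker's real window of unawareness.
real_mttd_h = (confirmed - activity_start).total_seconds() / 3600
print(f"alert-based MTTD:        {paper_mttd_s:.0f} s")
print(f"confirmation-based MTTD: {real_mttd_h:.1f} h")
```

Same incident, same telemetry: one definition reports thirty seconds, the other roughly six hours.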

Why MTTD Drifts in Practice

Several factors drive MTTD drift. Telemetry coverage gaps create blind spots where attackers operate undetected for longer. Tool sprawl slows triage because analysts context-switch across consoles to investigate individual alerts. Alert fatigue desensitizes analysts to high-volume, low-fidelity alerts, and genuine threats buried in that volume go unreviewed. Understaffing means even well-tuned detection stacks produce queues that alerts sit in before anyone looks at them. These aren't independent factors: a SOC with telemetry gaps often runs broader detections to compensate, which creates alert fatigue, which extends analyst review time, which pushes MTTD in the wrong direction.

How to Measure MTTD in a Way That Reflects Reality

The more useful approach is to track multiple MTTD variants rather than a single headline number. Alert-to-acknowledgment measures how long it takes an analyst to start reviewing a fired alert. Alert-to-confirmation measures how long it takes the analyst to confirm or rule out the threat. Activity-to-detection measures from the start of the malicious activity to the point of confirmed detection, which requires retrospective investigation to establish. Each variant tells you something different about where the bottleneck lives. A gap between alert-to-acknowledgment and activity-to-detection means your detection logic is catching things late. A gap between alert-to-acknowledgment and alert-to-confirmation means your triage is slow even once alerts are seen.
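Tracking the variants is mechanical once each incident carries all four timestamps. A minimal Python sketch with hypothetical data:

```python
from datetime import datetime
from statistics import mean

# Hypothetical per-incident timestamps:
# (activity_start, alert_fired, acknowledged, confirmed)
incidents = [
    (datetime(2026, 4, 1, 9, 0), datetime(2026, 4, 1, 9, 10),
     datetime(2026, 4, 1, 10, 0), datetime(2026, 4, 1, 10, 30)),
    (datetime(2026, 4, 2, 8, 0), datetime(2026, 4, 2, 9, 0),
     datetime(2026, 4, 2, 9, 5), datetime(2026, 4, 2, 9, 50)),
]

def avg_minutes(pairs):
    """Average gap, in minutes, across (earlier, later) timestamp pairs."""
    return mean((later - earlier).total_seconds() / 60
                for earlier, later in pairs)

alert_to_ack    = avg_minutes((fired, ack) for _, fired, ack, _ in incidents)
alert_to_conf   = avg_minutes((fired, conf) for _, fired, _, conf in incidents)
activity_to_det = avg_minutes((start, conf) for start, *_, conf in incidents)

print(f"alert-to-acknowledgment: {alert_to_ack:.1f} min")
print(f"alert-to-confirmation:   {alert_to_conf:.1f} min")
print(f"activity-to-detection:   {activity_to_det:.1f} min")
```

Reporting the three numbers side by side makes the bottleneck visible: a large activity-to-detection gap points at detection logic, while a large alert-to-confirmation gap points at triage throughput.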

Benchmarks vary by industry risk profile. High-risk sectors like finance and critical infrastructure often target alert-to-confirmation under 30 minutes for priority alert categories. Less regulated environments may accept several hours. What matters more than the specific threshold is that the number reflects how long threats actually go undetected, not just how long a rule takes to fire.

How AI Changes the MTTD Calculation

When alert review no longer depends on human throughput, the alert-to-acknowledgment and alert-to-confirmation components of MTTD collapse. An AI SOC analyst investigates every alert at the moment it fires: pulling evidence, correlating across data sources, and producing a documented determination in minutes rather than hours. The activity-to-detection variant still depends on detection logic quality and telemetry coverage, which AI investigation doesn’t directly improve. But the portion of MTTD that was previously bounded by analyst queue depth stops being the limiting factor.

The structural shift matters because it changes what MTTD benchmarks should be. When the metric was bounded by human review capacity, a multi-hour MTTD was defensible — you couldn’t review faster than your analysts could work. When AI handles the review step at full investigation depth, multi-hour acknowledgment times stop being a throughput issue and start being a tool choice. That reframes MTTD from a staffing question to a detection quality and tooling question.

What to Measure Going Forward

The most useful MTTD reporting in an AI-driven SOC separates what’s bounded by detection logic from what’s bounded by investigation throughput. Detection logic questions show up as activity-to-detection gaps: the SOC couldn’t see the threat fast enough because the underlying rules, telemetry, or coverage didn’t capture it. Investigation throughput questions show up as alert-to-confirmation gaps: the SOC saw it but couldn’t review it fast enough. When you know which is which, you know what to invest in.

For most SOCs running traditional MTTD reporting, the first step is to stop reporting a single number and start reporting the variants separately. The second step is to audit which variant is the actual bottleneck. That audit almost always changes the investment conversation. Teams rethinking their measurement approach more broadly may also find the breakdown in 5 things to measure in an AI-driven SOC useful alongside the MTTD variants above.

See how Prophet AI investigates every alert at the same depth, reducing alert-to-confirmation time to minutes across 100% of alert volume. Request a demo.