MTTD vs MTTR: How Internal Security Teams Pair Detection and Remediation Time

MTTD and MTTR are two halves of the same vulnerability lifecycle, and reporting one without the other answers a different question than the audit committee, the regulator, or the engineering leader is actually asking. MTTD measures the elapsed time between a vulnerability becoming present on an asset and the security programme finding it on the live record. MTTR measures the elapsed time between that finding becoming open and the verified closure. The two metrics are throttled by different bottleneck classes, anchored to different upstream clocks, and answer different operational questions, but they reconcile to a single end-to-end lifecycle when paired against the same severity bands and the same observation window.1,3,4,6,7

This research lays out how MTTD and MTTR actually behave inside enterprise vulnerability programmes. It covers the four-state lifecycle the metrics measure, the clock-start choices that decide which numbers are defensible, the per-channel detection latency that headline MTTD usually hides, the cycle-time stages that drive MTTR, the framework anchors that name the SLA windows, the failure modes that produce optically healthy numbers without operating health, and the reporting frame that survives audit scrutiny. The argument is not that one number is better than the other. The argument is that detection time and remediation time are linked operating decisions, and reporting them apart hides which one the programme actually has to fix.5,9,11,13,14,17

The four-state vulnerability lifecycle MTTD and MTTR sit on

MTTD and MTTR measure transitions in a four-state lifecycle. State one is asset-exposed: the vulnerable component is deployed to the asset and is reachable on the affected attack surface. State two is finding-open: the security programme has identified the exposure as an open finding on the live engagement record. State three is fix-verified: the remediation has been deployed and confirmed by retest against the same evidence that opened the finding. State four is finding-closed: the closure has been recorded on the engagement record with the verifying evidence attached.

MTTD is the elapsed time from state one to state two. MTTR is the elapsed time from state two to state four. The total exposure window is MTTD plus MTTR plus any reopen interval, with exception-closure tracked separately so the residual-risk picture remains visible. Programmes that compress the four states into a binary open/closed dashboard lose the ability to attribute lifecycle latency to detection or remediation, and the audit committee question of whether the programme is finding risk fast enough or closing it fast enough has no answer in the data.5,9,17
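
The arithmetic behind the pairing is simple once the four state timestamps live on the finding record. A minimal sketch, assuming each finding carries timestamps for the four lifecycle states (the field names here are hypothetical, not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Finding:
    exposed_at: datetime              # state one: vulnerable component deployed or CVE applies
    opened_at: datetime               # state two: finding opened on the engagement record
    closed_at: Optional[datetime]     # state four: closure recorded with verifying evidence
    reopen_interval: timedelta = timedelta(0)  # time spent re-open after a failed retest

def mttd(f: Finding) -> timedelta:
    """Detection latency: state one to state two."""
    return f.opened_at - f.exposed_at

def mttr(f: Finding) -> Optional[timedelta]:
    """Remediation latency: state two to state four (None while the finding is still open)."""
    return (f.closed_at - f.opened_at) if f.closed_at else None

def exposure_window(f: Finding) -> Optional[timedelta]:
    """Total exposure: MTTD plus MTTR plus any reopen interval."""
    r = mttr(f)
    return (mttd(f) + r + f.reopen_interval) if r is not None else None
```

Exception closures would sit outside the remediated MTTR population and be tracked separately, as noted above.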

The boundary that matters most is the state-one-to-state-two transition because the asset-exposed window is where the attacker has time to act before the programme can. State one starts at exposure (deployment of the vulnerable component, configuration drift, or upstream CVE publication for an already-deployed component) rather than at scanner output. The clock-start choice for MTTD is the load-bearing decision for whether the metric measures the actual exposure window or only the part of the exposure window that sits inside the scanner-output stream.

Why one MTTR number is not the same as one MTTD plus one MTTR pair

The headline MTTR figure that most programmes report is the elapsed time from finding-open to finding-closed averaged across severity bands and channels. It is the most legible single number for the audit committee and the most misleading metric in the field when reported alone. MTTR-only reporting assumes that every finding the programme remediates was detected at a comparable cadence, that detection latency is constant across channels, and that a finding present for 90 days before scanner discovery is the same operational picture as a finding discovered within 24 hours of exposure. None of those assumptions hold in practice.

A programme that runs weekly external scans, monthly authenticated scans, and quarterly pentests has three different detection cadences feeding the same remediation pipeline. The MTTR for a finding discovered by the weekly external scan is not the MTTR for a finding discovered by the quarterly pentest even when both close inside the same SLA window, because the asset-exposed clock has a different duration in each case. Pairing MTTD with MTTR per channel and per severity band exposes the difference; collapsing both into a single average hides it.

The same logic applies to KEV escalation. A finding sitting in the open queue for 60 days at medium severity may cross into the CISA Known Exploited Vulnerabilities catalogue tomorrow, at which point the programme has a 14-day BOD 22-01 remediation window starting from the KEV inclusion date. The programme that does not separate KEV-channel MTTD from scanner-channel MTTD has no way to explain to the audit committee whether the new 14-day window was met or missed because the clock for KEV-channel findings is anchored to KEV inclusion rather than to original scanner discovery.1,2,11

Per-channel MTTD: four detection clocks, four numbers

MTTD is rarely one number because the programme rarely has one detection channel. The four channels below run on different upstream clocks, hit different parts of the asset surface, and produce different severity-distribution shapes. Reporting MTTD per channel gives the audit committee a structured read of where the detection coverage actually is rather than a single channel-blind average that hides the silent gaps.

Channel | Upstream clock | Defensible MTTD ceiling
Scanner discovery | Scan cadence (daily, weekly, biweekly, monthly) plus scan duration plus triage cycle. | Scan interval plus 1 to 2 days for triage. Daily scans cap detection at roughly 1 to 2 days; weekly scans cap at 7 to 9 days.
Intelligence cross-reference (KEV, EPSS, vendor advisories) | Ingestion cadence of the upstream feed against the open-findings ledger. | Daily ingestion caps KEV-channel MTTD at 24 hours plus triage. Weekly ingestion erodes the BOD 22-01 14-day remediation window.
Pentest discovery | Engagement schedule (annual, semi-annual, continuous PTaaS, change-driven). | Engagement interval. Annual pentests cap detection at 12 months for only the surface that engagement covers; continuous PTaaS tightens to weekly to monthly.
Disclosure inflow (bug bounty, coordinated disclosure, customer report) | Triage SLA on inbound reports plus validation cycle. | 24 to 72 hours for triage acknowledgement; total MTTD bounded by the reporter, which the programme cannot control.

The four ceilings interact. Programmes that run daily external scans but ingest KEV weekly produce a scanner-channel MTTD that looks healthy and an intelligence-channel MTTD that erodes the BOD 22-01 window. Programmes that run a strong scanner stack but skip authenticated scanning miss the authenticated surface entirely, which presents as a low scanner-channel MTTD only because the silent gap is uncounted. Reporting per-channel MTTD with documented ceilings makes those gaps visible. The security tool coverage overlap research covers the channel-by-class coverage matrix that gives this discipline a structural anchor.1,2,11
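
One way to make the per-channel discipline mechanical is to compute the median MTTD per channel and compare it against that channel's documented ceiling. A sketch, assuming each finding already carries a channel label and a computed MTTD in days (field names and ceiling values are illustrative, not prescribed):

```python
from collections import defaultdict
from statistics import median

# Documented MTTD ceilings in days, one per channel (illustrative values only).
CEILINGS = {"scanner": 9, "intelligence": 1, "pentest": 30, "disclosure": 3}

def per_channel_mttd(findings):
    """findings: iterable of dicts like {"channel": "scanner", "mttd_days": 4.5}."""
    by_channel = defaultdict(list)
    for f in findings:
        by_channel[f["channel"]].append(f["mttd_days"])
    report = {}
    for channel, values in by_channel.items():
        med = median(values)
        ceiling = CEILINGS.get(channel)
        report[channel] = {
            "median_mttd_days": med,
            "ceiling_days": ceiling,
            "within_ceiling": ceiling is not None and med <= ceiling,
        }
    return report
```

A channel with no findings simply does not appear in the output, which is itself the silent-gap signal the coverage-overlap analysis is meant to surface.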

Clock-start choices for MTTD: three options, one programme decision

The MTTD clock-start choice is a programme decision that has to be made deliberately because three common anchors produce three different numbers, and audit-committee credibility depends on which choice is documented and held constant across reporting cycles.

1. Publication time (CVE publication or NVD ingestion)

The clock starts when the upstream CVE record is published or ingested into the National Vulnerability Database. Easiest to defend because it is externally verifiable and consistent across organisations. Weakness: it does not capture the exposure window for vulnerabilities that exist before public disclosure (zero-day, custom-component flaws, configuration drift).12

2. Exposure time (deployment of the vulnerable component)

The clock starts when the vulnerable component was first deployed to the affected asset, or when the configuration that exposed the asset was changed. Most operationally honest because it captures the actual exposure window. Weakness: rarely reproducible from the live record without strong asset-version provenance and configuration history, so the data quality has to support the metric definition.

3. Intelligence-promotion time (KEV inclusion or EPSS escalation)

The clock starts when the finding crossed a programme-significant threshold such as inclusion in the CISA Known Exploited Vulnerabilities catalogue or an EPSS score crossing the programme escalation level. Most useful for risk-based prioritisation because it answers the question of how fast the programme reacted to a credible exploitation signal. Only meaningful for findings that crossed a promotion threshold.1,2,11

The defensible discipline is to pick one anchor per channel, document the choice in the metric definition, and hold it constant across reporting cycles. Programmes that re-anchor MTTD between reports either report a healthy improvement that does not exist or hide a regression that does. Anchoring each channel independently (publication time for scanner-discovery channel, intelligence-promotion time for KEV channel, engagement-end time for pentest channel, report-receipt time for disclosure channel) is the form that survives audit.
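
In practice the anchor choice can be encoded as data rather than left to per-report judgement, which is what makes the re-anchoring failure mode visible in review. A sketch, with hypothetical field names standing in for whatever the record actually stores:

```python
# One documented clock-start anchor per detection channel.
# Changing this mapping between reporting cycles is the re-anchoring failure mode.
CLOCK_ANCHORS = {
    "scanner": "cve_published_at",       # publication time
    "intelligence": "kev_included_at",   # intelligence-promotion time
    "pentest": "engagement_ended_at",    # engagement-end time
    "disclosure": "report_received_at",  # report-receipt time
}

def detection_clock_start(finding: dict):
    """Return the timestamp the MTTD clock starts from, per the finding's channel."""
    anchor_field = CLOCK_ANCHORS[finding["channel"]]
    return finding[anchor_field]
```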

MTTR: cycle-time stages drive the closure-side latency

MTTR is the elapsed time from finding-open to finding-closed. The single number is useful only when it is broken into the cycle-time stages each finding actually traverses. The six stages below each have their own bottleneck pattern, their own responsible role, and their own intervention if the cycle time is too long.7,17

Stage | Common bottleneck
Triage (open to triaged) | Scanner noise, severity calibration disputes, missing duplicate suppression.
Assignment (triaged to owned) | Findings routed to a queue rather than a named role; ownership ambiguity.
Investigation (owned to fix designed) | Insufficient evidence on the finding, ambiguous affected scope, dependency research.
Remediation (fix designed to fix deployed) | Change windows, dependency conflicts, compensating control negotiation.
Verification (fix deployed to retest passed) | Retest queue depth, scanner re-run scheduling, manual retest capacity.
Closure (retest passed to closed) | Administrative drag, evidence-capture friction, missing closure-record fields.

Median cycle time per stage is more diagnostic than median cycle time per finding. The same headline MTTR can mean a slow triage queue with fast remediation, or a fast triage queue with slow verification, and the two pictures call for opposite interventions. The vulnerability remediation throughput research covers the stage-cycle-time discipline in detail and the five paired metrics that replace MTTR-only reporting.
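
A sketch of the per-stage breakdown, assuming each finding carries a timestamp per lifecycle gate (hypothetical field names); the output is a median per stage rather than a single median per finding:

```python
from statistics import median

# Ordered lifecycle gates; each stage is the gap between consecutive gates.
GATES = ["opened", "triaged", "owned", "fix_designed", "fix_deployed", "retest_passed", "closed"]

def stage_cycle_times(findings):
    """findings: iterable of dicts mapping gate name -> datetime (missing gates are skipped)."""
    stages = {f"{a} -> {b}": [] for a, b in zip(GATES, GATES[1:])}
    for f in findings:
        for a, b in zip(GATES, GATES[1:]):
            if f.get(a) and f.get(b):
                stages[f"{a} -> {b}"].append((f[b] - f[a]).total_seconds() / 86400)
    return {stage: median(days) for stage, days in stages.items() if days}
```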

SLA windows per severity band: external anchors per metric

MTTD and MTTR targets work when they are anchored to external references rather than chosen from internal precedent. Externally anchored windows let the programme report performance against a window the audit committee already understands and that the regulator has already accepted.

Severity | External anchor | Defensible MTTD ceiling | Defensible MTTR ceiling
Known exploited (KEV) | CISA BOD 22-01 (US federal civilian agencies; widely adopted private-sector benchmark). | 24 hours from KEV inclusion if KEV is ingested daily; longer ceilings erode the remediation window. | 14 days from KEV inclusion or local detection, whichever is earlier.
Critical (CVSS 9.0 to 10.0) | PCI DSS Requirement 6.3.3; ISO 27001 Annex A 8.8 risk justification; SSVC act-now classification. | 1 to 7 days; tighter for internet-facing critical assets given a daily scan cadence. | 15 to 30 days; tighter where the asset is internet-facing or data-sensitive.
High (CVSS 7.0 to 8.9) | PCI DSS Requirement 6.3.3 high-risk window; ISO 27001 Annex A 8.8 cadence justification. | 7 days for scanner channel given a weekly scan cadence; tighter on daily. | 30 days; risk assessment can justify tighter for known-exploit or KEV cross-reference.
Medium (CVSS 4.0 to 6.9) | Programme-defined cadence justified by risk assessment; commonly aligned to release cycles. | 14 days for scanner channel given a biweekly scan cadence. | 60 to 90 days; cadence rather than countdown is the durable form.
Low (CVSS 0.1 to 3.9) | Programme-defined; commonly batched into the next major-version refresh. | 30 days for scanner channel given a monthly scan cadence. | Quarterly cadence or next major release; rolling backlog acceptable if movement is documented.

The reporting form that survives scrutiny is per-severity-band MTTD and MTTR over the observation period, with the count of out-of-SLA closures and the count of expired exceptions surfaced as separate lines. A programme reporting a 14-day MTTR median on KEV findings with three breaches and zero exceptions is in a different operational state than a programme reporting the same median with twelve breaches and forty exceptions, and the leadership read should reflect that distinction.1,2,3,4,10
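
A sketch of that reporting form, assuming each closed finding carries a severity band, an MTTR in days, and a closure type; the SLA windows are illustrative placeholders for the documented anchors in the table above:

```python
# Illustrative MTTR SLA windows in days per severity band (not prescribed values).
SLA_DAYS = {"kev": 14, "critical": 30, "high": 30, "medium": 90, "low": 120}

def per_band_report(closed_findings):
    """closed_findings: dicts like {"band": "high", "mttr_days": 21, "closure": "remediated"}."""
    report = {}
    for band, sla in SLA_DAYS.items():
        remediated = [f for f in closed_findings
                      if f["band"] == band and f["closure"] == "remediated"]
        exceptions = [f for f in closed_findings
                      if f["band"] == band and f["closure"] == "exception"]
        breaches = [f for f in remediated if f["mttr_days"] > sla]
        report[band] = {
            "in_sla_rate": (1 - len(breaches) / len(remediated)) if remediated else None,
            "out_of_sla_closures": len(breaches),
            "exception_closures": len(exceptions),  # surfaced as a separate line, never merged
        }
    return report
```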

Six failure modes that produce healthy numbers and unhealthy programmes

MTTD and MTTR are easy to game without intent. The six failure modes below appear in programmes that report healthy numbers while the underlying operational picture is degrading. The fix in each case is a metric-definition discipline rather than a numerical adjustment.

1. Channel-blind MTTD averaging

Reporting one MTTD across scanner discovery, KEV cross-reference, pentest discovery, and disclosure inflow collapses four different operating decisions into one number. The average looks healthy when a fast scanner channel hides a slow KEV ingestion. The fix is per-channel MTTD against the corresponding upstream clock.

2. Clock redrawing between reports

Re-anchoring MTTD or MTTR between reports (publication time one quarter, scanner-output time the next) produces apparent improvements that come from definitional change rather than operational change. The fix is a documented metric definition that holds constant; any change requires an explicit annotation in the report.

3. Severity inflation hiding the tail

Reporting average MTTR across all severity bands lets fast medium and low closures pull the headline down while the critical and KEV tail stays stuck. The fix is per-severity-band reporting with the 90th-percentile tail surfaced alongside the median.
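
A minimal sketch of surfacing the tail next to the median for one severity band (the example values are illustrative):

```python
from statistics import median, quantiles

def median_and_p90(mttr_days):
    """mttr_days: remediation times in days for one severity band."""
    p90 = quantiles(mttr_days, n=10)[-1]  # 90th percentile
    return {"median_days": median(mttr_days), "p90_days": p90}

# A fast median can coexist with a stuck tail of critical and KEV findings.
print(median_and_p90([3, 4, 5, 6, 7, 8, 60, 75, 90, 120]))
```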

4. Exception inflation

Counting exception closures alongside remediated closures lets fast administrative closures pull MTTR down while risk shifts into the exception register. The fix is to track exception count and exception age as separate trend lines and to keep MTTR scoped to remediated closures only.

5. Reopen invisibility

Closures that fail retest and re-open as new findings, rather than as reopens of the original, reset the MTTR clock and make the headline number look healthier than the underlying picture. The fix is to tie reopens to the original finding identifier rather than minting a new one and to surface re-open rate as a paired metric.

6. Silent-gap exclusion

Asset surface that no scanner covers does not generate findings, which keeps the apparent MTTD low while the actual exposure window is unbounded. The fix is documented coverage-overlap analysis so the silent gap is named rather than hidden behind a clean dashboard.

The five paired metrics that replace MTTR-only reporting

Programmes that report MTTD and MTTR in a way that survives the audit committee converge on a small set of paired metrics. The list below is the durable shape of the reporting frame.

1. Per-channel MTTD against the channel cadence

MTTD per scanner channel, KEV channel, pentest channel, and disclosure channel against the corresponding upstream clock. Reads whether each detection channel is meeting the programme cadence commitment.

2. In-SLA MTTR closure rate per severity band

Percentage of findings closed inside the SLA window per severity band against the framework anchor (CISA BOD 22-01 for KEV, PCI DSS 6.3.3 for high, ISO 27001 Annex A 8.8 for the rest). Reads whether the programme is meeting its remediation commitments per severity.

3. End-to-end exposure window (MTTD plus MTTR per band)

Total elapsed time from asset-exposed to finding-closed per severity band. Reads the actual exposure window the attacker had against each finding class. Reconciles MTTD and MTTR into a single lifecycle read for board-level reporting.

4. Exception-to-remediation ratio

Per-period count of exception closures against remediated closures at the same severity band. Reads whether the programme is closing risk or moving it into the exception register. Surfaces the residual-risk picture alongside the lifecycle metrics. The vulnerability acceptance and exception management workflow covers the discipline that keeps the exception register honest.

5. Re-open rate

Percentage of findings closed and then re-opened on retest or rediscovery within a defined lookback window. Reads whether closures are durable. The vulnerability reopen rate research covers the lookback windows, mechanism breakdown, and identifier-discipline pattern that make this metric honest.27
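
A sketch of the two register-side metrics, assuming closures carry a closure type and a finding identifier, and reopens are matched back to the original identifier rather than a newly minted one (field names are illustrative):

```python
def exception_to_remediation_ratio(closures):
    """closures: dicts like {"finding_id": "F-101", "closure": "remediated" or "exception"}."""
    remediated = sum(1 for c in closures if c["closure"] == "remediated")
    excepted = sum(1 for c in closures if c["closure"] == "exception")
    return excepted / remediated if remediated else None

def reopen_rate(closures, reopened_finding_ids):
    """Share of closed findings re-opened within the lookback window,
    matched on the original finding identifier."""
    closed_ids = {c["finding_id"] for c in closures}
    reopened = closed_ids & set(reopened_finding_ids)
    return len(reopened) / len(closed_ids) if closed_ids else None
```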

Framework references for MTTD and MTTR

MTTD and MTTR rarely appear by name in compliance frameworks, but the underlying detection and remediation expectations do. The mapping below shows how the major frameworks frame the timing commitment that an MTTD plus MTTR pair operationalises.

Framework | Timing reference
CISA BOD 22-01 | Federal civilian agencies must remediate KEV findings within 14 days of inclusion. Widely adopted as a private-sector benchmark.
PCI DSS v4.0 Requirement 6.3.3 | Critical and high-risk vulnerabilities resolved within one month of identification; lower severities at programme-defined intervals.
ISO 27001 Annex A 8.8 | Information about technical vulnerabilities obtained in a timely fashion, exposure evaluated, and appropriate measures taken; cadence is programme-defined and risk-justified.
SOC 2 CC7.1 | Detection of vulnerabilities through ongoing monitoring; auditors test the cadence and the remediation cycle.
NIST SP 800-53 RA-5 plus SI-2 | RA-5 covers vulnerability monitoring and scanning cadence; SI-2 covers flaw remediation timeline. Together they specify a detection plus remediation operating commitment.
NIST CSF 2.0 Detect plus Respond | Detect function captures the MTTD-equivalent expectation; Respond function captures the MTTR-equivalent expectation. The functions pair across the lifecycle.
CIS Controls v8 Control 7 | Continuous Vulnerability Management names a monthly scan cadence as a baseline expectation; tighter cadence is risk-justified.

A programme that names per-channel MTTD against scan cadence and intelligence ingestion, plus in-SLA MTTR per severity band against the framework anchors above, plus the exception-to-remediation ratio, answers the timing question for every framework that the audit committee or regulator is likely to apply, in the same record rather than as separate documents per framework.1,3,4,6,7,8,9,17

MTTD and MTTR are not the same as MTTI and MTTC

MTTD as used in vulnerability management measures the elapsed time between a vulnerability becoming present on an asset and the programme finding it on the live record. This is different from MTTI (mean time to identify) and MTTC (mean time to contain), which come from incident response practice and measure attacker-activity detection and containment after a breach is in progress. The IBM Cost of a Data Breach Report uses MTTI and MTTC for breach-lifecycle analysis; NIST SP 800-61 covers the incident response lifecycle that those metrics sit on.15,16

Programmes that operate both vulnerability management and incident response keep the metric definitions distinct so the incident response MTTI and the vulnerability programme MTTD are not conflated when reported to the same audit committee. A programme reporting a 24-hour incident MTTI and a 14-day KEV MTTD is reporting two different operating disciplines on two different lifecycle frames; collapsing them into one detection-time figure misreads both.

How the engagement record carries MTTD and MTTR

MTTD and MTTR numbers get cleaner when the lifecycle gates (asset-exposed, finding-open, fix-verified, finding-closed) live on the same engagement record the operational work lives on, rather than on a metrics layer that is reconstructed from spreadsheets after the fact. The platform does not set the MTTD or MTTR targets for the programme, but it does make the metric definitions reproducible from the live record at any moment between reporting cycles.

SecPortal pairs every finding to a versioned engagement record through findings management. CVSS 3.1 vector, severity band, owner, evidence, and remediation status are captured on the finding record so the per-severity-band reporting is one query against the same place the work is done.20 The activity log captures the timestamped chain of state changes by user, so the elapsed time between any two lifecycle states is a query against the live record rather than a reconstruction from email threads.21

The continuous monitoring feature schedules external, authenticated, and code scans on daily, weekly, biweekly, or monthly cadences so the scanner-channel MTTD ceiling is observable from the schedule decision rather than inferred from the dashboard.23 Authenticated scanning runs against the surface that requires credentials with AES-256-GCM encrypted credential storage, so the authenticated MTTD does not silently degrade when authentication fails. The scanner authentication failure modes guide covers the failure-mode discipline that keeps the authenticated channel from going silent.

The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export so the in-SLA closure rate per framework anchor is one query against the same record.22 The AI report generation workflow produces remediation roadmaps and compliance summaries from the same engagement data, so the leadership read of MTTD and MTTR matches the operational read.24

The vulnerability SLA management workflow, remediation tracking workflow, and scanner result triage workflow keep the open-finding queue, the SLA windows, the triage cycle, and the closure record on the same engagement record. The platform does not replace SIEM-grade attacker-activity detection or EDR-grade endpoint monitoring; those operate against attacker behaviour, while SecPortal operates against vulnerabilities present on assets.25

For internal security and vulnerability management teams

Internal security teams and vulnerability management leads carry the MTTD and MTTR question between audits. The pattern that survives reporting cycle after reporting cycle is to operate per-channel detection discipline and per-stage remediation discipline on the same record, capture lifecycle transitions as a side effect of the work rather than as a separate metrics project, and keep the exception axis visible alongside the timing axis.

  • Report MTTD per channel rather than per programme so the silent-gap question is answerable.
  • Document the MTTD clock-start choice per channel and hold it constant across reporting cycles.
  • Anchor MTTR SLA targets to external references (CISA BOD 22-01, PCI DSS 6.3.3, ISO 27001 Annex A 8.8) rather than internal precedent.
  • Pair MTTD with MTTR at the same severity bands so the lifecycle reads as one record.
  • Track exception closure separately from remediated closure so the residual-risk profile is not hidden inside the headline numbers.
  • Capture re-opens against the original finding identifier so closure durability is measurable.
  • Surface the 90th-percentile tail alongside the median so the SLA breaches are visible on the same chart as the average.

For internal security teams, vulnerability management teams, AppSec teams, and security engineering teams, the operating commitment is to keep the MTTD and MTTR pair reproducible from the live record at any moment in the reporting cycle, not only at quarterly review week.

For security leadership and audit committees

Security leaders and audit committees read MTTD and MTTR through a different lens than operational teams. The leadership read is whether the programme is durably finding risk fast enough and closing it fast enough across reporting cycles, not only whether the headline figures fell this quarter. A programme that hits the SLA on closures while accumulating exceptions, growing the open queue, or running with a silent coverage gap is technically meeting its commitment and substantively increasing residual risk. The leadership question is which of those two pictures the metric is actually showing.

  • Track per-channel MTTD, per-severity-band MTTR, end-to-end exposure window, exception count, and re-open rate as five separate trend lines rather than as one composite score.
  • Read the direction of each trend over twelve months as a programme health signal independent of in-period values.
  • Surface exception register growth as a residual-risk indicator alongside the timing metrics, not separate from them.
  • Ask for the per-channel MTTD breakdown when MTTR is healthy but the open queue is growing; the channel breakdown shows where the silent gap is.
  • Tie the timing numbers to the same engagement record the audit evidence comes from so the leadership read and the audit read are the same record rather than two reports.

The leadership-side platform discipline that supports this is covered on SecPortal for CISOs and security leaders and security operations leaders. The audit evidence half-life research covers the evidence-currency side of the same operating discipline; the security debt economics research covers the financial frame the same lifecycle metrics roll up into; and the vulnerability management maturity model research places the MTTD plus MTTR discipline on the maturity grid as the load-bearing distinction between Level 3 and Level 4 on the leadership reporting dimension.28,29,30

The security leadership reporting workflow keeps the timing metrics, the exception register, and the framework crosswalks on the same record so the audit-committee report and the engineering-leader report draw from one source of truth.

Conclusion

MTTD and MTTR are linked operating decisions, not standalone numbers. Detection time is gated by scan cadence, intelligence ingestion, and coverage discipline; remediation time is gated by triage, ownership, change windows, and retest capacity. Reporting one without the other answers a different question than the audit committee actually asks. The defensible discipline is per-channel MTTD against documented upstream clocks, per-severity-band MTTR against external SLA anchors, end-to-end exposure window per band, exception ratio, and re-open rate, all sitting on the same engagement record so the leadership read and the operational read match.1,3,4,5,6,7,8,9,17

Treating the MTTD plus MTTR pair as a property of the live engagement record rather than as a metrics layer reconstructed from spreadsheets is the highest-leverage discipline in vulnerability programme reporting between audits. It keeps the leadership read and the operational read on the same record, it survives reporting-cycle rotation, and it grounds the budget conversation about scanner cadence, intelligence ingestion, triage capacity, and retest capacity in the same evidence as the audit conversation about SLA performance. The platform you use does not have to write the MTTD or MTTR targets for the programme. It does have to make the metric definitions reproducible and the lifecycle chain self-documenting.

Sources

  1. CISA, Binding Operational Directive 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities
  2. CISA, Known Exploited Vulnerabilities Catalog
  3. PCI Security Standards Council, PCI DSS v4.0 Requirement 6.3.3
  4. ISO/IEC, ISO 27001:2022 Annex A 8.8 Management of Technical Vulnerabilities
  5. NIST, SP 800-40 Rev. 4: Guide to Enterprise Patch Management Planning
  6. NIST, SP 800-53 Revision 5: RA-5 Vulnerability Monitoring and Scanning
  7. NIST, SP 800-53 Revision 5: SI-2 Flaw Remediation
  8. AICPA, SOC 2 Trust Services Criteria CC7.1 Detection of Vulnerabilities
  9. NIST, Cybersecurity Framework (CSF) 2.0 Detect and Respond Functions
  10. CISA, Stakeholder-Specific Vulnerability Categorization (SSVC)
  11. FIRST, EPSS Exploit Prediction Scoring System Documentation
  12. NIST, NVD National Vulnerability Database
  13. NCSC, Vulnerability Management Guidance
  14. OWASP, Vulnerability Management Guide
  15. IBM Security, Cost of a Data Breach Report (MTTI and MTTC reference)
  16. NIST, SP 800-61 Rev. 2: Computer Security Incident Handling Guide
  17. CIS, CIS Controls v8: Control 7 Continuous Vulnerability Management
  18. ENISA, Good Practices for Vulnerability Disclosure and Coordination
  19. OASIS, Common Security Advisory Framework (CSAF)
  20. SecPortal, Findings & Vulnerability Management
  21. SecPortal, Activity Log & Workspace Audit Trail
  22. SecPortal, Compliance Tracking
  23. SecPortal, Continuous Monitoring
  24. SecPortal, AI-Powered Security Reports
  25. SecPortal, Vulnerability SLA Management Use Case
  26. SecPortal Research, Vulnerability Remediation Throughput
  27. SecPortal Research, Vulnerability Reopen Rate
  28. SecPortal Research, Audit Evidence Half-Life
  29. SecPortal Research, Vulnerability Management Programme Maturity Model
  30. SecPortal Research, Security Debt Economics

Run MTTD and MTTR on the live engagement record

SecPortal keeps findings, scan cadence, retests, exceptions, and SLA mappings paired to one versioned engagement record so the per-channel MTTD and per-severity MTTR are reproducible at any moment between reporting cycles and the lifecycle chain does not depend on a metrics layer that diverges from operational reality.