Patch Cycle vs Remediation SLA Mismatch: How Internal Teams Reconcile Two Clocks Against One Finding
Every open finding has two clocks running against it: the patch cycle on the IT or platform side (when the vendor patch becomes available, when the change window opens, when the maintenance window deploys it, when the validation rescan confirms it) and the remediation SLA on the security side (the closure window the framework or programme policy commits to, anchored to CISA BOD 22-01 for KEV, PCI DSS Requirement 6.3.3 for critical and high, ISO 27001 Annex A 8.8 for risk-justified cadence). The two clocks rarely align by accident; they run on different upstream levers, get scheduled by different teams, and report into different governance forums. When the SLA window is tighter than the patch cycle that drives the fix, the finding misses the window even when every team executes their part correctly.1,3,4,5,6,7
This research lays out how the patch-cycle-versus-remediation-SLA mismatch actually behaves inside enterprise vulnerability programmes. It covers the two-clock frame, the six common cadence-mismatch scenarios, the per-severity-band lag math that surfaces the structural gap, the failure modes that produce healthy headline closure rates while the underlying programme misses framework windows, the reporting frame that pairs three lag metrics against the SLA window, and how the mismatch sits alongside the existing throughput, capacity, and lifecycle research. The argument is not that patches should ship faster on the IT side or that SLAs should loosen on the security side. The argument is that the cadence question is upstream of the closure-rate conversation, and reading the two clocks together rather than separately is the highest-leverage discipline in vulnerability programme reporting between audits.1,2,9,10,15
The two-clock frame the mismatch sits on
A vulnerability programme runs against two timing systems that operate against the same finding from different sides of the organisation. The patch cycle is the cadence on which a vendor patch becomes available, gets bundled into a change ticket, and ships through a maintenance window; the cycle is gated by vendor release schedules, dependency conflicts, regression-testing windows, and change advisory board approval. The remediation SLA is the cadence the security side commits to for closing a finding once it is detected; the cadence is gated by external anchors (CISA BOD 22-01 for KEV, PCI DSS Requirement 6.3.3 for critical and high, ISO 27001 Annex A 8.8 for the risk-justified rest) and by internal SLA policy.
The two cycles run on different upstream levers. The patch cycle is driven by vendor advisory cadence, change-management policy, maintenance-window schedule, dependency-graph fragility, and regression-test capacity. The remediation SLA is driven by framework expectations, severity-band assignment, internal policy, and the audit committee read of programme health. Programmes that report only the closure rate collapse the two cycles into one number; programmes that report the cadence gap explicitly surface which lever the programme actually has to pull when the queue grows.5,6,7
The mismatch is rarely uniform. A vendor that ships out-of-band patches inside one week of disclosure for KEV-eligible findings produces a different mismatch profile than a vendor that ships only on a quarterly cycle. A change advisory board that meets weekly produces a different change-window lag than one that meets fortnightly. A scanner that runs daily against the affected asset produces a different validation-rescan lag than one that runs monthly. Reading the mismatch as a single average hides the per-vendor, per-asset-class, per-severity-band picture that names the actual lever; reading it as three lags across the lifecycle puts the lever in the open.
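Reading the mismatch as three lags rather than one number is simple arithmetic over lifecycle timestamps. A minimal sketch (field names are illustrative, not a SecPortal schema):

```python
from datetime import date

def stage_lags(detected, patch_available, deployed, rescan_passed):
    """Split end-to-end closure time into the three lifecycle lags, in days.

    Each argument is the date of one lifecycle event on the finding record.
    """
    return {
        "patch_availability": (patch_available - detected).days,
        "change_window": (deployed - patch_available).days,
        "validation_rescan": (rescan_passed - deployed).days,
    }

# A fast vendor patch (3 days) followed by a regular change window (8 days)
# and a weekly-scan validation tail (4 days) already totals 15 days --
# past a 14-day KEV window with no single stage looking obviously broken.
lags = stage_lags(date(2024, 3, 1), date(2024, 3, 4),
                  date(2024, 3, 12), date(2024, 3, 16))
total = sum(lags.values())
```

The per-stage breakdown is the artefact: the aggregate 15 days reads as a capacity problem, while the breakdown reads as a change-cadence problem.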
Six common cadence-mismatch scenarios
The scenarios below show up repeatedly across enterprise programmes. Each has a different upstream lever and a different intervention; reading them as one mismatch number hides which lever is moving and where the operating change has to land.
| Scenario | Where the lag lives | Intervention |
|---|---|---|
| Vendor patch outside SLA window | Vendor advisory cadence is monthly or quarterly while the SLA is fourteen days for KEV; patch availability is the bottleneck. | Vendor management escalation, contractual SLA negotiation, compensating-control posture by default for the vendor, alternative-product evaluation. |
| Patch available, change window after SLA | Patch is available inside the SLA but the next maintenance window opens after expiry; change-window cadence is the bottleneck. | Emergency-change path activation, change-cadence retune for KEV and critical, named change-window escalation policy. |
| Patch available, dependency conflict blocks deployment | Patch ships but requires upgrading a downstream component that has its own regression-testing window; dependency-graph fragility is the bottleneck. | Dependency-graph hardening, regression-test fast lane for security-driven upgrades, compensating-control posture during the dependency window. |
| Patch deployed, validation rescan queued past SLA | Fix ships inside the SLA but the next scanner run against the affected asset lands after expiry; rescan cadence is the bottleneck. | Targeted post-deployment rescan trigger inside the SLA window, rescan-cadence-by-asset-class retune, evidence-attachment automation. |
| Out-of-band vendor patch faster than change cadence | Vendor ships an emergency patch overnight but the change advisory board treats it as a normal change; emergency-change path is dormant. | Standing emergency-change classification for KEV-aligned findings, pre-approved exception register for vendor out-of-band advisories. |
| Patch deployed, change ticket and finding never reconciled | IT closes the change ticket; security-side finding stays open beyond the SLA on the audit record; reconciliation discipline is the bottleneck. | Linked change-ticket and finding-record convention, named owner on each side, reconciliation as a continuous record rather than a periodic project. |
The scenarios overlap in practice. A single finding can have a slow vendor patch (scenario one), a queued change window (scenario two), and a delayed validation rescan (scenario four) running on the same record. The defensible reading is per-stage lag attribution rather than an aggregate slip number; without the per-stage breakdown, the programme cannot tell which scenario it is actually carrying. The patch management coordination workflow covers the operational discipline that pairs the change ticket and the finding record on a single live record so the per-stage lag is observable rather than reconstructed.30
Per-severity-band lag math: three clocks against one window
The mismatch math is severity-band-specific because severity bands carry different SLA windows and different vendor-cadence expectations. The table below pairs the framework SLA window per band with the typical patch cycle for the band and the lag pattern programmes commonly carry.
| Severity | SLA window anchor | Typical patch cycle and lag |
|---|---|---|
| Known exploited (KEV) | CISA BOD 22-01 fixes the closure window at fourteen days from KEV inclusion for federal civilian agencies; widely adopted as the private-sector benchmark. | Major vendors frequently ship out-of-band patches inside one week of KEV inclusion; the bottleneck is usually change-window cadence and the change advisory board emergency path rather than vendor patch availability. |
| Critical (CVSS 9.0 to 10.0) | PCI DSS Requirement 6.3.3 fixes one month for critical and high; SSVC act-now classification reads the same window. | Vendor cadence at the critical band is usually monthly (Patch Tuesday for Microsoft, Critical Patch Update for Oracle); the lag depends on which day inside the cycle disclosure happens, with worst-case patch-availability lag near thirty days. |
| High (CVSS 7.0 to 8.9) | PCI DSS Requirement 6.3.3 high-risk window; ISO 27001 Annex A 8.8 cadence justification; commonly paired with internal sixty-day SLA. | Vendor cadence typically aligns to the next regular release; cumulative lag (patch availability plus change window plus validation rescan) often consumes most of the SLA, leaving little buffer for dependency-conflict resolution. |
| Medium (CVSS 4.0 to 6.9) | Programme-defined cadence (commonly ninety-day SLA); ISO 27001 Annex A 8.8 risk-justified pace. | Vendor cadence often aligns to the next minor release; backports may not be issued for some products; the lag is dominated by release cadence rather than change-window scheduling. |
| Low (CVSS 0.1 to 3.9) | Programme-defined; commonly batched into the next major-version refresh. | Vendor cadence often aligns to the next major version; the lag is unbounded for products without active backporting; programmes usually batch low-band closures into the regular release cadence rather than tracking against an SLA. |
Reading the SLA window and the patch cycle together by band surfaces the structural gap. The KEV band is the band most often missed because the fourteen-day window is tight enough that change-window cadence becomes the binding constraint even when vendor patches are available quickly. The critical band is the band most exposed to dependency-conflict slip because the thirty-day window is long enough to make patches available but short enough that one regression-test cycle can consume the buffer. The high band is the band most exposed to silent SLA breach because the cumulative lag often consumes most of the window without any single stage being clearly out of cadence; programmes that do not track per-stage lag cannot intervene before the slip.1,3,4,10
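The per-band buffer math can be made explicit. The windows below mirror the table (KEV and critical anchored to BOD 22-01 and PCI DSS 6.3.3; high and medium are the programme-defined examples from the text):

```python
# SLA windows per severity band, in days (illustrative programme policy).
SLA_WINDOW = {"kev": 14, "critical": 30, "high": 60, "medium": 90}

def buffer_remaining(band, patch_avail_lag, change_lag, rescan_lag):
    """Days of SLA buffer left after the three cumulative stage lags."""
    return SLA_WINDOW[band] - (patch_avail_lag + change_lag + rescan_lag)

# High band: next regular release (21d) + fortnightly CAB (10d) + weekly
# scan tail (5d) consumes most of a 60-day window before any dependency
# conflict lands.
assert buffer_remaining("high", 21, 10, 5) == 24

# KEV band: even a 5-day vendor patch breaches when the regular weekly CAB
# and a weekly scan are left on their normal cadence.
assert buffer_remaining("kev", 5, 7, 5) < 0
```

Negative buffer at normal cadence is the structural gap the section describes: the finding misses the window with every team executing on schedule.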
Six failure modes that produce healthy headlines and missed windows
The patch-cycle-versus-SLA mismatch is easy to underreport without intent. The failure modes below appear in programmes that report headline closure rates near plan while the underlying programme is missing framework windows and accumulating audit-readable risk. The fix in each case is a counting discipline rather than a numerical adjustment.
1. Closure date counted at change-ticket close, not at rescan-passed
Treating the change ticket as the closure event closes findings on paper before the validation rescan has confirmed the fix worked. Headline closure rate looks healthy and the reopen rate climbs one or two cycles later when the rescan or the next scheduled scan finds the underlying issue still present. The fix is to define closure as retest-passed only and to read the vulnerability reopen rate research as the paired durability metric.
2. Patch availability lag counted only at deployment
Counting the patch cycle from deployment-decision to deployment-complete misses the lag from finding-detected to vendor-patch-available, which is often the largest stage. Programmes that report only the change-window lag argue capacity from the IT side without seeing that vendor risk is the actual driver. The fix is to track patch availability lag as a separate stage with the per-vendor breakdown.
3. Validation rescan as administrative overhead
Treating the validation rescan as administrative tail rather than as a first-class lifecycle stage means the rescan cadence is not aligned to the SLA window. A patch that ships on day twelve of a fourteen-day SLA waits up to seven days for the next weekly scan to confirm closure, which lands the closure event past the SLA on the audit record. The fix is to schedule a targeted post-deployment rescan inside the window, not to treat the next scheduled scan as the validation event.
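The day-twelve example generalises into a worst-case check: given a scan cadence, does the closure event still land inside the window if the patch deploys just after a scan run? A sketch:

```python
def worst_case_rescan_lag(scan_cadence_days):
    """Worst-case wait for the next scheduled scan after deployment.

    Deploying the day after a scan run means waiting nearly a full cycle.
    """
    return scan_cadence_days - 1

def closure_lands_in_window(deploy_day, scan_cadence_days, sla_days):
    """True only if the worst-case scheduled scan still falls inside the SLA."""
    return deploy_day + worst_case_rescan_lag(scan_cadence_days) <= sla_days

# Patch ships on day 12 of a 14-day KEV window with weekly scans: the
# closure event can land as late as day 18, past the SLA on the record.
assert not closure_lands_in_window(12, 7, 14)

# A targeted post-deployment rescan (modelled here as a 2-day cadence)
# keeps the worst-case closure event inside the window.
assert closure_lands_in_window(12, 2, 14)
```

The check makes the scheduling decision auditable: scan cadence per asset class either clears the band's window at worst case or it does not.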
4. Compensating controls implied rather than named
Generic claims that compensating controls are in place during the patch-cycle queue read to the audit as absent controls. WAF rules, network-segmentation references, configuration changes, and detection rules need to be documented on the finding with the specific rule, segment, or query reference. Without the named control, the residual-exposure window is unbounded on the audit record and the SLA breach is unmitigated by evidence.
5. Emergency-change path defined on paper but rarely invoked
The emergency-change classification exists in change-management policy but defaults to the regular cadence in practice because invoking it forces an ad hoc CAB meeting and the operational pressure to wait for the next regular meeting is high. The fix is standing emergency-change classification for KEV-aligned findings with pre-approved deployment paths, so the cadence change is policy rather than judgement-call.
6. Vendor lag attributed to internal capacity gap
When the patch availability lag is concentrated in a small number of vendors but the report aggregates across the estate, the lag presents as an internal capacity issue and the budget conversation defaults to adding remediation capacity downstream. Adding capacity to a vendor-lag-bottlenecked queue does not change the closure rate. The fix is per-vendor patch availability lag breakdown so vendor risk is visible to vendor management rather than mistaken for security capacity gap.
The three-lag reporting frame that survives audit committee scrutiny
Programmes that report patch cycle and remediation SLA in a way that survives the audit committee converge on a small set of paired lag metrics. The list below is the durable shape of the reporting frame; each metric is a separate trend line rather than a composite score, and each is anchored to an external framework window.
1. Patch availability lag per severity band per vendor
Median time from finding-detected to vendor-patch-available, broken out by KEV, critical, high, medium, and low, with the per-vendor breakdown. Reads vendor risk distinct from internal capacity gap. A vendor with sustained high availability lag at the critical band is a third-party risk record, not a security capacity ask.
2. Change-window lag per severity band
Median time from patch-available to patch-deployed, broken out by severity band and asset class. Reads whether change cadence is compatible with the SLA window. KEV-band change-window lag near or above fourteen days indicates the emergency-change path is not being used; critical-band change-window lag near thirty days indicates the regular cadence is the binding constraint.
3. Validation rescan lag per severity band
Median time from patch-deployed to rescan-passed-closure. Reads whether validation cadence is aligned to the SLA window. Programmes that schedule weekly scans against critical assets carry up to seven days of validation lag; programmes that run continuous scans carry less. The validation lag is the stage most often invisible on the IT side and most exposed on the audit side.
4. SLA-breach attribution per stage
For findings that breached the SLA in the observation window, the breakdown of which stage carried the slip (vendor availability, change window, validation rescan, reconciliation). Reads which stage is actually driving missed windows; the budget argument that survives the review uses this attribution rather than the headline breach count.
5. Emergency-change utilisation rate
Per-period count of emergency-change deployments against KEV and critical findings. Reads whether the emergency-change path is policy-active or paper-only. Programmes that report zero emergency-change utilisation in a window with non-zero KEV findings have a structural gap between policy and practice that the audit committee will read.
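Metrics one to three are grouped medians over the same stage lags, kept as separate series per group rather than folded into a composite score. A minimal sketch against a list of finding records (field names are illustrative):

```python
from statistics import median
from collections import defaultdict

def median_lag_by(findings, group_key, lag_key):
    """Median of one stage lag, grouped by band, vendor, or asset class.

    findings: list of dicts with per-stage lags already computed in days.
    Returns one series per group -- each a separate trend line.
    """
    groups = defaultdict(list)
    for f in findings:
        groups[f[group_key]].append(f[lag_key])
    return {k: median(v) for k, v in groups.items()}

findings = [
    {"band": "kev", "vendor": "A", "patch_availability": 4, "change_window": 11},
    {"band": "kev", "vendor": "B", "patch_availability": 30, "change_window": 3},
    {"band": "kev", "vendor": "A", "patch_availability": 6, "change_window": 13},
]
# Vendor B's availability lag reads as third-party risk; vendor A's
# change-window lag near fourteen days reads as a dormant emergency path.
assert median_lag_by(findings, "vendor", "patch_availability") == {"A": 5, "B": 30}
```

Running the same function with `group_key="band"` and each of the three lag keys yields the per-band trend lines the reporting frame asks for.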
Framework references for patch cycle and remediation SLA
The frameworks below name the SLA windows the patch cycle has to clear and the patch-management practices the operating side has to maintain. The framework anchor is what makes the cadence argument defensible: a programme arguing for emergency-change cadence against a KEV-band lag anchored to BOD 22-01's fourteen-day window has a different conversation with the audit committee than a programme arguing from internal precedent.
| Framework | Cadence reference |
|---|---|
| CISA BOD 22-01 | Federal civilian agencies must remediate KEV findings within 14 days of inclusion. The KEV-band cumulative lag (availability plus change plus rescan) has to clear fourteen days for the benchmark to hold. |
| PCI DSS v4.0 Requirement 6.3.3 | Critical and high-risk vulnerabilities resolved within one month of identification. The critical-band cumulative lag has to clear thirty days, with the patch-availability stage frequently consuming half the window. |
| ISO 27001 Annex A 8.8 | Information about technical vulnerabilities obtained in a timely fashion, exposure evaluated, appropriate measures taken. The cadence is programme-defined and risk-justified, which makes the per-stage lag breakdown the auditor-readable artefact. |
| NIST SP 800-40 Rev. 4 | Enterprise patch management planning reference; specifies how the patch cadence operates across asset classes and how patch metadata is maintained. Pairs with the SLA-side anchors above. |
| NIST SP 800-53 SI-2 | Flaw remediation control; covers the closure-side discipline. SI-2 paired with RA-5 (vulnerability monitoring and scanning) names the detection-plus-remediation operating commitment the cadence argument operationalises. |
| SOC 2 CC7.1 | Detection of vulnerabilities through ongoing monitoring; auditors test the cadence and the remediation cycle. Per-stage lag breakdown is in scope for the closure side of the control. |
| NIST CSF 2.0 Detect, Respond, Recover | Detect captures the inflow expectation; Respond captures the closure-side expectation; Recover captures the validation discipline. The three functions pair across the patch-cycle stages. |
| CIS Controls v8 Control 7 | Continuous Vulnerability Management names a monthly scan cadence as a baseline expectation; tighter cadence is risk-justified. The validation-rescan lag is gated by this scan cadence. |
| ITIL 4 Change Enablement | Reference for the change-management practice the patch cycle operates inside. Names normal, standard, and emergency change classifications; emergency-change utilisation is the lever for KEV-band cadence alignment. |
A programme that names patch-availability lag, change-window lag, and validation-rescan lag against the framework anchors above, plus the SLA-breach attribution and emergency-change utilisation rate, answers the cadence question for every framework that the audit committee or regulator is likely to apply, in the same record rather than as separate documents per framework.1,3,4,5,6,7,8,9,15,16
When the lag is wrong: three diagnostic patterns
A cumulative lag near or above the SLA window is not always a capacity problem. Three diagnostic patterns help distinguish vendor risk from change-cadence gap from validation-rescan misalignment.
1. Vendor risk: availability lag concentrated in one or two vendors
When the patch availability lag breakdown shows long lag for a small number of vendors and short lag for the rest, the issue is vendor-side rather than internal-capacity-side. The defensible response is in the third-party risk record (escalation to vendor management, contractual SLA negotiation, alternative-product evaluation, compensating-control posture by default for that vendor) rather than in the security capacity record. The third-party vendor risk assessment guide covers the supplier-side discipline that pairs with the patch-cycle metric.
2. Change-cadence gap: change-window lag dominates
When the patch availability lag is short across vendors but the change-window lag is long, the bottleneck is the change-management cadence, not vendor risk. The intervention is emergency-change path activation for KEV-band findings, change-cadence retune for the critical band, and named change-window escalation policy that does not require ad hoc CAB meetings. The fix is on the operating side of change management, not on the security-capacity side.
3. Validation misalignment: rescan lag dominates
When patch availability and change-window lag are both short but validation rescan lag is long, the bottleneck is scanner cadence and rescan triggering. The intervention is targeted post-deployment rescan inside the SLA window, rescan-cadence-by-asset-class retune, and evidence-attachment automation that shortens the closure event tail. Scanner cadence is the lever; capacity is not. The scan scheduling and baseline cadence guide covers the cadence-by-asset-class decision that drives validation-rescan lag.
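Pattern one has a simple structural test: is the long availability lag confined to a small share of vendors while the rest run fast? A sketch of that concentration check (threshold values are illustrative):

```python
def vendor_concentration(per_vendor_median_lag, slow_days=14):
    """Diagnostic pattern 1: availability lag concentrated in few vendors.

    per_vendor_median_lag: dict of vendor -> median availability lag (days).
    Returns the slow vendors and their share of the vendor population; a
    small share with the rest fast points at vendor risk, not capacity.
    """
    slow = sorted(v for v, lag in per_vendor_median_lag.items()
                  if lag >= slow_days)
    return slow, len(slow) / len(per_vendor_median_lag)

# One vendor carries a 41-day median lag while four others sit under a week:
# this belongs in the third-party risk record, not the capacity ask.
lags = {"A": 3, "B": 5, "C": 41, "D": 4, "E": 2}
slow, share = vendor_concentration(lags)
assert slow == ["C"] and share == 0.2
```

When the slow set is empty and the change-window or rescan lag dominates instead, patterns two and three apply and the lever moves to change cadence or scan scheduling.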
How the engagement record carries patch cycle and SLA
Patch-cycle-versus-SLA numbers get cleaner when the SLA timer, the patch decision, the change-window reference, the deployed-version evidence, and the validation rescan event live on the same engagement record the operational work lives on, rather than on a metrics layer reconstructed from change-management spreadsheets, scanner exports, and audit-week extracts. The platform does not run the change advisory board, deploy patches, or set the maintenance windows for the programme. It does make the three-lag analysis reproducible from the live record at any moment between reporting cycles.
SecPortal pairs every finding to a versioned engagement record through findings management. CVSS 3.1 vector, severity band, owner, evidence, remediation status, and patch-decision metadata are captured on the finding record so the per-severity-band lag analysis is one query against the same place the work is done.24 The activity log captures the timestamped chain of state changes by user with retention by plan, so the elapsed time between any two lifecycle events (detected, patch-available, change-window-scheduled, deployed, rescan-passed) is a query against the live record rather than a reconstruction from email threads.25
The continuous monitoring feature schedules external, authenticated, and code scans on daily, weekly, biweekly, or monthly cadences, so the validation-rescan cadence is observable from the schedule decision rather than inferred from the dashboard.27 A targeted post-deployment rescan can be run inside the SLA window so the closure event lands on the same record as the change ticket reconciliation. The authenticated scanning feature runs against the surface that requires credentials so the validation evidence does not silently degrade when authentication fails. The scanner rate limiting guide covers the operational discipline that keeps the rescan cadence predictable rather than spiky.
The document management feature holds the patch policy, the change-management interface, the compensating-control register, and vendor advisory copies under change control so the operating documents that anchor the cadence are versioned and auditable.28 The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export, so the per-severity-band lag against the framework SLA window is reproducible against the same record.26 The AI report generation workflow produces remediation roadmaps and cadence narratives from the same engagement data, so the leadership read of the patch-cycle question matches the operational read.29
The patch management coordination workflow, remediation tracking workflow, and retesting workflow keep the patch decision, the change-window reference, the deployed-version evidence, the SLA window, and the validation rescan event on the same engagement record. The platform does not replace the change-management system or the IT patching tool; it does pair the security-side finding record to the change-side ticket reference so the reconciliation is a continuous record rather than a periodic project.30
For internal security and vulnerability management teams
Internal security teams and vulnerability management leads carry the cadence question between audits. The pattern that survives reporting cycle after reporting cycle is to operate per-stage lag discipline on the same record, capture lifecycle transitions as a side effect of the work rather than as a separate metrics project, and keep the per-vendor patch-availability lag visible alongside the change-window and validation-rescan lag.
- Track patch-availability lag per severity band per vendor so vendor risk is visible distinct from internal capacity gap.
- Track change-window lag per severity band so emergency-change utilisation is policy-active rather than paper-only.
- Track validation-rescan lag per severity band so closure events land inside the SLA window rather than on the next scheduled scan cycle.
- Define closure as retest-passed only and read the reopen rate as the paired durability metric.
- Anchor the per-stage lag to external framework references (CISA BOD 22-01, PCI DSS 6.3.3, ISO 27001 Annex A 8.8, NIST SP 800-40, NIST SP 800-53 SI-2) rather than to internal precedent.
- Document compensating controls with the specific rule, segment, or query reference on the finding record so residual exposure during the queue is named rather than implied.
- Pair the change ticket and the finding record on a single live record so the reconciliation is continuous rather than periodic.
- Pull the per-stage lag breakdown before arguing capacity, so the budget conversation moves from anecdote to evidence.
For internal security teams, vulnerability management teams, AppSec teams, and security engineering teams, the operating commitment is to keep the patch-cycle-versus-SLA cadence reproducible from the live record at any moment in the reporting cycle, not only at quarterly review week.
For security leadership and audit committees
Security leaders and audit committees read the cadence question through a different lens than operational teams. The leadership read is whether the operating cadence on the IT side is structurally compatible with the SLA cadence the security side has committed to, not only whether last quarter cleared the headline closure number. A programme that hits the SLA on closures while accumulating change-window slip, silent vendor lag, or rescan-cadence misalignment is technically meeting its commitment and substantively exposing residual risk. The leadership question is which of those two pictures the lag analysis is actually showing.
- Track the three lags (patch availability, change window, validation rescan) per severity band as separate trend lines rather than as one composite score.
- Read the direction of each trend over twelve months as a programme health signal independent of in-period values.
- Surface vendor-side patch-availability lag as a third-party risk record alongside the security-side metrics, not separate from them.
- Ask for the per-stage SLA-breach attribution when the headline closure rate is healthy but the framework window is being missed; the attribution shows which lever to pull.
- Tie the cadence numbers to the same engagement record the audit evidence comes from so the leadership read and the audit read are the same record rather than two reports.
The leadership-side platform discipline that supports this is covered on the SecPortal pages for CISOs and security leaders and for security operations leaders. The vulnerability remediation throughput research covers the closure-side cycle-time discipline that the validation-rescan lag rolls up to; the ingest-versus-capacity research covers the queue-level inflow-versus-outflow ratio the cadence argument sits inside; and the MTTD vs MTTR research covers the per-finding lifecycle clocks the cadence frames roll up from.31,32
The security leadership reporting workflow keeps the lag metrics, the per-vendor breakdown, the change-window utilisation, and the framework crosswalks on the same record so the audit-committee report and the engineering-leader report draw from one source of truth.
Conclusion
Patch cycle and remediation SLA are linked operating cadences, not standalone numbers. The patch cycle is gated by vendor advisory schedules, change-window cadence, dependency-graph fragility, and validation rescan timing; the SLA is gated by external framework anchors and internal policy. Reporting only the headline closure rate hides which cadence the programme is actually carrying when the framework window is missed. The defensible discipline is per-stage lag against documented anchors (patch availability, change window, validation rescan), per-vendor breakdown for vendor risk attribution, named compensating-control posture on every finding, and emergency-change utilisation rate, all sitting on the same engagement record so the leadership read and the operational read match.1,3,4,5,6,7,8,9,15,16
Treating the patch-cycle-versus-SLA mismatch as a property of the live engagement record rather than as a metrics layer reconstructed from change-management spreadsheets and scanner exports is the highest-leverage discipline in vulnerability programme reporting between audits. It keeps the cadence argument on evidence rather than anecdote, it surfaces the bottleneck stage early enough to act inside the SLA window, and it makes the budget conversation about vendor risk, change cadence, scanner cadence, and rescan capacity argued from the same record as the audit conversation about SLA performance. The platform does not have to run the change advisory board for the programme. It does have to make the three-lag analysis reproducible and the lifecycle chain self-documenting.
Sources
- CISA, Binding Operational Directive 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities
- CISA, Known Exploited Vulnerabilities Catalog
- PCI Security Standards Council, PCI DSS v4.0 Requirement 6.3.3
- ISO/IEC, ISO 27001:2022 Annex A 8.8 Management of Technical Vulnerabilities
- NIST, SP 800-40 Rev. 4: Guide to Enterprise Patch Management Planning
- NIST, SP 800-53 Revision 5: SI-2 Flaw Remediation
- NIST, SP 800-53 Revision 5: RA-5 Vulnerability Monitoring and Scanning
- AICPA, SOC 2 Trust Services Criteria CC7.1 Detection of Vulnerabilities
- NIST, Cybersecurity Framework (CSF) 2.0 Detect, Respond, and Recover Functions
- CISA, Stakeholder-Specific Vulnerability Categorization (SSVC)
- FIRST, EPSS Exploit Prediction Scoring System Documentation
- NIST, NVD National Vulnerability Database
- NCSC, Vulnerability Management Guidance
- OWASP, Vulnerability Management Guide
- CIS, CIS Controls v8: Control 7 Continuous Vulnerability Management
- ITIL 4, Change Enablement Practice Reference
- Microsoft Security Response Center, Security Update Guide
- Oracle, Critical Patch Updates and Security Alerts
- Cisco PSIRT, Security Advisories
- OASIS, Common Security Advisory Framework (CSAF)
- ENISA, Coordinated Vulnerability Disclosure Policies in the EU
- OWASP, Software Assurance Maturity Model (SAMM)
- BSIMM, Building Security In Maturity Model
- SecPortal, Findings & Vulnerability Management
- SecPortal, Activity Log & Workspace Audit Trail
- SecPortal, Compliance Tracking
- SecPortal, Continuous Monitoring
- SecPortal, Document Management
- SecPortal, AI-Powered Security Reports
- SecPortal, Patch Management Coordination Use Case
- SecPortal Research, Vulnerability Remediation Throughput
- SecPortal Research, Ingest vs Remediation Capacity
Run patch cycle and SLA on the same engagement record
SecPortal keeps findings, scan cadence, retests, exceptions, change references, and SLA mappings paired to one versioned engagement record so the patch-availability lag, the change-window lag, and the validation-rescan lag are reproducible at any moment between reporting cycles.