Vulnerability Remediation SLA Policy Template: one signed document for severity windows, escalation, and audit review
A free, copy-ready vulnerability remediation SLA policy template. Twelve structured sections covering policy purpose and scope, roles and responsibilities, severity definitions and source, severity-to-window table per asset tier, clock-start rule with re-discovery handling, defensible stop-the-clock conditions, percentage-threshold escalation ladder, exception path with residual-band approver authority, reporting cadence and metrics, governance review cadence, policy revision and version control, and signatures with stakeholder acknowledgement. Aligned with ISO/IEC 27001 Annex A 8.8 and Clause 5.3, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40 Rev. 4, PCI DSS Requirement 6.3 and 11.3, SOC 2 CC7.1, CISA Binding Operational Directive 22-01, and the standard expectations across HIPAA, NIS2, DORA, and FedRAMP.
Run the policy against the live record, not against a separate metrics layer
SecPortal captures findings, cycle-time stages, retests, exceptions, and SLA evidence on one engagement record, so the SLA-bound closure rate, the breach evidence, and the exception register are one query rather than a reconstruction.
Free plan available forever. No credit card required.
Twelve sections that turn an SLA promise into a defensible policy
A vulnerability remediation SLA policy is the document the security or vulnerability management function publishes to declare the time windows the programme commits to for closing findings at each severity, the conditions that pause the clock, the exception path when the window is not achievable, and the governance cadence that reviews performance. The twelve sections below cover the durable shape of the artefact across ISO/IEC 27001 Annex A 8.8, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40 Rev. 4, PCI DSS Requirement 6.3 and 11.3, SOC 2 CC7.1, and CISA Binding Operational Directive 22-01. Copy the section that fits your stage and paste the rest as you go.
The policy is not a substitute for the SLA calculator that derives the windows from a programme profile, the per-finding worksheet that tracks one item, or the exception register that aggregates accepted risk. Pair it with:
- the vulnerability management policy template, for the umbrella programme document this SLA policy operationalises;
- the vulnerability remediation SLA calculator, to derive the windows;
- the vulnerability remediation worksheet, for the per-finding work;
- the security exception register template, for the aggregate ledger of accepted risk;
- the risk acceptance form template, for the per-decision artefact behind every exception;
- the audit evidence retention policy template, for the upstream rule that names how long the SLA breach evidence, exception records, retest reports, and certificate-of-disposition trail this policy generates have to be retained, and under what hold and disposition discipline.
Copy the full policy (all twelve sections) as one block.
1. Policy purpose, scope, and authority
Open the policy with the boundary and the authority. A reviewer should know in the first paragraph which estate the rule applies to, which programme the rule belongs to, and which executive authority signed it. ISO/IEC 27001 Clause 5.2 and Clause 5.3 expect documented information security policies with named authority; this opening section is what makes the SLA policy traceable to the wider ISMS rather than a stand-alone document.
Policy title: Vulnerability Remediation SLA Policy
Policy version: {{POLICY_VERSION}}
Effective date: {{EFFECTIVE_DATE}}
Last review date: {{LAST_REVIEW_DATE}}
Next review date: {{NEXT_REVIEW_DATE}}
Purpose:
{{PLAIN_LANGUAGE_PURPOSE_PARAGRAPH}}
In scope:
- Asset classes (production applications, internal applications, infrastructure, cloud workloads, networking, endpoints, mobile applications, APIs, third-party SaaS): {{IN_SCOPE_ASSETS}}
- Environments (production, pre-production, staging, development, sandbox): {{IN_SCOPE_ENVIRONMENTS}}
- Geographies and business units: {{IN_SCOPE_BUSINESS_UNITS}}
- Source systems for findings (authenticated scan, external scan, code scan, manual review, third-party pentest, bug bounty, internal report, regulatory disclosure): {{IN_SCOPE_FINDING_SOURCES}}
Out of scope:
{{OUT_OF_SCOPE_BOUNDARIES}}
Frameworks the policy evidences (ISO 27001 Annex A 8.8, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40 Rev. 4, PCI DSS Requirement 6.3 and 11.3, SOC 2 CC7.1, CISA BOD 22-01, HIPAA, NIS2, DORA, FedRAMP, internal policy): {{FRAMEWORK_LIST}}
Approving authority: {{APPROVING_AUTHORITY_NAME_AND_ROLE}}
Approval date: {{APPROVAL_DATE}}
2. Roles and responsibilities
Name the people who carry the rule. Policies that float without named owners drift the moment the original author moves teams. ISO/IEC 27001 Clause 5.3 expects roles and authorities for the information security management system to be documented; this section is the discrete artefact that meets that expectation for the SLA policy.
Policy owner (security or GRC function leader; maintains the document, schedules review cadence, signs off on revisions):
- Name: {{POLICY_OWNER_NAME}}
- Role: {{POLICY_OWNER_ROLE}}
- Function: {{POLICY_OWNER_FUNCTION}}
Vulnerability management owner (programme leader; runs the operational discipline against the policy and reports performance):
- Name: {{VM_OWNER_NAME}}
- Role: {{VM_OWNER_ROLE}}
- Function: {{VM_OWNER_FUNCTION}}
Remediation owners (named per business unit; carry the engineering work and the SLA windows on findings routed to them):
- Identification rule (how a finding is routed to a named remediation owner): {{REMEDIATION_OWNER_ROUTING_RULE}}
Retest owner (security function leader who confirms remediation closes the finding and the SLA was met):
- Name: {{RETEST_OWNER_NAME}}
- Role: {{RETEST_OWNER_ROLE}}
Governance approver (executive authority who signs the policy and material revisions):
- Name: {{GOVERNANCE_APPROVER_NAME}}
- Role: {{GOVERNANCE_APPROVER_ROLE}}
Audit committee reporting recipient: {{AUDIT_COMMITTEE_REPORTING_RECIPIENT}}
3. Severity definitions and source
Every SLA target attaches to a severity, so the severity definition is load-bearing. Pin severity to an external standard rather than internal precedent so the audit read is unambiguous. CVSS 3.1 is the durable default; CISA KEV and EPSS adjust the picture but do not replace the base score.
Primary severity source: CVSS 3.1 base score
- Critical: CVSS 9.0 to 10.0
- High: CVSS 7.0 to 8.9
- Medium: CVSS 4.0 to 6.9
- Low: CVSS 0.1 to 3.9
- Informational: CVSS 0.0
Severity adjustment rules:
- CISA Known Exploited Vulnerabilities (KEV) listing escalates the finding one severity band (Critical caps the ladder).
- Public exploit availability (Metasploit module, public proof-of-concept) escalates the finding one severity band on internet-facing assets.
- EPSS probability above 0.10 is recorded on the finding but does not adjust severity by default.
- Environmental modifiers (modified attack vector, modified privileges required, modified impact metrics) may adjust the temporal or environmental score; the adjustment rule has to be applied consistently across the estate.
Manual override path:
- Severity may be increased or decreased by the security function with documented rationale on the finding record. The rationale is reviewed at the governance review cadence.
- Manual overrides are tracked in aggregate; an override rate above {{OVERRIDE_RATE_THRESHOLD}} percent triggers a programme review of severity calibration.
Asset criticality multiplier (applied to the SLA window in Section 4):
- Tier 1 (production internet-facing or regulated-data assets): multiplier 1.0 (tightest window).
- Tier 2 (production internal-only or non-regulated): multiplier 1.0 to 1.5.
- Tier 3 (pre-production, staging): multiplier 1.5 to 2.0.
- Tier 4 (development, sandbox): multiplier 2.0 or excluded from policy.
Reference standards: NIST SP 800-30 Rev. 1 (risk assessment), CVSS 3.1 specification, CISA Stakeholder-Specific Vulnerability Categorization (SSVC) where applicable, FIRST EPSS documentation.
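The banding and adjustment rules in this section are mechanical enough to express directly. A minimal sketch in Python; the function names are illustrative, and the choice to let the KEV and public-exploit escalations stack is an assumption the policy text leaves open (Critical caps the ladder either way).

```python
SEVERITY_BANDS = ["Informational", "Low", "Medium", "High", "Critical"]

def base_severity(cvss: float) -> str:
    """Map a CVSS 3.1 base score to the bands defined in this section."""
    if cvss == 0.0:
        return "Informational"
    if cvss < 4.0:
        return "Low"
    if cvss < 7.0:
        return "Medium"
    if cvss < 9.0:
        return "High"
    return "Critical"

def adjusted_severity(cvss: float, kev_listed: bool = False,
                      public_exploit: bool = False,
                      internet_facing: bool = False) -> str:
    """Adjustment rules: a KEV listing escalates one band; a public
    exploit escalates one band on internet-facing assets. Whether the
    two escalations stack is a policy choice; here they do (assumption),
    and Critical caps the ladder either way."""
    idx = SEVERITY_BANDS.index(base_severity(cvss))
    if kev_listed:
        idx += 1
    if public_exploit and internet_facing:
        idx += 1
    return SEVERITY_BANDS[min(idx, len(SEVERITY_BANDS) - 1)]
```

For example, adjusted_severity(7.4, kev_listed=True) returns Critical, which routes the finding to the KEV fast track in Section 4.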
4. Severity-to-window table
This table is the headline of the policy: the windows are the rule the rest of the document operationalises. Anchor the windows to external references (PCI DSS Requirement 6.3.3, CISA BOD 22-01) rather than internal precedent so the audit read is defensible. Pair every SLA with an internal SLO that gives engineering teams a buffer before the breach.
Definitions:
- SLA (Service Level Agreement): the published commitment to leadership and audit. A breach is a logged event with a documented response.
- SLO (Service Level Objective): the internal target the programme operates against; SLO < SLA so the buffer absorbs operational variance.
Default windows (apply asset criticality multiplier from Section 3):
Tier 1 - production internet-facing or regulated-data assets
- Critical residual (CVSS 9.0-10.0, or KEV listing): SLO 14 days / SLA 30 days (faster track for KEV: SLO 7 days / SLA 14 days, anchored to CISA BOD 22-01).
- High residual (CVSS 7.0-8.9): SLO 30 days / SLA 60 days (PCI DSS scope: SLA 30 days, aligned to PCI DSS Requirement 6.3.3).
- Medium residual (CVSS 4.0-6.9): SLO 60 days / SLA 90 days.
- Low residual (CVSS 0.1-3.9): SLO 90 days / SLA 180 days, or next routine maintenance window if sooner.
Tier 2 - production internal-only or non-regulated
- Critical residual: SLO 21 days / SLA 45 days.
- High residual: SLO 45 days / SLA 90 days.
- Medium residual: SLO 90 days / SLA 180 days.
- Low residual: next routine maintenance window or 180 days.
Tier 3 - pre-production and staging
- Critical residual: SLO 30 days / SLA 60 days.
- High residual: SLO 60 days / SLA 120 days.
- Medium and low: best effort, recorded on finding.
Tier 4 - development and sandbox
- Recorded on finding for visibility; outside the SLA policy.
Programme profile selection:
- Standard programme: Tier 1 default.
- PCI DSS scope: Tier 1 with high-residual SLA at 30 days.
- ISO 27001 attestation: Tier 1 default plus annex evidence.
- SOC 2 attestation: Tier 1 default plus continuous detection evidence.
- CISA KEV-prioritised programme: KEV findings on Tier 1 assets at 14-day SLA.
Custom adjustments to the table require governance approval and a documented rationale on the policy revision record.
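Because a few published cells round differently from a pure multiplier (Tier 2 medium, Tier 3 critical), a lookup that encodes the table verbatim is more faithful than deriving the windows from Section 3's multipliers. A minimal sketch; the function name, and encoding the Tier 2 low maintenance-window option as a missing SLO, are illustrative assumptions.

```python
# The published windows, in days: {tier: {severity: (SLO, SLA)}}.
# None in the SLO slot stands in for "next routine maintenance window".
WINDOWS = {
    1: {"Critical": (14, 30), "High": (30, 60),
        "Medium": (60, 90), "Low": (90, 180)},
    2: {"Critical": (21, 45), "High": (45, 90),
        "Medium": (90, 180), "Low": (None, 180)},
    3: {"Critical": (30, 60), "High": (60, 120)},
}
KEV_FAST_TRACK = (7, 14)  # Tier 1 KEV findings, anchored to CISA BOD 22-01

def window_days(severity: str, tier: int, kev_listed: bool = False,
                pci_scope: bool = False):
    """Look up (SLO, SLA) in days. Returns None where the table leaves
    the cell outside the SLA windows (Tier 4, and Tier 3 medium/low
    best-effort)."""
    if tier == 4:
        return None
    if kev_listed and tier == 1:
        return KEV_FAST_TRACK
    window = WINDOWS.get(tier, {}).get(severity)
    if window and pci_scope and tier == 1 and severity == "High":
        return (window[0], 30)  # PCI DSS Requirement 6.3.3: 30-day SLA
    return window
```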
5. Clock-start rule
The clock-start rule is one of the most consequential design choices in the policy. Ambiguity here is the most common reason audit findings disagree with internal performance reports. Name the choice explicitly; do not let the rule live only in operational practice and fall out of the document.
Clock-start rule (select one and document explicitly):
[ ] Option A: Clock starts at discovery
The SLA timer starts at the timestamp the finding was first surfaced (scanner output, pentest report submission, bug bounty submission, internal disclosure). Most defensible to external auditors because it does not depend on internal triage capacity.
[ ] Option B: Clock starts at triage acceptance
The SLA timer starts at the timestamp the security function confirmed the finding is not a duplicate, false positive, or out-of-scope item. More realistic to engineering teams; less defensible to external auditors. Requires a separate internal triage SLA so triage drag is observable.
[ ] Option C: Clock starts at ownership assignment
The SLA timer starts at the timestamp the finding was routed to a named remediation owner. Most operationally honest; least defensible to external auditors. Requires separate triage and routing SLAs to be visible.
Selected rule: {{SELECTED_CLOCK_START_RULE}}
Internal triage SLA (when clock-start is Option B or C):
- Triage cycle time target: {{TRIAGE_CYCLE_TIME_TARGET}}
- Triage cycle time SLA: {{TRIAGE_CYCLE_TIME_SLA}}
- Triage breach reporting cadence: {{TRIAGE_BREACH_REPORTING_CADENCE}}
Internal ownership-assignment SLA (when clock-start is Option C):
- Routing cycle time target: {{ROUTING_CYCLE_TIME_TARGET}}
- Routing cycle time SLA: {{ROUTING_CYCLE_TIME_SLA}}
- Routing breach reporting cadence: {{ROUTING_BREACH_REPORTING_CADENCE}}
Re-discovery rule:
- A finding closed and re-opened by a subsequent scan starts a fresh SLA clock at the re-open timestamp.
- A finding closed and re-discovered with material change to the underlying weakness starts a fresh SLA clock as a new finding.
- A finding closed and re-discovered without material change is logged as a re-open against the original finding identifier; the re-open count is reported in the governance review.
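A sketch of how the three options and the re-discovery rule translate into a deadline computation, assuming the stage timestamps are recorded on the finding; the fallback to the discovery timestamp when a later-stage timestamp is missing is an assumption (the conservative choice), not policy text.

```python
from datetime import datetime, timedelta

def sla_deadline(discovered: datetime, sla_days: int,
                 triaged: datetime | None = None,
                 assigned: datetime | None = None,
                 rule: str = "discovery") -> datetime:
    """Deadline under the selected clock-start rule: 'discovery'
    (Option A), 'triage' (Option B), or 'assignment' (Option C).
    A missing stage timestamp falls back to discovery (assumption:
    the conservative default)."""
    start = {"discovery": discovered,
             "triage": triaged or discovered,
             "assignment": assigned or discovered}[rule]
    return start + timedelta(days=sla_days)

# Re-discovery without material change keeps the finding identifier but
# starts a fresh clock, so the deadline is recomputed from the re-open
# timestamp: sla_deadline(discovered=reopened_at, sla_days=60)
```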
6. Stop-the-clock conditions
Stop-the-clock pauses the SLA window when a documented external dependency genuinely blocks remediation. Conditions that are NOT defensible (engineering capacity, sprint commitments, internal disagreement on severity) belong on the exception path in Section 8, not on stop-the-clock. Auditors read both lists and challenge the difference.
Defensible stop-the-clock conditions (select all the policy admits):
[ ] Vendor patch unavailable
Documented case open with the vendor; vendor case number recorded on the finding; vendor advisory cited; resume condition is vendor patch availability or vendor end-of-life decision.
[ ] Change-window restriction
Documented production change-window policy that prevents deployment inside the SLA; next available change window named on the finding; resume condition is change-window opening.
[ ] Evidence-collection delay outside the security function
Forensic preservation, regulatory hold, legal review; named external authority and case reference recorded; resume condition is the named authority releasing the asset.
[ ] Active incident declared on the affected asset
Incident response process supersedes the SLA; incident reference number recorded; resume condition is incident closure.
[ ] Third-party dependency the firm does not control
Upstream vendor or service provider whose remediation the firm cannot force; vendor name and ticket reference recorded; resume condition is upstream remediation.
[ ] Compensating-control deployment in lieu of patch
A compensating control is deployed and routed through the exception path in Section 8; the SLA clock pauses while the exception is active; resume condition is exception closure or expiry.
Conditions that are NOT defensible stop-the-clock:
- Engineering capacity, sprint commitments, release-train scheduling.
- Internal disagreement on severity (route through severity override in Section 3).
- Pending business decision on retire-versus-fix (route through exception path in Section 8).
- Pending budget approval for engineering work.
Stop-the-clock evidence requirements per finding:
- Stop-the-clock category recorded on the finding.
- External reference (vendor case, change-window ticket, regulatory hold reference, incident reference) recorded.
- Resume condition recorded.
- Stop-the-clock approver named (vulnerability management owner or above).
- Stop-the-clock duration tracked; durations above {{STOP_CLOCK_REVIEW_THRESHOLD}} days reviewed at the governance cadence.
Reporting:
- Stop-the-clock count and aggregate duration reported in the governance review.
- Stop-the-clock rate above {{STOP_CLOCK_RATE_THRESHOLD}} percent of open findings triggers a programme review.
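Stop-the-clock only stays defensible if the net elapsed time is reproducible from the recorded intervals. A minimal sketch, assuming pauses are stored as (paused_at, resumed_at) pairs on the finding and do not overlap; the names are illustrative.

```python
from datetime import datetime, timedelta

def sla_elapsed(clock_start: datetime, now: datetime,
                pauses: list[tuple[datetime, datetime | None]]) -> timedelta:
    """Elapsed SLA time net of approved stop-the-clock intervals.
    resumed_at is None while a pause is still open. Assumes the
    recorded pauses do not overlap one another."""
    elapsed = now - clock_start
    for paused_at, resumed_at in pauses:
        # Clamp each pause to the [clock_start, now] window before
        # subtracting, so malformed records cannot produce negative time.
        start = max(paused_at, clock_start)
        end = min(resumed_at or now, now)
        if end > start:
            elapsed -= end - start
    return elapsed
```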
7. Escalation ladder
The escalation ladder surfaces the SLA risk at multiple points before the breach so the warning lands before the audit reconstruction. Trigger at percentage thresholds of the window so the ladder works the same way for a 14-day window and a 180-day window. Each step has a named recipient, a delivery channel, and an evidence requirement.
Escalation thresholds (apply per-finding):
50 percent of SLA window
- Recipient: remediation owner.
- Channel: notification on finding record.
- Evidence: notification logged on the finding activity log.
- Action expected: progress update on the finding record.
75 percent of SLA window
- Recipient: remediation owner and security business partner.
- Channel: notification on finding record plus email to the business partner.
- Evidence: status update logged on the finding activity log.
- Action expected: documented remediation plan with an internal target date.
100 percent of SLA window (breach)
- Recipient: remediation owner, remediation owner manager, vulnerability management owner.
- Channel: notification on finding record, email to manager, breach event logged.
- Evidence: breach event recorded on the finding activity log; finding moves to overdue status.
- Action expected: documented breach response (remediation accelerated, exception filed, or compensating control approved).
150 percent of SLA window (overdue, accumulating)
- Recipient: head of security, business unit owner, vulnerability management owner.
- Channel: programme escalation event; reviewed at the next governance review.
- Evidence: programme escalation logged.
- Action expected: exception routed through Section 8 if remediation is not feasible at the residual rating.
200 percent of SLA window or programme-defined critical age
- Recipient: audit committee at the next reporting cycle.
- Channel: governance reporting cadence.
- Evidence: audit committee notification logged.
- Action expected: executive risk acceptance, escalated remediation, or programme-level review.
Reporting:
- Breach count and breach rate per severity band reported at every governance review.
- Breach reasons aggregated and reported (capacity, dependency, exception-pending, other).
- Repeat-breach owners (more than {{REPEAT_BREACH_THRESHOLD}} breaches in the period) reviewed individually with the business partner.
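The reason percentage thresholds scale across window lengths is easy to see in code. A minimal sketch; the step labels and function name are illustrative.

```python
# Ladder thresholds as fractions of the SLA window, highest first.
ESCALATION_STEPS = [
    (2.00, "audit committee reporting"),
    (1.50, "programme escalation"),
    (1.00, "breach event"),
    (0.75, "owner and business partner notification"),
    (0.50, "owner notification"),
]

def escalation_step(elapsed_days: float, sla_days: int):
    """Return the highest ladder step the finding has crossed, or None.
    The same ladder fires at day 7 of a 14-day window and at day 90 of
    a 180-day window."""
    fraction = elapsed_days / sla_days
    for threshold, step in ESCALATION_STEPS:
        if fraction >= threshold:
            return step
    return None
```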
8. Exception path
Every SLA policy needs an explicit exception path that names how a finding moves from the SLA window into the exception register, who approves at each residual rating, and what evidence the exception requires. The exception path is what keeps the SLA from becoming a fiction the engineering organisation routinely misses.
When an exception is the right answer:
- Remediation is not feasible inside the SLA window for documented technical or business reasons.
- The reason is not engineering capacity (capacity issues are programme-level escalations, not exceptions).
- A compensating control reduces residual risk to a level the business is willing to carry.
Exception approver authority (by residual rating):
- Low residual: security manager or equivalent.
- Medium residual: head of security or equivalent.
- High residual: CISO or risk committee.
- Critical residual: CISO and executive sponsor.
Required evidence per exception:
- Linked finding identifier (canonical record).
- Plain-language risk summary.
- Original CVSS 3.1 vector and base score.
- Compensating controls in place with control reference, owner, verification method, and failure mode.
- Residual likelihood and residual impact after compensating controls.
- Named risk owner and named security approver.
- Hard expiry date (default ladder: critical residual 6 months, high 12 months, medium 12-24 months, low 24 months).
- Trigger conditions that invalidate the exception inside the calendar window.
- Lifecycle audit trail.
Exception register:
- All approved exceptions are recorded in the security exception register; the register is reviewed at the cadence in Section 10.
- Exceptions are closed by remediation, renewal with fresh approval, escalation to executive risk acceptance, or cancellation. Silent expiry extension is not a closure.
- Exception count by residual rating is reported at every governance review.
Reference: see the dedicated security exception register template for the parent ledger that aggregates all approved exceptions.
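The approver ladder, the default expiry ladder, and the evidence checklist are all data, which makes a completeness check cheap. A sketch under assumptions: the exception record is a flat dict, the field names are hypothetical, and the medium band's 12-24 month range is pinned at its 12-month floor.

```python
# Default hard expiry (months) and minimum approver by residual rating.
EXCEPTION_DEFAULTS = {
    "Critical": (6,  "CISO and executive sponsor"),
    "High":     (12, "CISO or risk committee"),
    "Medium":   (12, "head of security or equivalent"),
    "Low":      (24, "security manager or equivalent"),
}

# The evidence checklist above, as record fields (names are assumptions).
REQUIRED_EVIDENCE = {
    "finding_id", "risk_summary", "cvss_vector_and_score",
    "compensating_controls", "residual_likelihood", "residual_impact",
    "risk_owner", "security_approver", "expiry_date",
    "trigger_conditions", "audit_trail",
}

def exception_gaps(record: dict) -> set:
    """Return the evidence fields missing from an exception record;
    an empty set means the checklist is satisfied."""
    return REQUIRED_EVIDENCE - record.keys()
```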
9. Reporting cadence and metrics
Metrics turn the policy into observable performance. Pair every metric with a frequency and a recipient so the reporting trail does not depend on memory. Audit committees read metrics that have stable definitions across reporting cycles; redefining metrics mid-cycle is the most common reason a programme loses the trend line. The two headline operational definitions are sketched in code at the end of this section.
Operational metrics (reported monthly to vulnerability management owner and head of security):
- SLA-bound closure rate per severity band, per asset tier.
- Aged-queue distribution (count of open findings past 30, 60, 90, 180 days).
- Inflow per period (findings raised) versus closure per period (findings closed).
- Exception count by residual rating, broken into not-yet-expired, expired, and renewed.
- Compensating-control count, with median age.
- Stop-the-clock count and aggregate duration, by category.
- Breach count and breach rate per severity band.
- Re-open rate (findings closed and re-opened by a subsequent scan).
- Retest cycle time (closure to retest verification).
Governance metrics (reported quarterly to audit committee):
- All operational metrics, summarised over the rolling quarter.
- Trend lines on the rolling twelve months for each metric so direction is visible.
- Material breaches with reason aggregation.
- Exception register growth and closure trend.
- Policy-level deviations or revisions in the period.
Board metrics (reported quarterly or as material change requires):
- Headline SLA performance (closure rate, breach rate, exception count) per severity band.
- Twelve-month trend lines.
- Material policy revisions and the rationale.
Metric definition stability:
- All metrics are defined in this policy and revised only at policy revision points.
- A metric whose definition changes is flagged in the report so the trend line is interpreted with the change in mind.
- A metric whose denominator changes (asset estate growth, scope expansion) is reported with both the rate and the absolute count.
Distribution list:
- Operational reports: vulnerability management owner, head of security, business partners. Frequency: monthly.
- Governance reports: audit committee, CISO, risk committee. Frequency: quarterly.
- Board reports: board, executive sponsor. Frequency: quarterly or as material change requires.
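A sketch of the two headline operational definitions (SLA-bound closure rate and aged-queue distribution), assuming each finding is a dict carrying opened_on, closed_on, and sla_deadline dates; the field names are illustrative.

```python
from datetime import date

def sla_bound_closure_rate(findings: list[dict]):
    """Share of findings closed in the period that closed on or before
    their SLA deadline. An empty period returns None rather than a
    misleading zero."""
    closed = [f for f in findings if f.get("closed_on")]
    if not closed:
        return None
    on_time = sum(1 for f in closed if f["closed_on"] <= f["sla_deadline"])
    return on_time / len(closed)

def aged_queue(findings: list[dict], as_of: date,
               buckets: tuple = (30, 60, 90, 180)) -> dict:
    """Count open findings older than each age bucket, matching the
    aged-queue distribution metric above."""
    ages = [(as_of - f["opened_on"]).days
            for f in findings if not f.get("closed_on")]
    return {b: sum(1 for age in ages if age > b) for b in buckets}
```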
10. Governance review cadence
Performance review answers whether the programme is hitting the windows the policy publishes. Policy review answers whether the underlying environment still matches the rule. Both have to land in the document trail; one without the other produces either a policy nobody operates against or a programme that operates against rules the audit will challenge.
Performance review (the programme is running against the policy):
- Frequency: monthly to vulnerability management owner; quarterly to audit committee; annually to full board.
- Inputs: operational metrics from Section 9, breach event log from Section 7, exception register from Section 8.
- Outputs: documented action register; performance trend report; escalations to the board if material.
- Recipient: audit committee receives the quarterly summary; full board receives the annual summary.
Policy review (the policy still matches the environment):
- Frequency: annual minimum; triggered by material change.
- Material change includes:
- New framework adoption (firm enters PCI DSS scope, achieves SOC 2 attestation, expands into a regulated geography).
- New regulatory expectation (CISA directive, sector-specific rule, updated framework version).
- Material asset estate change (new business unit, geography, tenant, product line).
- Material threat environment change (new exploit class, sustained campaign on the firm sector).
- Material delivery model change (new scanner, new integration, new tooling).
- Inputs: framework drift assessment; estate change record; threat environment briefing; programme performance trend.
- Outputs: policy revision draft; redline against prior version; revision rationale.
- Approver: governance approver from Section 2.
- Distribution: full distribution list rebroadcast on revision.
Audit committee questions the review answers:
- Is SLA performance trending up, flat, or down across the rolling twelve months.
- Are exception count and compensating-control count trending up, flat, or down.
- Are breach reasons concentrated (capacity, dependency, severity dispute) and are the structural fixes landing.
- Is policy review keeping pace with framework adoption and estate change.
- Is the policy operating against the right severity definitions and the right asset tiering.
Board questions the review answers:
- Is the SLA policy delivering durable risk reduction or accumulating exceptions.
- Is policy revision tracking framework drift and regulatory change.
- Are residual risks from exception accumulation and compensating-control accumulation visible to leadership.
11. Policy revision and version control
Versioning the policy is what makes the audit trail reproducible. A reviewer reading the current policy should be able to reconstruct what the rule was at any historical reporting cycle without depending on memory. Hold the revision history on the document so the trend lines in the metrics section can be read against the rules that were in force at each point.
Version control:
- Policy version number, effective date, last review date, next review date recorded at the top of every revision.
- Revision history table maintained on the policy with version, date, summary of change, approver, and rationale.
- Prior versions retained according to the document control schedule below.
Revision triggers (in addition to the annual review):
- Audit recommendation requires policy change.
- Framework or regulation update requires policy change.
- Programme performance review identifies a structural gap requiring rule change.
- Material estate change requires asset tier or scope change.
- Material threat change requires severity or window change.
Revision process:
1. Policy owner drafts revision; redline against current version.
2. Vulnerability management owner reviews operational impact.
3. Governance approver signs the revision.
4. Revised policy distributed to full distribution list.
5. Effective date set with appropriate notice (default 30 days; emergency revisions effective immediately with documented rationale).
6. Revision logged in the revision history table; prior version archived.
Document control:
- Storage location: {{POLICY_STORAGE_LOCATION}}
- Access: {{POLICY_ACCESS_CONTROLS}}
- Retention: {{POLICY_RETENTION_PERIOD}} (typically 7 years for financial-services and healthcare programmes; align to firm record retention policy).
- Distribution list: {{POLICY_DISTRIBUTION_LIST}}
- Related documents:
- Security exception register policy
- Risk acceptance policy
- Vulnerability management programme charter
- Asset criticality classification policy
- Information security management system (ISMS) statement of applicability
12. Signatures and acknowledgement
Sign the policy at publication and at every material revision. The signature trail is what makes the rule defensible at audit; an unsigned policy is treated as a draft regardless of how widely it is followed in practice. Acknowledgement from key stakeholders documents that the rule is known to the people who carry it.
Approval signatures (required at publication and at material revision):
Policy owner
- Name: {{POLICY_OWNER_SIGNATURE_NAME}}
- Role: {{POLICY_OWNER_SIGNATURE_ROLE}}
- Date: {{POLICY_OWNER_SIGNATURE_DATE}}
Vulnerability management owner
- Name: {{VM_OWNER_SIGNATURE_NAME}}
- Role: {{VM_OWNER_SIGNATURE_ROLE}}
- Date: {{VM_OWNER_SIGNATURE_DATE}}
Governance approver
- Name: {{GOVERNANCE_APPROVER_SIGNATURE_NAME}}
- Role: {{GOVERNANCE_APPROVER_SIGNATURE_ROLE}}
- Date: {{GOVERNANCE_APPROVER_SIGNATURE_DATE}}
Acknowledgement (recorded for stakeholder groups; renewed at material revision):
- Heads of business units carrying remediation responsibility: {{BUSINESS_UNIT_ACKNOWLEDGEMENTS}}
- Heads of platform and infrastructure teams: {{PLATFORM_TEAM_ACKNOWLEDGEMENTS}}
- Engineering leadership: {{ENGINEERING_LEADERSHIP_ACKNOWLEDGEMENTS}}
- GRC and compliance function: {{GRC_ACKNOWLEDGEMENTS}}
- Internal audit function: {{INTERNAL_AUDIT_ACKNOWLEDGEMENTS}}
Effective date once all approval signatures collected: {{EFFECTIVE_DATE_FINAL}}
Acknowledgement evidence:
- Signed acknowledgement records stored with the policy version.
- Acknowledgement renewal cadence: at every material revision plus annual training cycle.
- New stakeholder onboarding includes policy acknowledgement.
Six failure modes the policy has to design against
The SLA policy fails the audit read in recognisable patterns. Each failure has a structural fix that the template above is designed to enforce. Read this list before you customise the template so the customisation does not weaken the discipline that makes the policy defensible.
The policy lives as a slide rather than a signed document
A leadership briefing names the SLA windows verbally; a slide deck shows the table. There is no signed policy, no version history, no revision trail. The first audit question (show me the document that names this rule) cannot be answered. The fix is one signed policy, one version history, one approver per revision.
Clock-start is undefined and operational practice diverges from reporting
The policy says the SLA is 30 days but does not name when the clock starts. Operations starts the clock at ownership assignment; the audit assumes the clock started at discovery. The two reports disagree by weeks. The fix is to name the clock-start rule explicitly and run a separate triage SLA so the gap between discovery and assignment is observable rather than hidden.
Stop-the-clock used as a capacity escape hatch
Engineering capacity issues are recorded as stop-the-clock against vendor patch unavailable or change-window restrictions. The SLA performance looks healthy on paper because the clock paused; the audit reconstruction reveals the displacement. The fix is a strict stop-the-clock list that does not include capacity, and a clear exception path that captures the actual blocker.
Severity overrides flow without rationale
The vulnerability management function reduces severity from High to Medium on findings whose remediation will not fit the High-severity window. The override has no rationale recorded; the override rate climbs; the SLA performance looks better than the underlying picture. The fix is a documented severity override path with rationale on the finding and an aggregate override rate that triggers programme review when it crosses a threshold.
No SLO buffer between operational target and audit commitment
The policy publishes only the SLA. Engineering teams aim at the SLA, hit it on average, miss it on the tail. The breach rate sits at 30 to 40 percent because the operational target and the audit commitment are the same number with no buffer. The fix is to publish both an SLA and an SLO where SLO is meaningfully tighter; the SLO is what engineering operates against, the SLA is what the audit reads.
Escalation ladder fires only at breach
There is no warning at 50 or 75 percent of the window; the first signal anyone receives is the breach event itself. By that point the engineering owner has no time to respond and the breach is logged as a fait accompli. The fix is a percentage-threshold escalation ladder that surfaces the SLA risk before the breach so the warning lands when intervention is still possible.
Ten questions the quarterly governance review has to answer
Operational review keeps the programme on top of breaches and exception flow. Governance review answers whether the policy is delivering durable risk reduction or accumulating exceptions and breaches that the audit will read as policy drift. Run these ten questions at every quarterly review and capture the answers in the governance record.
1. What is the SLA-bound closure rate per severity band over the rolling twelve months, and is the trend line up, flat, or down.
2. How many findings breached SLA in the period, and what was the aggregate breach reason distribution (capacity, dependency, exception-pending, severity dispute, other).
3. What is the aged-queue distribution at the end of the period (count past 30, 60, 90, 180 days) and how does it compare to the prior period.
4. How many active exceptions are in the register, broken down by residual rating, and how many are approaching expiry within 60 days.
5. How many compensating controls are live, what is their median age, and how many would the policy retire if the underlying remediation shipped.
6. What is the aggregate stop-the-clock duration in the period, broken down by category, and are any single-finding durations approaching the review threshold.
7. How is the inflow-versus-closure ratio behaving per severity band, and is the policy on track to absorb the inflow inside the published windows.
8. How many manual severity overrides happened in the period, what was the override rate, and is calibration drifting away from the CVSS-plus-environment baseline.
9. How many findings re-opened in the period, what was the re-open rate, and is verification discipline adequate to support the published SLA.
10. Has any framework, regulation, or estate change in the period triggered a policy review, and is the review on schedule.
How the policy pairs with SecPortal
The template above is copy-ready as a standalone artefact. If your team already runs finding tracking, remediation, and compliance evidence on a workspace, the policy performance becomes the byproduct of the work rather than a separate metrics project. SecPortal pairs every finding to a versioned engagement record through findings management, so the SLA-bound closure rate, the aged-queue distribution, the exception count, and the breach evidence are one query against the same record rather than a reconstruction from spreadsheets.
The activity log captures the timestamped chain of state changes by user, so the elapsed time between discovery, triage acceptance, ownership assignment, fix deployment, retest passed, and closure is observable rather than asserted. The cycle-time stage breakdown is what makes the clock-start rule in Section 5 reproducible at any moment between audit cycles. The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export, so the policy performance can be sliced by framework when an auditor asks for the closure rate against a specific control set.
The continuous monitoring feature runs daily, weekly, biweekly, and monthly schedules so the inflow side of the policy is observable rather than asserted; the inflow-versus-closure ratio in Section 9 is a query against the live record rather than a spreadsheet aggregate. The team management feature carries the role-based access control that decides who can apply a severity override, who can record a stop-the-clock event, who can approve an exception at each residual rating, and who runs the governance review. The AI report generation workflow produces leadership summaries from the same engagement data so the audit committee read of SLA performance and the operational read are the same record rather than two independently-edited documents that diverge between reporting cycles.