Vulnerability Management Policy Template: one signed document for programme charter, scope, identification, classification, routing, exceptions, and audit review
A free, copy-ready vulnerability management policy template. Twelve structured sections covering programme charter and authority, scope and asset coverage, roles and responsibilities, identification sources and cadence, classification and severity model, routing and ownership rules, remediation SLAs, exception governance, reporting cadence and metrics, governance review cadence, document control, and signatures with stakeholder acknowledgement. Aligned with ISO/IEC 27001 Annex A 8.8 and Clause 5.3, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40 Rev. 4, PCI DSS Requirement 6.3 and 11.3, SOC 2 CC7.1, HIPAA 45 CFR 164.308, NIS2 Article 21, DORA Article 5, and CISA Binding Operational Directive 22-01.
Run the policy against the live programme record, not against a separate report
SecPortal carries findings, owners, severity, evidence, exceptions, retests, and policy artefacts on one workspace so the audit read of programme performance and the operational read are the same record. Free plan available.
Twelve sections that turn a vulnerability management programme into a defensible policy
A vulnerability management policy is the umbrella document a security or GRC function publishes to declare how the organisation identifies, classifies, prioritises, owns, remediates, and reports security weaknesses across the in-scope estate. It sits one level below the wider information security policy and one level above the operational SLA policy in the policy hierarchy. The twelve sections below cover the durable shape of the artefact across ISO/IEC 27001 Annex A 8.8 and Clause 5.3, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40 Rev. 4, PCI DSS Requirements 6.3 and 11.3, SOC 2 CC7.1, HIPAA 45 CFR 164.308, NIS2 Article 21, DORA Article 5, and CISA Binding Operational Directive 22-01. Copy the sections that fit your stage and add the rest as the programme matures.
1. Programme charter and authority
Open the policy with the programme purpose and the executive authority. ISO/IEC 27001 Clause 5.2 expects a documented information security policy with named authority; the vulnerability management policy is the durable artefact one level below the umbrella ISMS policy. The charter answers the first audit question (what programme exists, who signed it) before the policy moves into operational rules.
Policy title: Vulnerability Management Policy
Policy version: {{POLICY_VERSION}}
Effective date: {{EFFECTIVE_DATE}}
Last review date: {{LAST_REVIEW_DATE}}
Next review date: {{NEXT_REVIEW_DATE}}
Programme purpose:
{{PLAIN_LANGUAGE_PURPOSE_PARAGRAPH}}
Programme objectives (the rule the audit reads against):
- Identify security weaknesses across the in-scope estate at the cadence and through the channels named in this policy.
- Classify weaknesses against a documented severity model so prioritisation is reproducible.
- Route every finding to a named remediation owner with a documented SLA.
- Govern exceptions where remediation is not feasible inside the SLA window through a documented approval ladder.
- Report performance to operational, governance, and board layers at the published cadence.
- Maintain audit-readable evidence of identification, classification, routing, remediation, retest, exception, and disposition for each finding.
Programme sponsor (executive authority that publishes the policy):
- Name: {{EXECUTIVE_SPONSOR_NAME}}
- Role: {{EXECUTIVE_SPONSOR_ROLE}}
Approving authority: {{APPROVING_AUTHORITY_NAME_AND_ROLE}}
Approval date: {{APPROVAL_DATE}}
Policy hierarchy:
- Parent policy: Information Security Policy / Information Security Management System.
- Child policies and procedures referenced from this policy:
- Vulnerability Remediation SLA Policy.
- Security Exception Register Policy.
- Risk Acceptance Policy.
- Audit Evidence Retention Policy.
- Vulnerability Disclosure Policy.
- Asset Criticality Classification Policy.
- Incident Response Plan.
Frameworks the policy evidences (ISO 27001 Annex A 8.8 and Clause 5.3, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40 Rev. 4, PCI DSS Requirement 6.3 and 11.3, SOC 2 CC7.1, HIPAA 45 CFR 164.308, NIS2 Article 21, DORA Article 5, CISA BOD 22-01, internal policy): {{FRAMEWORK_LIST}}
2. Scope and asset coverage
Scope is the most contested part of the policy at audit because ambiguity here means a finding source can silently fall outside the programme. Name the asset classes, environments, business units, and finding sources explicitly. Tier the estate so the policy applies the right cadence to the right asset rather than one rule across the full footprint.
In-scope asset classes (carry the full policy):
- Production applications (internet-facing and internal).
- APIs (public, partner, internal).
- Infrastructure (servers, virtual machines, containers, Kubernetes clusters, networking).
- Cloud workloads (compute, storage, identity, serverless, managed services).
- Endpoints (workstations, laptops, mobile devices issued by the firm).
- Mobile applications (firm-published applications on app stores).
- Third-party SaaS the firm operates as a tenant where the firm is the controller of the configuration.
- Source code repositories (production code, infrastructure-as-code, configuration-as-code).
- Vendor and supply-chain dependencies (commercial libraries, open-source dependencies, vendor-operated services where the firm carries residual risk).
- {{ADDITIONAL_IN_SCOPE_ASSETS}}
Out-of-scope or modified-scope asset classes:
- Personally owned devices outside an MDM enrolment programme.
- Vendor systems where the contract assigns vulnerability management to the vendor.
- Decommissioned assets logged in the asset retirement register (referenced from the asset decommissioning workflow).
- {{ADDITIONAL_OUT_OF_SCOPE_BOUNDARIES}}
Asset tiers (the policy applies cadence per tier):
- Tier 1: production internet-facing assets and regulated-data assets (PCI DSS scope, HIPAA scope, financial-services scope, FedRAMP boundary).
- Tier 2: production internal-only assets and non-regulated production.
- Tier 3: pre-production and staging environments.
- Tier 4: development and sandbox environments (typically excluded from the formal SLA).
Environments in scope: {{IN_SCOPE_ENVIRONMENTS}}
Geographies in scope: {{IN_SCOPE_GEOGRAPHIES}}
Business units in scope: {{IN_SCOPE_BUSINESS_UNITS}}
Asset criticality classification:
- Source: Asset Criticality Classification Policy (referenced).
- Review cadence: {{ASSET_CRITICALITY_REVIEW_CADENCE}}
- Classification owner: {{ASSET_CRITICALITY_OWNER}}
Finding sources in scope (every source feeds the canonical finding record):
- Authenticated scanning (with service account, credential rotation cadence, verified domain binding).
- External (unauthenticated) scanning of internet-facing surface.
- Code scanning (SAST, SCA, secret scanning) on source repositories.
- Manual review and threat modelling.
- Third-party penetration testing engagements.
- Red team and purple team engagements.
- Bug bounty submissions and external researcher disclosure.
- Vendor advisories, KEV listings, EPSS feeds, threat intelligence.
- Internal disclosure (engineer-reported, support-channel signals).
- Regulator notification, downstream-customer notification, supply-chain advisories.
Reference: Vulnerability Disclosure Policy carries the inbound researcher channel; this policy carries the inbound triage and routing.
3. Roles and responsibilities
Name the people who carry the rule. Policies that float without named owners drift the moment the original author moves teams. ISO/IEC 27001 Clause 5.3 expects roles and authorities for the information security management system to be documented; this section is the discrete artefact that meets that expectation for the vulnerability management programme.
Policy owner (security or GRC function leader; maintains the document, schedules review cadence, signs off on revisions):
- Name: {{POLICY_OWNER_NAME}}
- Role: {{POLICY_OWNER_ROLE}}
- Function: {{POLICY_OWNER_FUNCTION}}
Vulnerability management programme owner (operational leader; runs the discipline against the policy and reports performance):
- Name: {{VM_PROGRAMME_OWNER_NAME}}
- Role: {{VM_PROGRAMME_OWNER_ROLE}}
- Function: {{VM_PROGRAMME_OWNER_FUNCTION}}
Finding triage owner (security function lead who confirms a finding is canonical, deduplicated, classified, and routed):
- Name: {{TRIAGE_OWNER_NAME}}
- Role: {{TRIAGE_OWNER_ROLE}}
Remediation owners (named per business unit; carry the engineering work and the SLA windows on findings routed to them):
- Identification rule (how a finding is routed to a named remediation owner): {{REMEDIATION_OWNER_ROUTING_RULE}}
- Fallback rule (what happens when the asset record is incomplete): {{REMEDIATION_OWNER_FALLBACK_RULE}}
Retest owner (security function lead who confirms remediation closes the finding and the SLA was met):
- Name: {{RETEST_OWNER_NAME}}
- Role: {{RETEST_OWNER_ROLE}}
Exception approver ladder (by residual rating):
- Low residual: {{LOW_RESIDUAL_APPROVER}}
- Medium residual: {{MEDIUM_RESIDUAL_APPROVER}}
- High residual: {{HIGH_RESIDUAL_APPROVER}}
- Critical residual: {{CRITICAL_RESIDUAL_APPROVER}}
Governance approver (executive authority who signs the policy and material revisions):
- Name: {{GOVERNANCE_APPROVER_NAME}}
- Role: {{GOVERNANCE_APPROVER_ROLE}}
Audit committee reporting recipient: {{AUDIT_COMMITTEE_REPORTING_RECIPIENT}}
Risk committee reporting recipient: {{RISK_COMMITTEE_REPORTING_RECIPIENT}}
Stakeholder distribution at policy revision:
- Heads of business units carrying remediation responsibility.
- Heads of platform and infrastructure teams.
- Engineering leadership.
- GRC, compliance, and internal audit functions.
- Procurement and vendor management (for supply-chain coverage).
- {{ADDITIONAL_STAKEHOLDERS}}
4. Identification sources and cadence
Identification is the inflow side of the programme. Every source that surfaces findings has to feed the canonical record on a documented cadence; otherwise the policy operates against a partial view. NIST SP 800-53 RA-5, PCI DSS Requirement 11.3, and ISO 27001 Annex A 8.8 each expect a documented identification cadence; this section publishes how each source feeds the queue.
Authenticated scanning:
- Targets: Tier 1 and Tier 2 production assets, with documented service-account access.
- Cadence: {{AUTH_SCAN_CADENCE}} (typical: weekly to monthly per asset class).
- Credential rotation: per the credential rotation policy or per the scanner-credential-rotation runbook.
- Coverage evidence: scanner output, last-run timestamp, last-success timestamp, authentication-failure rate.
- Reference: Authenticated scanning runbook.
External (unauthenticated) scanning:
- Targets: full internet-facing surface, including subdomains discovered through asset attribution.
- Cadence: {{EXTERNAL_SCAN_CADENCE}} (typical: weekly to monthly; PCI DSS scope quarterly with ASV).
- Reference: External scanning runbook.
Code scanning (SAST, SCA, secret scanning):
- Targets: production source repositories, infrastructure-as-code, configuration-as-code.
- Cadence: per-commit on protected branches; nightly for the dependency tree.
- Reference: SDLC vulnerability handoff workflow.
Manual review and threat modelling:
- Targets: new product features, material architecture changes, regulated-data flows.
- Cadence: per programme cadence ({{THREAT_MODEL_CADENCE}}) and per material change.
- Reference: Threat model template.
Third-party penetration testing:
- Targets: per engagement scope, prioritised against asset criticality and regulatory expectation.
- Cadence: {{PENTEST_CADENCE}} (annual minimum; PCI DSS Requirement 11.4 sets the cadence inside scope).
- Reference: Penetration testing programme charter.
Bug bounty and external researcher disclosure:
- Targets: defined scope per the Vulnerability Disclosure Policy.
- Cadence: continuous (researcher-driven).
- Reference: Vulnerability Disclosure Policy.
Vendor advisories, KEV listings, EPSS feeds, threat intelligence:
- Cadence: continuous monitoring with escalation when an advisory affects an in-scope asset.
- Reference: Threat intelligence runbook.
Internal disclosure:
- Cadence: continuous (engineer-driven, support-driven).
- Reference: Internal disclosure intake.
Regulator and supply-chain notification:
- Cadence: as received.
- Reference: Regulator response runbook.
Coverage SLOs (the programme has to operate against published coverage targets):
- Authenticated scan coverage: {{AUTH_SCAN_COVERAGE_SLO}} percent of in-scope authenticated assets covered per cycle.
- External scan coverage: {{EXTERNAL_SCAN_COVERAGE_SLO}} percent of in-scope internet-facing assets covered per cycle.
- Code scan coverage: {{CODE_SCAN_COVERAGE_SLO}} percent of production repositories covered per cycle.
- Coverage gaps and credential failures are tracked alongside open findings; a missed scan cycle is reportable.
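The coverage SLOs above can be sketched as a simple per-cycle check. This is an illustrative outline only; the source names (`authenticated`, `external`, `code`), counts, and SLO values are hypothetical placeholders, not values from the policy.

```python
# Hypothetical sketch: evaluate scan-coverage SLOs for one cycle.
# Source names, counts, and SLO thresholds are illustrative.

def coverage_percent(covered: int, in_scope: int) -> float:
    """Percent of in-scope assets covered this cycle."""
    if in_scope == 0:
        return 100.0  # an empty scope counts as fully covered
    return 100.0 * covered / in_scope

def slo_breaches(cycle: dict, slos: dict) -> list:
    """Return the coverage sources that missed their SLO this cycle."""
    breaches = []
    for source, slo in slos.items():
        covered, in_scope = cycle[source]
        pct = coverage_percent(covered, in_scope)
        if pct < slo:
            breaches.append((source, round(pct, 1), slo))
    return breaches

cycle = {"authenticated": (470, 500), "external": (198, 200), "code": (88, 100)}
slos = {"authenticated": 95.0, "external": 98.0, "code": 90.0}
print(slo_breaches(cycle, slos))  # authenticated and code miss their SLOs
```

A missed cycle then becomes a reportable record alongside open findings, as the coverage rule above requires.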
5. Classification and severity model
Classification turns identification into an actionable queue. The severity model has to be anchored to an external standard rather than internal precedent so the audit read is unambiguous. CVSS 3.1 is the durable default; KEV listings and EPSS scores adjust the picture but do not replace the base score.
Primary severity source: CVSS 3.1 base score
- Critical: CVSS 9.0 to 10.0
- High: CVSS 7.0 to 8.9
- Medium: CVSS 4.0 to 6.9
- Low: CVSS 0.1 to 3.9
- Informational: CVSS 0.0
Severity adjustment rules:
- CISA Known Exploited Vulnerabilities (KEV) listing escalates the finding one severity band (Critical caps the ladder) on internet-facing assets.
- Public exploit availability (Metasploit module, public proof-of-concept) escalates the finding one severity band on internet-facing assets.
- EPSS probability is recorded on the finding for prioritisation; an EPSS above {{EPSS_HIGH_PROBABILITY_THRESHOLD}} flags the finding for review at the next prioritisation cadence but does not adjust severity by default.
- Environmental modifiers (modified attack vector, modified privileges required, environmental impact per CVSS 3.1) may adjust the temporal or environmental score; the rule has to be applied consistently across the estate.
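The banding and escalation rules above reduce to a small, reproducible function. This is a minimal sketch of the template's own rules (CVSS 3.1 bands plus the one-band KEV/public-exploit escalation on internet-facing assets); the function and parameter names are illustrative.

```python
# Sketch of the severity model: CVSS 3.1 base-score banding plus the
# one-band escalation for KEV-listed or publicly exploited findings
# on internet-facing assets. Critical caps the ladder.

BANDS = ["Informational", "Low", "Medium", "High", "Critical"]

def base_band(cvss: float) -> str:
    """Map a CVSS 3.1 base score to the policy's severity band."""
    if cvss == 0.0:
        return "Informational"
    if cvss <= 3.9:
        return "Low"
    if cvss <= 6.9:
        return "Medium"
    if cvss <= 8.9:
        return "High"
    return "Critical"

def effective_band(cvss: float, internet_facing: bool,
                   kev_listed: bool = False,
                   public_exploit: bool = False) -> str:
    band = base_band(cvss)
    if internet_facing and (kev_listed or public_exploit):
        # Escalate one severity band; Critical caps the ladder.
        band = BANDS[min(BANDS.index(band) + 1, len(BANDS) - 1)]
    return band

print(effective_band(6.5, internet_facing=True, kev_listed=True))   # High
print(effective_band(6.5, internet_facing=False, kev_listed=True))  # Medium
```

Manual overrides and EPSS flags sit outside this function: they are recorded on the finding, not folded into the band.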
Asset criticality multiplier (applied to the SLA window in Section 7):
- Tier 1: multiplier 1.0 (tightest window).
- Tier 2: multiplier 1.0 to 1.5.
- Tier 3: multiplier 1.5 to 2.0.
- Tier 4: multiplier 2.0 or excluded from policy.
Manual override path:
- Severity may be increased or decreased by the security function with documented rationale on the finding record.
- Manual overrides are tracked in aggregate; an override rate above {{OVERRIDE_RATE_THRESHOLD}} percent triggers a programme review of severity calibration.
Deduplication and canonical record rule:
- Each underlying weakness on a given asset is one canonical finding regardless of how many sources surface it.
- A finding closed and re-discovered without material change is logged as a re-open against the original finding identifier.
- A finding closed and re-discovered with material change to the underlying weakness starts a fresh finding.
- Duplicate-detection logic is published on the dedup runbook; deduplication accuracy is measured at the governance review.
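The canonical-record rule above can be sketched as a keyed store: one finding per (asset, weakness) pair, duplicates folded into the same record, and a re-discovery of a closed finding logged as a re-open against the original identifier. The record fields and identifiers here are illustrative, not a prescribed schema.

```python
# Minimal sketch of the deduplication and canonical-record rule.
# Record shape and identifier format are hypothetical.

findings = {}  # (asset_id, weakness_id) -> canonical finding record

def ingest(asset_id: str, weakness_id: str, source: str) -> dict:
    key = (asset_id, weakness_id)
    record = findings.get(key)
    if record is None:
        # First sighting: start a canonical finding.
        record = {"id": f"FND-{len(findings) + 1}", "status": "open",
                  "sources": [source], "reopens": 0}
        findings[key] = record
    elif record["status"] == "closed":
        # Re-discovery without material change: re-open the original.
        record["status"] = "open"
        record["reopens"] += 1
        record["sources"].append(source)
    else:
        # Duplicate from another source: same canonical record.
        record["sources"].append(source)
    return record

f1 = ingest("web-01", "CVE-2024-0001", "authenticated-scan")
f2 = ingest("web-01", "CVE-2024-0001", "pentest")  # deduplicated
f1["status"] = "closed"
f3 = ingest("web-01", "CVE-2024-0001", "external-scan")  # re-open
print(f3["id"], f3["reopens"])  # same identifier, reopens == 1
```

A material change to the underlying weakness would instead mint a fresh key, starting a new finding per the rule above.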
Reference standards: NIST SP 800-30 Rev. 1 (risk assessment), CVSS 3.1 specification, CISA Stakeholder-Specific Vulnerability Categorization (SSVC) where applicable, FIRST EPSS documentation.
6. Routing and ownership rules
Routing is what makes ownership operationally real. A finding without a named remediation owner is a finding the programme cannot enforce a window against. The policy publishes the routing rule rather than the individuals so the rule survives organisational change.
Routing rule (how a finding is assigned to a named remediation owner):
- Step 1: read the affected asset from the finding record.
- Step 2: read the asset ownership mapping from the live asset register.
- Step 3: route the finding to the named remediation owner of the asset.
- Step 4: if the asset record is incomplete, route to the platform team and escalate the asset-mapping gap to the asset criticality owner.
- Step 5: notify the remediation owner through the channel named in the notifications policy.
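Steps 1 through 4 above can be sketched as a lookup with a fallback. The register contents, team names, and escalation text below are hypothetical; step 5 (notification) is out of scope for the sketch because it depends on the channel named in the notifications policy.

```python
# Sketch of the routing rule: read the asset, look up the owner in the
# asset register, fall back to the platform team on an incomplete record.
# Register contents and team names are illustrative.

ASSET_REGISTER = {
    "web-01": {"owner": "payments-eng"},
    "db-02": {"owner": None},  # incomplete record
}

def route_finding(finding: dict) -> dict:
    asset = finding["asset_id"]                # step 1: read the asset
    record = ASSET_REGISTER.get(asset, {})     # step 2: read the register
    owner = record.get("owner")
    if owner:                                  # step 3: route to the owner
        return {"owner": owner, "escalation": None}
    # step 4: incomplete record -> platform team, escalate the mapping gap
    return {"owner": "platform-team",
            "escalation": "asset-mapping gap -> asset criticality owner"}

print(route_finding({"asset_id": "web-01"})["owner"])  # payments-eng
print(route_finding({"asset_id": "db-02"})["owner"])   # platform-team
```

Because the rule routes by register lookup rather than by named individuals, it survives organisational change, which is the point the section makes.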
Asset ownership mapping cadence:
- Reviewed: {{ASSET_OWNERSHIP_REVIEW_CADENCE}} (quarterly minimum recommended).
- Owner: asset criticality owner (named in Section 3 of this policy or in the Asset Criticality Classification Policy).
- Evidence: asset ownership register; ownership change log.
- Reference: Asset ownership mapping for findings workflow.
Routing rules for shared assets:
- Production asset shared across teams (platform plus product team): primary owner is the platform team; secondary owner is the product team for product-specific findings.
- Vendor asset where the firm carries residual risk: primary owner is the vendor manager; secondary owner is the security function for compensating controls.
- Multi-business-unit asset: routed by the asset criticality owner with rationale on the routing record.
Routing for findings without a clear asset:
- Default route: vulnerability management programme owner.
- Triage outcome: route to platform team or escalate as a programme-level finding.
Routing notification rules:
- Owner notification on routing: {{OWNER_NOTIFICATION_CHANNEL}} (typical: ticket, email, platform notification).
- Owner acknowledgement SLA: {{OWNER_ACKNOWLEDGEMENT_SLA}} (typical: one business day from routing).
- Acknowledgement evidence: logged on finding activity record.
Routing audit trail:
- Every routing event is logged with timestamp, source, owner assigned, rationale.
- Re-routing events are logged with rationale and approver.
- Routing accuracy is reviewed at the governance cadence; persistent mis-routing surfaces the asset-mapping or rule gap.
7. Remediation SLAs
The remediation SLA is the time-window commitment that turns identification and routing into closure. This section either publishes the windows directly or references the dedicated SLA sub-policy. Either way, the umbrella policy makes the SLA visible to the audit and the leadership read.
SLA structure:
- Tier 1 production internet-facing or regulated-data assets carry the tightest cadence.
- Tier 2 production internal-only or non-regulated assets carry a meaningful but slightly relaxed cadence.
- Tier 3 pre-production and staging carry a best-effort cadence with documented evidence.
- Tier 4 development and sandbox are tracked for visibility but typically excluded from the formal SLA.
Default windows (apply asset criticality multiplier from Section 5):
Tier 1 - production internet-facing or regulated-data assets
- Critical residual: SLO 14 days / SLA 30 days. KEV-listed: SLO 7 days / SLA 14 days (CISA BOD 22-01 anchor).
- High residual: SLO 30 days / SLA 60 days. PCI DSS scope: SLA 30 days (PCI DSS 6.3.3).
- Medium residual: SLO 60 days / SLA 90 days.
- Low residual: SLO 90 days / SLA 180 days, or next routine maintenance window if sooner.
Tier 2 - production internal-only or non-regulated
- Critical: SLO 21 days / SLA 45 days.
- High: SLO 45 days / SLA 90 days.
- Medium: SLO 90 days / SLA 180 days.
- Low: next routine maintenance window or 180 days.
Tier 3 - pre-production and staging
- Critical: SLO 30 days / SLA 60 days.
- High: SLO 60 days / SLA 120 days.
- Medium and low: best effort, recorded on finding.
Tier 4 - development and sandbox
- Recorded on finding for visibility; outside the formal SLA.
Definitions:
- SLA (Service Level Agreement): the published commitment to leadership and audit; a breach is a logged event with a documented response.
- SLO (Service Level Objective): the internal target the programme operates against; SLO < SLA so the buffer absorbs operational variance.
Clock-start rule (select one and document explicitly):
[ ] Clock starts at discovery (most defensible to external auditors).
[ ] Clock starts at triage acceptance (more realistic operationally; requires separate triage SLA).
[ ] Clock starts at ownership assignment (most operationally honest; requires separate triage and routing SLAs).
Selected rule: {{SELECTED_CLOCK_START_RULE}}
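Once the clock-start rule is selected, the Tier 1 windows above become a deterministic deadline calculation. The sketch below uses the template's default Tier 1 days, including the KEV override anchored to BOD 22-01; substitute your published windows, and note that the `clock_start` argument is whichever event the selected rule names.

```python
# Sketch of Tier 1 deadline calculation from the default windows above.
# Day values are the template defaults; substitute your published windows.

from datetime import date, timedelta

TIER1_DAYS = {  # severity -> (slo_days, sla_days)
    "Critical": (14, 30),
    "High": (30, 60),
    "Medium": (60, 90),
    "Low": (90, 180),
}
TIER1_KEV = {"Critical": (7, 14)}  # KEV-listed override (BOD 22-01 anchor)

def tier1_deadlines(severity: str, clock_start: date,
                    kev_listed: bool = False) -> tuple:
    """Return (SLO deadline, SLA deadline) from the clock-start date."""
    windows = TIER1_DAYS[severity]
    if kev_listed:
        windows = TIER1_KEV.get(severity, windows)
    slo_days, sla_days = windows
    return (clock_start + timedelta(days=slo_days),
            clock_start + timedelta(days=sla_days))

slo, sla = tier1_deadlines("Critical", date(2025, 1, 1), kev_listed=True)
print(slo, sla)  # 2025-01-08 2025-01-15
```

Stop-the-clock conditions would pause the elapsed-days counter rather than move the deadlines; modelling that requires the event log the next subsection describes.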
Stop-the-clock conditions (defensible only):
[ ] Vendor patch unavailable (vendor case number recorded).
[ ] Change-window restriction (next change-window named).
[ ] Evidence-collection delay outside the security function (forensic preservation, regulatory hold, legal review).
[ ] Active incident declared on the affected asset.
[ ] Third-party dependency the firm does not control.
[ ] Compensating-control deployment in lieu of patch (routed through exception path).
Conditions that are NOT defensible stop-the-clock and route through Section 8 instead: engineering capacity, sprint commitments, internal disagreement on severity, pending business decision, pending budget approval.
Reference: For the full SLA structure, escalation ladder, and breach reporting cadence, see the dedicated Vulnerability Remediation SLA Policy referenced from this document. The umbrella VM policy publishes the headline windows; the SLA sub-policy publishes the operational detail.
8. Exception governance
Every VM policy needs an explicit exception path that names how a finding moves from the SLA window into the exception register, who approves at each residual rating, and what evidence the exception requires. The exception path is what keeps the SLA from becoming a fiction the engineering organisation routinely misses.
When an exception is the right answer:
- Remediation is not feasible inside the SLA window for documented technical or business reasons.
- The reason is not engineering capacity (capacity issues are programme-level escalations, not exceptions).
- A compensating control reduces residual risk to a level the business is willing to carry.
Exception approver authority (by residual rating):
- Low residual: security manager or equivalent.
- Medium residual: head of security or equivalent.
- High residual: CISO or risk committee.
- Critical residual: CISO and executive sponsor.
Required evidence per exception:
- Linked finding identifier (canonical record).
- Plain-language risk summary the executive approver can read.
- Original CVSS 3.1 vector and base score.
- Compensating controls in place with control reference, owner, verification method, failure mode.
- Residual likelihood and residual impact after compensating controls.
- Named risk owner and named security approver.
- Hard expiry date (default ladder: critical residual 6 months, high 12 months, medium 12 to 24 months, low 24 months).
- Trigger conditions that invalidate the exception inside the calendar window.
- Lifecycle audit trail.
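The default expiry ladder above is a straightforward mapping from residual rating to a hard expiry date. A minimal sketch, assuming the medium band is taken at its 12-month lower bound (adjust to your policy); the function name is illustrative.

```python
# Sketch of the default exception expiry ladder: residual rating ->
# hard expiry in whole months from approval. Medium is taken at the
# lower bound of its 12-to-24-month band here.

import calendar
from datetime import date

EXPIRY_MONTHS = {"critical": 6, "high": 12, "medium": 12, "low": 24}

def exception_expiry(residual: str, approved: date) -> date:
    """Hard expiry date: approval date plus the ladder's whole months."""
    months = EXPIRY_MONTHS[residual.lower()]
    total = approved.month - 1 + months
    year, month = approved.year + total // 12, total % 12 + 1
    # Clamp the day where the target month is shorter (e.g. 31st -> 28th).
    day = min(approved.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(exception_expiry("critical", date(2025, 3, 15)))  # 2025-09-15
```

The calendar expiry is the outer bound only; the trigger conditions above can invalidate the exception earlier, which is why both are recorded on the register entry.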
Exception register:
- All approved exceptions are recorded in the security exception register.
- The register is reviewed at the cadence in Section 10.
- Exceptions are closed by remediation, renewal with fresh approval, escalation to executive risk acceptance, or cancellation. Silent expiry extension is not a closure.
- Exception count by residual rating is reported at every governance review.
Exception accumulation thresholds (programme-level):
- Active exception count growing materially across consecutive periods triggers a programme review.
- Aggregate residual risk read against the firm risk appetite at the audit committee cadence.
Reference: Security Exception Register Policy and Risk Acceptance Policy (referenced).
9. Reporting cadence and metrics
Metrics turn the policy into observable performance. Pair every metric with a frequency and a recipient so the reporting trail does not depend on memory. Audit committees read metrics that have stable definitions across reporting cycles; redefining metrics mid-cycle is the most common reason a programme loses the trend line.
Operational metrics (reported monthly to vulnerability management programme owner and head of security):
- SLA-bound closure rate per severity band, per asset tier.
- Aged-queue distribution (count of open findings past 30, 60, 90, 180 days).
- Inflow per period (findings raised) versus closure per period (findings closed).
- Mean time to detect (MTTD) and mean time to remediate (MTTR) per severity band.
- Exception count by residual rating, broken into not-yet-expired, expired, renewed.
- Compensating-control count, with median age.
- Stop-the-clock count and aggregate duration, by category.
- Breach count and breach rate per severity band.
- Re-open rate (findings closed and re-opened by a subsequent scan).
- Retest cycle time (closure to retest verification).
- Severity override count and rate.
- Coverage SLO performance (authenticated scan coverage percent, external scan coverage percent, code scan coverage percent).
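Two of the metrics above, the aged-queue distribution and the breach rate per severity band, can be sketched directly from the finding records. The finding shape below is illustrative, not a prescribed schema.

```python
# Sketch of two operational metrics: aged-queue distribution and
# per-band breach rate. Finding field names are illustrative.

from datetime import date

def aged_queue(findings: list, today: date) -> dict:
    """Count open findings past each ageing threshold (days)."""
    buckets = {30: 0, 60: 0, 90: 0, 180: 0}
    for f in findings:
        if f["status"] != "open":
            continue
        age = (today - f["opened"]).days
        for threshold in buckets:
            if age > threshold:
                buckets[threshold] += 1
    return buckets

def breach_rate(findings: list, severity: str) -> float:
    """Share of closed findings in a band that closed past their SLA."""
    closed = [f for f in findings
              if f["status"] == "closed" and f["severity"] == severity]
    if not closed:
        return 0.0
    breached = sum(1 for f in closed if f["closed"] > f["sla_deadline"])
    return breached / len(closed)

findings = [
    {"status": "open", "opened": date(2025, 1, 1), "severity": "High"},
    {"status": "closed", "opened": date(2025, 1, 1), "closed": date(2025, 2, 1),
     "sla_deadline": date(2025, 1, 20), "severity": "High"},
    {"status": "closed", "opened": date(2025, 1, 5), "closed": date(2025, 1, 15),
     "sla_deadline": date(2025, 3, 1), "severity": "High"},
]
print(aged_queue(findings, date(2025, 4, 15)))  # {30: 1, 60: 1, 90: 1, 180: 0}
print(breach_rate(findings, "High"))            # 0.5
```

Reporting both the rate and the absolute count, as the denominator rule later in this section requires, keeps the trend line readable as the estate grows.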
Governance metrics (reported quarterly to audit committee):
- All operational metrics, summarised over the rolling quarter.
- Trend lines on the rolling twelve months for each metric so direction is visible.
- Material breaches with reason aggregation.
- Exception register growth and closure trend.
- Programme scope changes in the period (new asset class, new geography, new finding source).
- Policy-level deviations or revisions in the period.
Board metrics (reported quarterly or as material change requires):
- Headline programme performance (closure rate, breach rate, exception count) per severity band.
- Twelve-month trend lines.
- Material policy revisions and the rationale.
- Material residual risks the exception register carries.
Metric definition stability:
- All metrics are defined in this policy and revised only at policy revision points.
- A metric whose definition changes is flagged in the report so the trend line is interpreted with the change in mind.
- A metric whose denominator changes (asset estate growth, scope expansion) is reported with both the rate and the absolute count.
Distribution list:
- Operational reports: vulnerability management programme owner, head of security, business partners. Frequency: monthly.
- Governance reports: audit committee, CISO, risk committee. Frequency: quarterly.
- Board reports: board, executive sponsor. Frequency: quarterly or as material change requires.
10. Governance review cadence
Performance review answers whether the programme is hitting the cadence the policy publishes. Policy review answers whether the underlying environment still matches the rule. Both have to land in the document trail; one without the other produces either a policy nobody operates against or a programme that operates against rules the audit will challenge.
Performance review (the programme is running against the policy):
- Frequency: monthly to vulnerability management programme owner; quarterly to audit committee; annually to full board.
- Inputs: operational metrics from Section 9, breach event log, exception register, coverage SLO performance.
- Outputs: documented action register; performance trend report; escalations to the board if material.
- Recipient: audit committee receives the quarterly summary; full board receives the annual summary.
Policy review (the policy still matches the environment):
- Frequency: annual minimum; triggered by material change.
- Material change includes:
- New framework adoption (firm enters PCI DSS scope, achieves SOC 2 attestation, expands into a regulated geography, signs the EU AI Act covered-system declaration).
- New regulatory expectation (CISA directive, sector-specific rule, EU regulation update, updated framework version).
- Material asset estate change (new business unit, geography, tenant, product line, acquisition).
- Material threat environment change (new exploit class, sustained campaign on the firm sector).
- Material delivery model change (new scanner, new integration, new tooling, new tenancy model).
- Inputs: framework drift assessment; estate change record; threat environment briefing; programme performance trend.
- Outputs: policy revision draft; redline against prior version; revision rationale.
- Approver: governance approver from Section 3.
- Distribution: full distribution list rebroadcast on revision.
Audit committee questions the review answers:
- Is programme performance trending up, flat, or down across the rolling twelve months.
- Are exception count and compensating-control count trending up, flat, or down.
- Are breach reasons concentrated (capacity, dependency, severity dispute) and are the structural fixes landing.
- Is identification coverage holding (authenticated scan coverage, external scan coverage, code scan coverage).
- Is policy review keeping pace with framework adoption and estate change.
- Is the policy operating against the right severity definitions and the right asset tiering.
Board questions the review answers:
- Is the vulnerability management programme delivering durable risk reduction or accumulating exceptions.
- Is policy revision tracking framework drift and regulatory change.
- Are residual risks from exception accumulation and compensating-control accumulation visible to leadership.
- Is the programme adequately resourced for the inflow the estate produces.
11. Document control and retention
Versioning the policy is what makes the audit trail reproducible. A reviewer reading the current policy should be able to reconstruct what the rule was at any historical reporting cycle without depending on memory. Hold the revision history on the document so the trend lines in the metrics section can be read against the rules that were in force at each point.
Version control:
- Policy version number, effective date, last review date, next review date recorded at the top of every revision.
- Revision history table maintained on the policy with version, date, summary of change, approver, rationale.
- Prior versions retained according to the document control schedule below.
Revision triggers (in addition to the annual review):
- Audit recommendation requires policy change.
- Framework or regulation update requires policy change.
- Programme performance review identifies a structural gap requiring rule change.
- Material estate change requires asset tier or scope change.
- Material threat change requires severity model or window change.
- Material delivery model change (new scanner, new integration, new tenancy) requires policy change.
Revision process:
1. Policy owner drafts revision; redline against current version.
2. Vulnerability management programme owner reviews operational impact.
3. Governance approver signs the revision.
4. Revised policy distributed to full distribution list.
5. Effective date set with appropriate notice (default 30 days; emergency revisions effective immediately with documented rationale).
6. Revision logged in the revision history table; prior version archived per the retention schedule.
Document control:
- Storage location: {{POLICY_STORAGE_LOCATION}}
- Access controls: {{POLICY_ACCESS_CONTROLS}}
- Retention: {{POLICY_RETENTION_PERIOD}} (typically 7 years for financial-services and healthcare programmes; align to firm record retention policy).
- Distribution list: {{POLICY_DISTRIBUTION_LIST}}
Related documents the policy references:
- Information Security Policy (parent).
- Vulnerability Remediation SLA Policy (sibling, operationalises Section 7).
- Security Exception Register Policy (sibling, operationalises Section 8).
- Risk Acceptance Policy (sibling, governs the residual-risk decisions feeding Section 8).
- Audit Evidence Retention Policy (governs the retention of the evidence the programme produces).
- Vulnerability Disclosure Policy (governs the inbound researcher channel feeding Section 4).
- Asset Criticality Classification Policy (governs the tiering applied in Section 2).
- Incident Response Plan (operationalises the active-incident stop-the-clock condition in Section 7).
12. Signatures and acknowledgement
Sign the policy at publication and at every material revision. The signature trail is what makes the rule defensible at audit; an unsigned policy is treated as a draft regardless of how widely it is followed in practice. Acknowledgement from key stakeholders documents that the rule is known to the people who carry it.
Approval signatures (required at publication and at material revision):
Policy owner
- Name: {{POLICY_OWNER_SIGNATURE_NAME}}
- Role: {{POLICY_OWNER_SIGNATURE_ROLE}}
- Date: {{POLICY_OWNER_SIGNATURE_DATE}}
Vulnerability management programme owner
- Name: {{VM_PROGRAMME_OWNER_SIGNATURE_NAME}}
- Role: {{VM_PROGRAMME_OWNER_SIGNATURE_ROLE}}
- Date: {{VM_PROGRAMME_OWNER_SIGNATURE_DATE}}
Governance approver
- Name: {{GOVERNANCE_APPROVER_SIGNATURE_NAME}}
- Role: {{GOVERNANCE_APPROVER_SIGNATURE_ROLE}}
- Date: {{GOVERNANCE_APPROVER_SIGNATURE_DATE}}
Acknowledgement (recorded for stakeholder groups; renewed at material revision):
- Heads of business units carrying remediation responsibility: {{BUSINESS_UNIT_ACKNOWLEDGEMENTS}}
- Heads of platform and infrastructure teams: {{PLATFORM_TEAM_ACKNOWLEDGEMENTS}}
- Engineering leadership: {{ENGINEERING_LEADERSHIP_ACKNOWLEDGEMENTS}}
- GRC and compliance function: {{GRC_ACKNOWLEDGEMENTS}}
- Internal audit function: {{INTERNAL_AUDIT_ACKNOWLEDGEMENTS}}
- Procurement and vendor management (for supply-chain coverage): {{PROCUREMENT_ACKNOWLEDGEMENTS}}
Effective date once all approval signatures are collected: {{EFFECTIVE_DATE_FINAL}}
Acknowledgement evidence:
- Signed acknowledgement records stored with the policy version.
- Acknowledgement renewal cadence: at every material revision plus the annual training cycle.
- New stakeholder onboarding includes policy acknowledgement.
Eight failure modes the policy has to design against
The vulnerability management policy fails the audit read in recognisable patterns. Each failure has a structural fix the template above is designed to enforce. Read this list before you customise the template so the customisation does not weaken the discipline that makes the policy defensible.
The policy lives as a slide rather than a signed document
A leadership briefing names the programme verbally; a slide deck shows the cadence. There is no signed policy, no version history, no revision trail. The first audit question (show me the document that names this programme) cannot be answered. The fix is one signed policy, one version history, one approver per revision.
Scope is implied rather than published
The programme covers production applications by convention, but the policy does not name asset classes, environments, business units, or the boundary against vendor systems. New asset classes (cloud workloads, mobile apps, code repositories) drift in or out of the programme without a documented decision. The fix is to publish the in-scope asset classes, environments, business units, geographies, and finding sources explicitly, and to publish the out-of-scope rule alongside.
Identification cadence is undeclared
The programme runs scans on the cadence the platform team can absorb rather than on a published cadence the policy commits to. Coverage gaps and missed cycles are invisible because the policy does not declare what coverage looks like. The fix is to publish the cadence per source and a coverage SLO so a missed cycle becomes a reportable event rather than an internal scheduling miss.
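The coverage SLO check this fix describes reduces to a per-source comparison against the published target. A minimal sketch, assuming an illustrative record shape (the field names and SLO values are not taken from any particular scanner or from SecPortal):

```python
from dataclasses import dataclass

# Hypothetical per-source coverage record; field names are illustrative.
@dataclass
class SourceCoverage:
    source: str            # e.g. "authenticated-scan"
    covered_assets: int    # assets scanned inside the published cadence
    in_scope_assets: int   # assets the policy declares in scope
    slo_percent: float     # published coverage SLO for this source

def missed_coverage(sources: list[SourceCoverage]) -> list[str]:
    """Return the sources whose coverage fell below the published SLO.

    A non-empty result is a reportable event under the policy,
    not an internal scheduling miss.
    """
    breaches = []
    for s in sources:
        coverage = 100.0 * s.covered_assets / s.in_scope_assets
        if coverage < s.slo_percent:
            breaches.append(s.source)
    return breaches
```

Because the SLO is declared per source, a missed authenticated-scan cycle surfaces independently of external-scan coverage rather than being averaged away.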
Routing depends on tribal knowledge
A finding is routed to the person who happens to know the asset rather than to the named owner of the asset. Findings on assets without a clear owner stall in the queue. The fix is to publish the routing rule explicitly, maintain the asset ownership mapping on the live asset register, and route through the rule rather than through memory. The asset ownership register is the load-bearing artefact behind the routing rule.
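Routing through the rule rather than through memory can be sketched in a few lines; the register structure, asset names, and fallback queue below are illustrative assumptions, not SecPortal fields:

```python
# Illustrative asset ownership register keyed by asset identifier.
ASSET_REGISTER = {
    "payments-api": {"owner": "team-payments", "tier": "critical"},
    "marketing-site": {"owner": "team-web", "tier": "low"},
}

# Assets without a named owner route to a visible gap queue instead of
# stalling silently in an individual's backlog.
UNOWNED_QUEUE = "asset-register-gaps"

def route_finding(asset_id: str) -> str:
    """Resolve the remediation owner from the live asset register."""
    entry = ASSET_REGISTER.get(asset_id)
    if entry is None or not entry.get("owner"):
        # A missing owner is itself a finding against the register.
        return UNOWNED_QUEUE
    return entry["owner"]
```

The design point is the fallback: an unowned asset produces a routable, countable event against the register rather than an invisible stall in the finding queue.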
Severity overrides flow without rationale
The vulnerability management function reduces severity from High to Medium on findings whose remediation will not fit the High-severity window. The override has no rationale recorded; the override rate climbs; the SLA performance looks better than the underlying picture. The fix is a documented severity override path with rationale on the finding and an aggregate override rate that triggers programme review when it crosses a threshold.
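The fix has two mechanical parts: a rationale gate on each override and an aggregate rate that trips a review. A minimal sketch under assumed field names and an illustrative threshold:

```python
OVERRIDE_REVIEW_THRESHOLD = 0.10  # illustrative: >10% overridden triggers programme review

def apply_override(finding: dict, new_severity: str, rationale: str) -> dict:
    """Apply a severity override only when a rationale is recorded on the finding."""
    if not rationale.strip():
        raise ValueError("severity override requires a documented rationale")
    finding["severity"] = new_severity
    finding["override_rationale"] = rationale
    return finding

def override_rate(findings: list[dict]) -> float:
    """Aggregate override rate across a reporting period."""
    if not findings:
        return 0.0
    return sum(1 for f in findings if f.get("override_rationale")) / len(findings)

def programme_review_due(findings: list[dict]) -> bool:
    return override_rate(findings) > OVERRIDE_REVIEW_THRESHOLD
```

The gate makes a rationale-free override impossible to record, and the aggregate rate turns a pattern of individually defensible overrides into a reviewable programme signal.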
Exceptions accumulate without expiry discipline
Exceptions are filed and approved, then sit in the register past their original expiry while the underlying remediation is not delivered. The exception count grows; the residual risk grows; the leadership read of programme performance does not reflect the underlying picture because the breach evidence has been replaced by exception coverage. The fix is hard expiry per residual rating, renewal that requires fresh evidence and fresh approval, and an exception accumulation threshold that triggers programme review.
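Hard expiry per residual rating and an accumulation threshold are both simple checks against the register. A sketch, assuming illustrative expiry windows and threshold (the real values belong in Section 8 of the policy, not here):

```python
from datetime import date, timedelta

# Illustrative hard-expiry windows per residual rating, in days.
EXPIRY_DAYS = {"high": 30, "medium": 90, "low": 180}

# Illustrative active-exception count that triggers programme review.
ACCUMULATION_THRESHOLD = 25

def expired_exceptions(exceptions: list[dict], today: date) -> list[str]:
    """Return exception IDs past the hard expiry for their residual rating."""
    out = []
    for e in exceptions:
        limit = e["approved_on"] + timedelta(days=EXPIRY_DAYS[e["residual_rating"]])
        if today > limit:
            out.append(e["id"])
    return out

def accumulation_review_due(exceptions: list[dict]) -> bool:
    """True when the active register has grown past the review threshold."""
    return len(exceptions) > ACCUMULATION_THRESHOLD
```

An expired exception surfaces for renewal (fresh evidence, fresh approval) rather than continuing to cover the breach by default.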
Metrics definitions change between reporting cycles
A metric definition is adjusted to handle a programme change (new finding source, new asset class, new severity model). The trend line breaks; the audit committee read loses comparability with prior cycles; the underlying performance change is hidden inside the definition change. The fix is metric definition stability, with definitions revised only at policy revision points and changes flagged on the report so trend lines are interpreted with the change in mind.
Policy review trails framework adoption
The firm achieves SOC 2 attestation, enters PCI DSS scope, or expands into a regulated geography without a corresponding policy revision. The programme runs against rules that no longer match the regulatory expectation; the next audit reads policy drift. The fix is a defined material change list that triggers policy review and a documented annual minimum so the policy keeps pace with the operational and regulatory environment.
Ten questions the quarterly governance review has to answer
Operational review keeps the programme on top of breaches, exception flow, and coverage gaps. Governance review answers whether the policy is delivering durable risk reduction or accumulating exceptions and breaches the audit will read as policy drift. Run these ten questions at every quarterly review and capture the answers in the governance record.
1. What is the SLA-bound closure rate per severity band over the rolling twelve months, and is the trend line up, flat, or down?
2. How many findings breached the SLA in the period, and what was the aggregate breach reason distribution (capacity, dependency, exception-pending, severity dispute, other)?
3. What is the aged-queue distribution at the end of the period (count past 30, 60, 90, 180 days), and how does it compare to the prior period?
4. How many active exceptions are in the register, broken down by residual rating, and how many are approaching expiry within 60 days?
5. How many compensating controls are live, what is their median age, and how many would the policy retire if the underlying remediation shipped?
6. What is the inflow-versus-closure ratio per severity band, and is the programme on track to absorb the inflow inside the published windows?
7. How is identification coverage tracking against the SLO (authenticated scan coverage percent, external scan coverage percent, code scan coverage percent)?
8. How many manual severity overrides happened in the period, what was the override rate, and is calibration drifting away from the CVSS-plus-environmental baseline?
9. How many findings re-opened in the period, what was the re-open rate, and is verification discipline adequate to support the published SLA?
10. Has any framework, regulation, scanner, integration, asset estate, or threat environment change in the period triggered policy review, and is the review on schedule?
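Several of the ten questions reduce to plain aggregations over the finding record. A minimal sketch of questions 3 and 6, assuming nothing more than a list of open dates and period counts (no particular tracker schema):

```python
from datetime import date

AGE_BUCKETS = (30, 60, 90, 180)  # days, matching question 3

def aged_queue(opened_dates: list[date], today: date) -> dict:
    """Aged-queue distribution: open findings past each ageing threshold."""
    ages = [(today - opened).days for opened in opened_dates]
    return {f">{b}d": sum(1 for a in ages if a > b) for b in AGE_BUCKETS}

def inflow_closure_ratio(opened: int, closed: int) -> float:
    """Inflow-versus-closure ratio: above 1.0 the queue is growing."""
    return opened / closed if closed else float("inf")
```

Running these against the live record at each quarterly review, rather than rebuilding them in a spreadsheet, is what keeps the governance answers comparable across periods.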
How the policy pairs with SecPortal
The template above is copy-ready as a standalone artefact. If your team already runs finding tracking, remediation, and compliance evidence on one workspace, policy performance reporting becomes a byproduct of the work rather than a separate metrics project. SecPortal pairs every finding to a versioned engagement record through findings management, so the SLA-bound closure rate, the aged-queue distribution, the exception count, and the breach evidence are one query against the same record rather than a reconstruction from spreadsheets. The activity log captures the timestamped chain of state changes by user, so the elapsed time between discovery, triage acceptance, ownership assignment, fix deployment, retest passed, and closure is observable rather than asserted.
The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export, so the policy performance can be sliced by framework when an auditor asks for the closure rate against a specific control set. The continuous monitoring feature runs daily, weekly, biweekly, and monthly schedules so the inflow side of the policy is observable rather than asserted; the inflow-versus-closure ratio in Section 9 is a query against the live record rather than a spreadsheet aggregate. The external scanning, authenticated scanning, and code scanning features cover the identification surface the policy declares in Section 4, so the coverage SLOs read against actual scanner coverage rather than against a separate coverage spreadsheet.
The document management feature carries the policy file and the revision history alongside the operational record, so the policy version in force at any reporting cycle is reconstructible from one record rather than depending on a separate document store. The team management feature carries role-based access control that decides who can apply a severity override, who can record a stop-the-clock event, who can approve an exception at each residual rating, and who runs the governance review. The AI report generation workflow produces leadership summaries from the same engagement data so the audit committee read of programme performance and the operational read are the same record rather than two independently-edited documents that diverge between reporting cycles.