Vulnerability Management Program Scorecard: six domains, five tiers, one defensible read
A free, interactive vulnerability management programme scorecard. Score six capability domains (governance and ownership, asset and scope coverage, detection and intake, prioritisation and risk calibration, remediation throughput and SLA discipline, verification and audit trail) on the five-tier maturity scale (Initial, Developing, Defined, Managed, Optimised). The tool computes per-domain scores, an overall maturity rating, and a tier interpretation that turns a fuzzy "is our VM programme any good" question into one defensible read for leadership review and audit-committee briefing. Anchored to ISO/IEC 27001 Annex A 8.8, SOC 2 CC7.1, PCI DSS Requirements 6.3 and 11.3, NIST SP 800-53 RA-5 and SI-2, NIST SP 800-40r4, and CISA BOD 22-01.
Score the discipline against the live programme, not against memory
SecPortal pairs the operator queue and the leadership view to one engagement record, so the scorecard reads against findings, scans, retests, and audit evidence rather than against a separate spreadsheet.
Free plan available forever. No credit card required.
Score six domains, read the maturity tier, fix the lowest-scoring discipline first
The scorecard rates the operating discipline behind a vulnerability management programme across six durable domains. Score each statement on the five-tier maturity scale (Initial, Developing, Defined, Managed, Optimised). The tool calculates per-domain scores, an overall maturity rating, and a tier interpretation. The output is one defensible read of where the programme sits and a focused improvement target rather than a list of everything that could be better.
1 Initial: Ad hoc, reactive, dependent on individual heroics. No documented discipline; outcomes vary with the people on duty.
2 Developing: Discipline exists in policy but is unevenly applied across teams or asset classes. Owners are partial; evidence is patchy.
3 Defined: Documented discipline, named owners, repeatable workflow. Evidence is captured but not reviewed on a published cadence.
4 Managed: Discipline is measured against published targets. Evidence is current; deviations are visible; leadership reviews on cadence.
5 Optimised: Discipline improves continuously against measured outcomes. Lessons feed back into policy and tooling; the programme adapts.
Current overall score
0.00 / 5.00
Programme tier
Initial-leaning programme
Most domains operate ad hoc with limited documented discipline. The first improvement is to define ownership and write the policy. Aim to push two domains to Defined within two quarters.
Statements answered: 0 / 34
Copy the scored result as text for your programme review or export pack.
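For teams that want to sanity-check the roll-up outside the tool, here is a minimal sketch of the arithmetic implied above: each statement is rated 1 to 5 on the tier scale, a domain score is the mean of its answered statements, and the overall score is the mean of the domain scores. The example ratings, the rounding, and the tier cut-offs are illustrative assumptions, not the tool's exact behaviour.

```python
from statistics import mean

# Tier scale: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimised.
TIERS = ["Initial", "Developing", "Defined", "Managed", "Optimised"]

def domain_score(ratings: list[int]) -> float:
    """Mean of the answered statements in one domain, each rated 1-5."""
    return round(mean(ratings), 2)

def overall_score(domains: dict[str, list[int]]) -> float:
    """Mean of the per-domain scores."""
    return round(mean(domain_score(r) for r in domains.values()), 2)

def tier_label(score: float) -> str:
    """Map a score to the nearest tier name (the cut-offs are an assumption)."""
    return TIERS[min(4, max(0, round(score) - 1))]

ratings = {
    "Governance and ownership":            [3, 3, 2, 4, 3],
    "Asset and scope coverage":            [2, 3, 2, 2, 1],
    "Detection and intake":                [3, 3, 2, 3, 2, 3],
    "Prioritisation and risk calibration": [2, 2, 1, 3, 2, 2],
    "Remediation throughput and SLA":      [3, 2, 2, 2, 1, 2],
    "Verification and audit trail":        [3, 3, 2, 3, 2, 2],
}
score = overall_score(ratings)
print(f"{score:.2f} / 5.00 -> {tier_label(score)}-leaning programme")
```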
1. Governance and ownership
Documented policy, named owners, leadership cadence, and a clear seam between the security function and the asset owners who run the fix. Without governance, every other domain plateaus.
A vulnerability management policy is published, version-controlled, reviewed annually, and approved by the security leader.
Optimised reads: Policy is owned by a named role, reviewed against current threat conditions, and is the source of truth the operator queue runs against.
A named role owns the vulnerability management programme end to end, with documented responsibilities for detection, prioritisation, remediation, and verification.
Optimised reads: A single accountable owner exists with written responsibilities for the whole lifecycle and is the named escalation point for breaches.
Every asset owner, application owner, and service owner is mapped to the vulnerability management programme with named contacts and a documented intake path.
Optimised reads: There is a complete map from finding to fixing team, with an explicit handoff and a documented escalation when ownership is unclear.
Leadership reviews vulnerability management posture on a published cadence (weekly operational, monthly programme, quarterly leadership) using the same record the operators run on.
Optimised reads: Cadence is documented, attendance is recorded, decisions are captured, and the operator queue and the leadership view derive from one engagement record.
Budget for vulnerability management tooling, scanners, training, and remediation capacity is set annually with input from the programme owner.
Optimised reads: Spend is justified against measured outcomes (closure rate, breach rate, coverage), not anecdotes; capacity gaps surface in budget cycles.
2. Asset and scope coverage
Every asset class in scope has a named owner, documented coverage, and a defined mechanism for detecting scope drift. Findings against unowned assets are the most common silent gap. (A small drift-check sketch follows the statements in this domain.)
A maintained asset inventory exists for every asset class in scope (external attack surface, internal infrastructure, applications, code repositories, cloud-hosted resources).
Optimised reads: Inventory is reconciled on a published cadence against discovery sources, gaps are flagged, and inventory ownership is named.
Every asset class in scope has a named owner, a documented criticality, and an explicit handoff path for findings.
Optimised reads: Asset-class ownership is unambiguous; criticality drives prioritisation; the handoff is operationally tested rather than documentary.
Scope drift (new assets added without coverage, decommissioned assets still scanned, ownership changes without handoff) is detected within one cadence window.
Optimised reads: Drift is detected by a documented mechanism; the next cadence run picks up new scope; there are no orphan findings against unowned assets.
Third-party assets, vendor-managed services, and shared-responsibility cloud surfaces have a documented detection and intake path.
Optimised reads: Third-party coverage is mapped to the responsibility split; vendor advisories are ingested; shared cloud findings have a named receiver.
Shadow IT, unmanaged repositories, and unsanctioned SaaS are detected through a documented mechanism rather than incident discovery.
Optimised reads: Discovery runs proactively; unmanaged surface is sized; the programme has a documented onboarding path back into managed scope.
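As a concrete illustration of the drift statement above, a minimal sketch that reconciles the maintained inventory against discovery output and scan targets. The asset names and data shapes are assumptions for illustration, not a real schema.

```python
# Minimal scope-drift check: reconcile the maintained inventory against what
# discovery and the last scan run actually saw. Names are illustrative.
inventory  = {"web-01", "web-02", "db-01"}                   # assets with named owners
discovered = {"web-01", "web-02", "db-01", "web-03"}         # seen by discovery this window
scanned    = {"web-01", "db-01", "db-legacy"}                # targets in the last scan run
finding_owner = {"web-01": "platform-team", "web-03": None}  # owner per asset with open findings

new_without_coverage = discovered - inventory   # drift: new assets not yet owned or covered
scanned_out_of_scope = scanned - inventory      # drift: scan targets no longer in the inventory
orphan_findings = sorted(a for a, owner in finding_owner.items() if owner is None)

print("new assets without coverage:", sorted(new_without_coverage))
print("scanned but outside inventory:", sorted(scanned_out_of_scope))
print("findings against unowned assets:", orphan_findings)
```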
3. Detection and intake
Scanners run on a documented cadence with documented coverage. Findings from third parties, advisories, and bug bounty land on the same operator record. Detection that does not become an intake is detection that does not move. (A deduplication sketch follows the statements in this domain.)
External, authenticated, code, and cloud scanners run on documented cadences with named owners; cadence drift is visible.
Optimised reads: Each cadence is set per asset class, anchored to framework expectations, and the scan diff between runs is reviewed.
Scanner coverage is documented per asset class, with named blind spots and the intake path that compensates for each blind spot.
Optimised reads: Coverage is visible to leadership; blind spots are explicitly accepted with compensating intake; coverage gaps narrow over time.
Every scanner finding is validated against an evidence pack before it enters the priority queue; false positives are recorded against the rule rather than against the finding.
Optimised reads: Validation is a documented workflow; false-positive learning feeds back into scan tuning; tester-confirmed findings are distinguished from scanner-only.
Third-party pentest, bug bounty, advisory ingestion, and customer-reported findings land on the same operator record as scanner findings.
Optimised reads: Source-agnostic intake exists; the operator queue is one queue; the lineage to the source is preserved.
Findings across scanners and engagements are deduplicated against a persistent finding identifier so the same vulnerability does not generate parallel work.
Optimised reads: Deduplication is automatic where possible and reviewed where ambiguous; the persistent identifier survives engagement boundaries.
New vulnerability advisories (CISA KEV additions, vendor advisories, open-source CVEs) are ingested within one business day and triaged against the live asset inventory.
Optimised reads: Ingestion is documented; KEV additions trigger a same-day queue review; the time from advisory to triaged finding is measured.
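To make the persistent-identifier statement above concrete, a minimal sketch that derives a stable key from the attributes that define "the same vulnerability on the same surface", so two intake sources collapse onto one queue entry while lineage to each source is preserved. The key composition and field names are illustrative assumptions, not SecPortal's scheme.

```python
import hashlib

def finding_key(asset: str, location: str, vuln_id: str) -> str:
    """Stable identifier for the same vulnerability on the same surface,
    independent of which scanner or engagement reported it (illustrative)."""
    raw = f"{asset.lower()}|{location.lower()}|{vuln_id.upper()}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# Two intake sources reporting the same issue collapse onto one queue entry.
incoming = [
    {"source": "external-scan", "asset": "app.example.com", "location": "/login", "vuln_id": "CVE-2024-0001"},
    {"source": "pentest",       "asset": "app.example.com", "location": "/login", "vuln_id": "CVE-2024-0001"},
]
queue: dict[str, list[str]] = {}
for f in incoming:
    key = finding_key(f["asset"], f["location"], f["vuln_id"])
    queue.setdefault(key, []).append(f["source"])   # preserve lineage to each source

for key, sources in queue.items():
    print(key, "reported by", sources)              # one work item, two sources
```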
4. Prioritisation and risk calibration
CVSS is the start, not the end. Calibrate with EPSS, KEV, asset criticality, exposure, and compensating controls so the priority queue reflects business risk rather than only inherent severity. (A calibration sketch follows the statements in this domain.)
Every finding has a CVSS 3.1 base vector and a calibrated severity that records environmental and temporal context.
Optimised reads: CVSS is captured as the vector, not the rounded score; environmental calibration is documented; the rationale is on the finding.
EPSS exploit-likelihood percentile is captured against findings where it materially changes priority and is reviewed on advisory cycles.
Optimised reads: EPSS is one of the inputs to the priority decision; high EPSS percentiles are treated as priority signals even when CVSS is lower.
CISA KEV listing is recorded against findings and triggers an SLA tier change when a listing is added while the finding is open.
Optimised reads: KEV state is current; KEV-listed findings have a documented SLA floor (CISA BOD 22-01 alignment); KEV is reviewed at every advisory cycle.
Asset criticality (regulated data, customer-facing exposure, business-critical service) is captured on the engagement scope and feeds the priority decision.
Optimised reads: Asset tier is documented per asset class; priority calibration uses asset tier as a multiplier; tiering is reviewed annually.
Exposure and reachability (internet-facing, authenticated only, internal lateral, network-segmented) are captured on findings and upgrade or downgrade priority accordingly.
Optimised reads: Exposure is a recorded field on the finding; reachability is reviewed; the priority decision separates exposed from segmented.
Compensating controls (WAF rules, network segmentation, monitoring, MFA) are captured against findings and recorded as the basis for residual-risk calibration.
Optimised reads: Compensating controls are linked to findings, are re-validated at each priority review, and inform the exception register where they extend SLAs.
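The calibration sketch promised above: a rough illustration of how CVSS, EPSS, KEV state, asset tier, exposure, and compensating controls might combine into one queue ordering. The weights and field names are assumptions chosen to show the shape of the decision, not SecPortal's algorithm or any published standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss_base: float            # CVSS 3.1 base score derived from the vector
    epss_percentile: float      # 0.0 - 1.0 exploit-likelihood percentile
    kev_listed: bool            # present on the CISA KEV catalogue
    asset_tier: int             # 1 = business-critical ... 3 = low criticality
    internet_facing: bool       # exposure / reachability
    compensating_control: bool  # e.g. WAF rule, segmentation, MFA

def priority(f: Finding) -> float:
    """Blend inherent severity with exploitability, exposure, asset value,
    and compensating controls. Weights are illustrative, not prescriptive."""
    score = f.cvss_base
    if f.kev_listed:                              # KEV listing is a hard priority signal
        score += 3.0
    score += 2.0 * f.epss_percentile              # likelihood of exploitation
    score += 1.5 if f.internet_facing else -1.0   # reachability up/downgrade
    score += (3 - f.asset_tier) * 0.5             # higher-criticality assets raise priority
    if f.compensating_control:
        score -= 1.0                              # residual risk after the recorded control
    return round(score, 2)

# A KEV-listed, internet-facing finding outranks a higher-CVSS segmented one.
a = Finding(7.5, 0.96, True, 1, True, False)
b = Finding(9.1, 0.12, False, 3, False, True)
print(priority(a), priority(b))
```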
5. Remediation throughput and SLA discipline
Findings move at a published rate. Closure, MTTR, breach rate, reopen rate, and exception handling are all measured. The programme answers "are we closing what we open" rather than "are we busy". (The metric arithmetic is sketched after the statements in this domain.)
Framework anchor: ISO 27001 Annex A 8.8 / SOC 2 CC7.1 / PCI DSS 6.3.3 / NIST SP 800-53 SI-2 / CISA BOD 22-01.
A vulnerability management SLA policy is published with severity-tier targets that are reviewed annually and tracked against operational closure data.
Optimised reads: SLAs are anchored to framework expectations, breach rate is measured, and the SLA policy adapts to KEV and EPSS signals.
Closure rate (findings closed inside the period divided by findings opened inside the period) is measured and reviewed at every leadership cycle.
Optimised reads: Closure rate trend is visible per severity, asset class, and owner; a rate that falls below 1.0 triggers a remediation capacity review.
Mean time to remediate is measured per severity tier, broken out by asset class, and reviewed against the SLA target rather than against an internal benchmark.
Optimised reads: MTTR is a programmatic measure that drives capacity decisions and is paired with breach rate so the metric cannot be gamed by deferral.
SLA breach rate (findings that crossed the SLA boundary while open) is measured and reviewed at every leadership cycle.
Optimised reads: Breach rate has a target; persistent breaches against an asset class trigger an exception decision or a capacity uplift.
Vulnerabilities that will not meet their SLA enter a formal exception register with hard expiry, compensating control, named risk owner, and named approver.
Optimised reads: No silent missed SLAs; the exception register is the authoritative ledger; expiry and review cadence are tracked on every entry.
Reopen rate (findings that were closed and later reopened) is measured over 30-, 90-, and 180-day windows so closure quality is visible.
Optimised reads: Reopen rate is the durability axis paired with closure speed; trends inform retest discipline and remediation craft.
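The metric arithmetic below follows the definitions in the statements above over a single reporting period. The figures are illustrative, and the breach-rate denominator is an assumption since the statement does not fix one.

```python
from datetime import date

# Worked arithmetic for the throughput metrics defined above, over one period.
opened_in_period    = 40
closed_in_period    = 32
breached_while_open = 6    # crossed the SLA boundary while still open
reopened_in_window  = 2    # closed findings reopened inside the window

closure_rate = closed_in_period / opened_in_period    # below 1.0: the backlog is growing
breach_rate  = breached_while_open / opened_in_period  # denominator choice is an assumption
reopen_rate  = reopened_in_window / closed_in_period

# MTTR per severity tier: mean of (closed date - opened date) in days.
critical_closures = [(date(2025, 3, 1), date(2025, 3, 9)),
                     (date(2025, 3, 4), date(2025, 3, 18))]
mttr_critical = sum((closed - opened).days for opened, closed in critical_closures) / len(critical_closures)

print(f"closure rate {closure_rate:.2f}, breach rate {breach_rate:.1%}, "
      f"reopen rate {reopen_rate:.1%}, critical MTTR {mttr_critical:.1f} days")
```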
6. Verification and audit trail
Closure is not closure until it is verified. Retest evidence, lifecycle log, framework mapping, and an audit-ready evidence pack are produced as a side effect of the work rather than at audit week. (A lifecycle-log sketch follows the statements in this domain.)
Every closed finding is retested against the original finding record with documented evidence; retest pass / fail is captured per attempt.
Optimised reads: Retest is the closure gate; retest evidence is preserved with the original finding; failed retests reopen rather than mask.
Audit evidence (scan output, configuration export, change ticket, retest pack, attestation) is captured per finding and per control on a documented cadence.
Optimised reads: Evidence is reproducible from a system of record; currency status is tracked per entry; expired evidence is treated as a control event.
Every status transition (opened, validated, prioritised, assigned, in-progress, closed, retested, reopened) is captured automatically with timestamp and actor.
Optimised reads: Lifecycle is recorded as a side effect of the work; the audit narrative reads the live record rather than a reconstructed one.
Findings and controls are mapped to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST framework references that the programme has to evidence.
Optimised reads: Mapping is current; the same record produces evidence for multiple frameworks; CSV export is supported for assessor handoff.
The programme can produce a per-control evidence pack on demand without an audit-week scramble.
Optimised reads: Evidence packs are generated from the live record; the audit week is a delivery exercise rather than a discovery exercise.
Lessons from each programme review (gaps, near-misses, breaches, audit findings) feed back into policy, scanner tuning, prioritisation rules, and SLA targets.
Optimised reads: Improvement loop is documented; one domain advances at least one tier per year; the scorecard moves over multi-year horizons.
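As an illustration of the lifecycle statement above, a minimal append-only record where every transition carries a timestamp and an actor at the moment it happens. The record shape and names are assumptions, not SecPortal's activity log schema.

```python
from datetime import datetime, timezone

# Append-only lifecycle record: every status transition is captured when it
# happens rather than reconstructed at audit week. Shape is illustrative.
lifecycle: list[dict] = []

def transition(finding_id: str, status: str, actor: str) -> None:
    lifecycle.append({
        "finding": finding_id,
        "status": status,
        "actor": actor,
        "at": datetime.now(timezone.utc).isoformat(),
    })

transition("FND-0142", "opened", "scanner:external")
transition("FND-0142", "validated", "analyst.jane")
transition("FND-0142", "closed", "owner.platform")
transition("FND-0142", "retested", "tester.sam")

# The audit narrative reads the live record: who moved the finding, and when.
for entry in lifecycle:
    print(entry["at"], entry["finding"], entry["status"], entry["actor"])
```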
How to read the scorecard, not just the score
The overall number is the headline; the leverage sits in the per-domain and per-statement read. These five rules turn the score into an improvement plan rather than another dashboard.
1. Read the lowest-scoring domain first. The biggest leverage is moving the weakest discipline up one tier, not making the strongest discipline incrementally stronger.
2. Inside that domain, read the lowest-scoring statement. The improvement intent is written as the "Optimised reads" one-liner; that is the target state for the next quarter.
3. Move one domain at a time. Programmes that try to lift all six domains in parallel produce six half-finished initiatives, and the scorecard moves zero tiers in eighteen months.
4. Re-score quarterly. The scorecard is a programme-level read, not a weekly tracker. The right cadence is the same as the leadership reporting cycle.
5. Resist the urge to round up. The scorecard only has value if the score reflects what is happening on the operator queue. A polite Defined that should read Developing hides the gap rather than fixing it.
How the scorecard pairs with SecPortal
The scorecard above is usable as a standalone artefact. If your team already runs finding tracking, scan execution, and compliance evidence on a workspace, the scorecard becomes a read of the live programme rather than a separate document. SecPortal keeps the operator queue and the leadership view derived from one engagement record through findings management (CVSS 3.1 vector with environmental and temporal calibration, persistent finding identifier, retest evidence, status lifecycle), so the prioritisation, remediation, and verification domains read against the live record rather than against memory.
The detection and intake domain reads against external scanning (sixteen modules covering TLS posture, headers, fingerprinting, DNS, exposed services), authenticated scanning (cookie, bearer, basic, form login modes against authenticated surfaces), and code scanning (Semgrep SAST plus SCA via GitHub, GitLab, or Bitbucket OAuth). The cadence question reads against continuous monitoring (daily, weekly, biweekly, or monthly schedules with a scan diff endpoint that surfaces new, fixed, and unchanged findings between runs).
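To illustrate the scan diff mentioned above, a minimal sketch that compares two consecutive runs keyed on a stable finding identifier and buckets the results as new, fixed, or unchanged. The identifiers and data shape are assumptions, not the endpoint's response format.

```python
# Diff two consecutive scan runs keyed on a stable finding identifier.
previous_run = {"tls-weak-cipher@web-01", "missing-csp@app", "open-port-3306@db-01"}
current_run  = {"missing-csp@app", "open-port-3306@db-01", "outdated-nginx@web-02"}

new_findings       = current_run - previous_run   # surfaced for the first time this run
fixed_findings     = previous_run - current_run   # present last run, gone now
unchanged_findings = previous_run & current_run   # still open across both runs

print("new:", sorted(new_findings))
print("fixed:", sorted(fixed_findings))
print("unchanged:", sorted(unchanged_findings))
```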
The verification and audit-trail domain reads against the activity log (timestamped lifecycle for findings, engagements, scans, credentials, documents, comments, invoices, and team changes with plan retention of 30, 90, or 365 days) and compliance tracking (mappings to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST with CSV export). The governance domain reads against team management (RBAC tiers and MFA enforcement) and the AI report workflow that produces leadership summaries from the same engagement data the operator queue uses.