Use Case

Security testing program management
one record across every engagement, vendor, and asset

Run a security testing programme as one record rather than a folder of disconnected reports. Track every engagement across the portfolio, every vendor delivering work, and every asset under coverage; surface findings, retests, SLA performance, and aging risk on one dashboard so the next board update writes itself from delivery rather than from a memory of last quarter.

No credit card required. Free plan available forever.

Run the testing programme as a record, not a folder of reports

Most security testing programmes are managed as a calendar of vendor engagements and a folder of PDF reports. The annual plan is a slide deck, the quarterly review is a regenerated table from spreadsheets, and the board update is whichever findings the team remembers from the last engagement. That mode of operation works while the programme is small; it breaks the moment the asset estate or the vendor list grows past what one person can hold in their head.

SecPortal models a security testing programme as a parent record that aggregates every engagement, every finding, every retest, and every SLA commitment across the estate. The programme is vendor-agnostic and engagement-shape-agnostic: a quarterly external pentest, a weekly authenticated scan, a code review on a critical repository, and a retest of a finding from last year all sit on the same record. The next coverage decision, the next renewal conversation, and the next board update all run from one source of truth instead of from three different reconstructions.

The five layers of a programme record

A security testing programme is more than a list of engagements. Five layers stack on the programme record, each answering a question that the layer below cannot answer on its own.

Asset coverage map

The list of assets the programme is committed to test, grouped by type (web applications, external infrastructure, internal infrastructure, cloud accounts, mobile apps, code repositories, OT/ICS zones). Each asset carries the test type, cadence, last tested date, and the next scheduled engagement. Coverage gaps appear on the dashboard rather than on an audit finding two months later.
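The cadence check behind a coverage map can be sketched in a few lines. The field names and cadence windows below are illustrative assumptions, not SecPortal's actual data model: an asset is a gap candidate when its last test is older than its cadence window and nothing is scheduled.

```python
from datetime import date, timedelta

# Illustrative cadence windows; real programmes set these per asset group.
CADENCE_DAYS = {"annual": 365, "biannual": 182, "quarterly": 91}

def coverage_gaps(assets, today):
    """Flag assets whose last test is older than their cadence window
    and that have no next engagement scheduled."""
    gaps = []
    for a in assets:
        window = timedelta(days=CADENCE_DAYS[a["cadence"]])
        overdue = a["last_tested"] is None or today - a["last_tested"] > window
        if overdue and a["next_scheduled"] is None:
            gaps.append(a["name"])
    return gaps

# Hypothetical asset rows for illustration.
assets = [
    {"name": "customer-web-app", "cadence": "quarterly",
     "last_tested": date(2024, 1, 10), "next_scheduled": None},
    {"name": "external-infra", "cadence": "annual",
     "last_tested": date(2024, 3, 1), "next_scheduled": date(2025, 2, 1)},
]
print(coverage_gaps(assets, today=date(2024, 6, 1)))  # ['customer-web-app']
```

The point of the sketch is the shape of the question: a coverage gap is a property of the asset row itself, so it can sit on a dashboard instead of waiting for an auditor to derive it.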

Engagement portfolio

Every pentest, scan window, code review, and retest opens as an engagement linked to the parent programme. Engagements can be delivered by an internal team, a primary vendor, or a panel of vendors run on rotation. The programme is vendor-agnostic; what matters is that each piece of work is on the same record.

Findings catalogue

Findings persist across engagements, vendors, and reporting cycles. A finding raised in a Q1 web app pentest is still tracked when the same asset is retested in Q3 by a different vendor, with the same identifier and the same remediation owner. The catalogue is the durable artefact; the report is a snapshot of it.

Remediation and retest ledger

Each finding carries its remediation owner, target SLA, retest history, and verification status. Closed findings stay on the record with the close-out evidence. Aging findings flag at the SLA threshold. The retest ledger is the proof that the programme delivers fixes rather than just identifies issues.

Reporting and board view

The quarterly or annual programme report pulls from the live record: engagements delivered, coverage achieved, findings opened and closed, retests completed, SLA performance, and the aging picture. AI-assisted reporting drafts the executive summary against real delivery so the board update is evidence rather than narrative.

How programme management differs from adjacent workflows

Programme management is often confused with single-engagement delivery, retainer management, continuous testing, or vulnerability management. The four adjacent workflows each solve a different problem; the programme record carries the layer above them.

Single engagement project management

Single-engagement project management runs one pentest from scoping to delivery. It owns the scope document, the team assignment, the kickoff meeting, the daily check-ins, the report, and the retest. Programme management runs above it: it coordinates many engagements over months and years, and it carries the cross-engagement signal (aging findings, repeat issues, coverage gaps) that no single engagement can see.

Pentest retainer management

Retainer management is a commercial parent layer for one client relationship: the contracted block of hours or test count, the drawdown ledger, the billing cadence, and the renewal terms. Programme management is the operational parent layer: it spans multiple clients (for a vendor), multiple vendors (for a buyer), and asset groups that may sit outside any single retainer. A programme can include retainers, but it is broader than them.

Continuous penetration testing

Continuous testing is an always-on technical pattern: scheduled scans, live findings, retests paired to originals, and reports generated on demand. Programme management is the planning and oversight layer above it. A programme often includes both continuous testing for fast-moving assets and discrete scheduled engagements for the rest of the estate.

Vulnerability management programme

A vulnerability management programme typically begins after an engagement closes and the findings have been handed over. Programme management starts earlier: at the planning of which engagements run when, which assets they cover, and which vendors deliver them. The two are complementary; the testing programme produces the catalogue that the vulnerability management programme then runs against.

Where programmes usually go wrong

Five failure modes account for most of the programme attrition between an ambitious annual plan and a quiet renewal a year later. Each is silent during delivery and loud at the compliance audit or the budget conversation.

Engagements live in vendor portals, not on a programme record

Each vendor delivers reports through its own portal or as a standalone PDF. The buyer has no consolidated view; the next coverage decision is made from a folder of files rather than from a dashboard. When the auditor asks for the programme view, the team spends a week stitching it together by hand.

Findings reset every engagement

A finding raised in a Q1 pentest is closed out at delivery. When the same vendor or a different vendor retests in Q3 and finds the same issue, it lands as a brand-new finding with a new identifier. Repeat issues are invisible because the catalogue does not span engagements; the same critical finding gets reported, fixed, and reintroduced over and over.

Coverage map lives in a slide deck

The annual testing plan was approved at the start of the year as a deck. Six months in, nobody knows which assets have actually been tested, which are scheduled, and which slipped. The coverage gap is discovered at the next compliance audit when an asset that should have been tested twice in the year was not tested at all.

SLA tracking is per-engagement, not per-finding

Each engagement reports its own SLA performance in its closing slide. The aggregate SLA picture across the programme is never assembled because no record carries it. Aging risk debt accumulates silently, the board update reports per-engagement satisfaction, and the actual mean-time-to-remediate trend is never visible.

Vendor performance is anecdotal

The decision to renew, replace, or expand a vendor is made on partner-level relationships and the most recent engagement memory rather than on data. There is no record of how many findings each vendor produced, how many were valid, what the false positive rate was, or how their SLA performance compared. The next procurement runs on vibes.

How programme management looks in SecPortal

Programme management is one workflow stitched into several feature surfaces: the engagement record (parent and child), findings management, AI reports, the activity log, and the branded portal. The programme is structured rather than ad-hoc, and it produces the coverage, findings, retest, and SLA picture from the live record rather than from a reconstruction.

Programme parent record

The programme record sits at the workspace or client level, parented by engagement management. Asset coverage targets, planned engagement cadence, named owner, and SLA tier are captured at programme open and inherited by every child engagement opened against it.

Cross-engagement findings

Findings live in findings management at the workspace level so identifiers persist across engagements, vendors, and reporting cycles. Repeat issues on the same asset are flagged automatically rather than landing as new findings.

Programme reporting

The quarterly and annual programme reports draw from AI-assisted reporting against the live record. The executive summary is drafted from real delivery rather than from a recap document so the board update is evidence-grounded.

The six metrics every programme dashboard should carry

Programme reporting is judged by whether the next conversation can be had against the dashboard rather than against a recap document. Six metrics cover the questions a board, an auditor, and a budget owner ask.

  • Coverage achieved against the plan: the percentage of planned engagements completed in the period and the percentage of the asset estate covered against the cadence target. The metric exposes silent slippage where engagements were planned but not booked, or were booked but moved out of the period.
  • Findings opened and closed by severity: the volume of findings opened in the period, broken down by severity, and the volume closed in the same period. The opened-versus-closed ratio is the running indicator of whether the programme is keeping up with discovery; a sustained inflow that exceeds outflow is the leading signal of compounding risk debt.
  • Mean time to remediate by severity: the average time from finding open to finding close, segmented by severity. The number is the operational truth behind the SLA commitment; it is the metric the board update should cite rather than a generic statement that critical issues are addressed promptly.
  • Aging open findings against SLA: the count of findings that have exceeded their target remediation SLA, broken down by severity and by owning team. Aging findings are the inventory of risk debt; the trend across quarters is the strongest signal of programme health.
  • Repeat finding rate: the proportion of findings in the period that match a previously closed finding on the same asset. A high repeat rate indicates that fixes are not durable, that the same misconfiguration keeps reappearing, or that the remediation process is closing without verifying. The metric is impossible to calculate without a cross-engagement catalogue.
  • Retest completion and verification rate: the proportion of remediated findings that have been formally retested and verified, versus those marked closed without retest. The gap between closed and verified is the credibility gap of the programme; closing the gap is what makes the catalogue defensible to an auditor.
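Three of these metrics fall straight out of a cross-engagement findings list. The sketch below uses assumed field names (not SecPortal's schema) to show why the catalogue, not the per-engagement report, is the unit the metrics need:

```python
from datetime import date
from statistics import mean

# Hypothetical finding records; field names are illustrative, not SecPortal's schema.
findings = [
    {"sev": "critical", "opened": date(2024, 1, 5), "closed": date(2024, 1, 20),
     "sla_days": 30, "asset": "web-app", "title": "SQLi in login"},
    {"sev": "critical", "opened": date(2024, 4, 2), "closed": None,
     "sla_days": 30, "asset": "web-app", "title": "SQLi in login"},
    {"sev": "medium", "opened": date(2024, 2, 1), "closed": date(2024, 3, 15),
     "sla_days": 90, "asset": "api", "title": "Verbose errors"},
]

def mttr_days(findings, sev):
    """Mean time to remediate: average open-to-close days for closed findings."""
    closed = [(f["closed"] - f["opened"]).days for f in findings
              if f["sev"] == sev and f["closed"]]
    return mean(closed) if closed else None

def aging_past_sla(findings, today):
    """Open findings whose age exceeds their target remediation SLA."""
    return [f["title"] for f in findings
            if f["closed"] is None and (today - f["opened"]).days > f["sla_days"]]

def repeat_rate(findings):
    """Share of findings that match an earlier finding on the same asset."""
    seen, repeats = set(), 0
    for f in sorted(findings, key=lambda f: f["opened"]):
        key = (f["asset"], f["title"])
        if key in seen:
            repeats += 1
        seen.add(key)
    return repeats / len(findings)
```

Note that `repeat_rate` only works because the list spans engagements: computed per report, every finding looks new and the rate is always zero, which is exactly the failure mode described above.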

Reviewer checklist for a healthy programme

Before a programme is treated as in good standing for the quarter, the programme owner runs through a short checklist. Each line takes seconds; missing any one of them is the source of the failure modes above.

  • Programme record exists for the calendar year with the planned engagements, asset coverage targets, and named owner.
  • Every engagement run in the period is linked to the programme record, regardless of which vendor or internal team delivered it.
  • Findings catalogue carries identifiers that persist across engagements and vendors so repeat issues are flagged automatically.
  • Each finding has a remediation owner, target SLA date, current status, and (where remediated) a retest record with verification evidence.
  • Coverage map shows the percentage of the asset estate tested against the cadence target and flags any asset that has slipped its window.
  • Aging dashboard shows open findings past their SLA, broken down by severity and owning team, with the trend across the last four quarters.
  • Vendor performance view shows findings produced, valid rate, false positive rate, and SLA performance per vendor over the period.
  • Quarterly programme report can be generated from the live record without a manual recap pass; the board pack is the dashboard plus a generated executive summary.

Where programme management sits across the testing lifecycle

The programme is the operational parent layer above each engagement and is the read-up layer for leadership. Each child engagement still runs scoping, kickoff, delivery, reporting, retest, and close-out as a normal pentest; the programme carries the cross-cut signal that no single engagement can produce.

Upstream and downstream

The programme parents pentest project management for each child engagement, plus continuous penetration testing for always-on coverage and retesting for verification.

Findings, evidence, and remediation

The programme aggregates remediation tracking and pentest evidence management so the audit-ready picture is current rather than reconstructed.

Compliance and assurance

The programme record is the testing evidence package for ISO 27001, SOC 2, and PCI DSS cycles, alongside the compliance audits workflow.

Commercial layer

For vendor-side delivery, the programme can be funded by a pentest retainer; the retainer is the commercial relationship and the programme is the operational picture above it. When a buyer runs more than one approved vendor, the supplier-side governance lives on the parallel pentest vendor panel record so the programme decides what gets tested and the panel decides who tests it.

Pair the workflow with the long-form guides

Programme management is operational; the surrounding guides and research explain the leadership trade-offs that show up at planning and at board-level reporting. Pair this workflow with:

  • the vulnerability management programme guide for the post-test catalogue layer
  • the CISO security metrics dashboard guide for the leadership reporting view
  • the enterprise security programme maturity guide for the maturity model
  • the managing multiple security engagements guide for the day-to-day coordination
  • the aging pentest findings research for the risk debt argument that the programme record makes visible
  • the security tool consolidation workflow for the migration that lets the programme run on one record rather than across the scanner, ticket queue, spreadsheet, and report drive patchwork most teams inherit

When the programme has to absorb a corporate-development cycle, the M&A security due diligence workflow runs the deal as its own engagement on the same workspace and rolls the post-close backlog into the cross-engagement view alongside business-as-usual assessments.

Buyer and operator pairing

Internal security teams running the testing programme in-house

Internal security functions that own the annual testing plan use the programme record to plan engagements, track delivery, and report up to leadership. The programme spans whatever delivery model the team uses: internal pentesters, external vendors on rotation, or a mix of both.

vCISOs running the testing programme for multiple clients

Fractional and virtual CISOs running the testing programme for a portfolio of client organisations use one programme record per client. Each client gets its own coverage map, engagement portfolio, findings catalogue, and board view, while the vCISO sees the aggregate workload across all clients in a single account.

MSSPs delivering managed security testing services

MSSPs running security testing as part of a managed services umbrella use the programme record to deliver oversight to the client and coordination to the delivery team. The branded portal carries the programme view to the client; the internal record carries the operational view to the MSSP.

Compliance and assurance teams managing testing evidence

Teams responsible for SOC 2, ISO 27001, PCI DSS, or sectoral assurance use the programme record as the testing evidence package. Coverage, findings, retests, and SLA performance are produced from the live record for each audit cycle without the team rebuilding the picture from individual reports.

Who runs this workflow

Programme management is owned by internal security teams, vCISOs, MSSPs, and compliance consultants, supported by pentest firms and security consultants who deliver the underlying engagements. The programme is the relationship between leadership and delivery; the engagements under it are the work.

What good programme management feels like

One dashboard, no reconstruction

Coverage, findings, retests, SLA performance, and aging are all on one programme dashboard. The quarterly review is reading the dashboard rather than rebuilding it from individual reports the week before the meeting.

Board updates run on evidence

The board update cites engagements delivered, coverage achieved, mean time to remediate, aging open findings, and SLA performance directly from the record. The number on the slide is the same number on the dashboard.

Audit evidence on demand

The auditor asks for the testing evidence package for the period and the programme owner exports it from the record. The activity log carries the audit trail and the findings catalogue carries the cross-engagement history.

Vendor decisions on data

The decision to renew, replace, or expand a vendor runs from the programme record: findings produced, valid rate, false positive rate, and SLA performance per vendor over the period. The next procurement is grounded in delivery rather than in partner relationships.

Security testing programme management is the workflow that decides whether a year of testing investment compounds into a credible security posture or evaporates into a folder of disconnected reports. Get it right and each engagement strengthens the catalogue, each retest closes a real risk, and each board update is read off the dashboard; get it wrong and the next compliance audit and the next budget conversation both run on guesswork.

Frequently asked questions about security testing programme management

What is security testing program management?

Security testing program management is the workflow of planning, coordinating, and reporting on every security testing engagement an organisation runs across its asset estate over a defined period. It spans engagement scheduling, vendor coordination, findings aggregation, remediation tracking, retest verification, and programme reporting. It sits above single-engagement project management and is broader than a single client retainer; it is the operational layer that owns the annual or multi-year testing plan.

How is programme management different from single-engagement project management?

Single-engagement project management runs one pentest from scoping through to delivery: scope, kickoff, testing, findings, report, and retest. Programme management runs above it across many engagements over months and years. Project management owns the engagement record; programme management owns the engagement portfolio, the asset coverage map, the cross-engagement findings catalogue, and the cadence the programme is committed to.

How is programme management different from a pentest retainer?

A pentest retainer is a commercial agreement for one client: a contracted block of hours, test count, or asset coverage with a drawdown ledger, billing cadence, and renewal terms. A programme spans multiple clients, multiple vendors, and asset groups that may sit outside any single retainer. Programmes can include retainers, but a programme is the broader operational picture that owns delivery oversight and reporting; the retainer is the commercial relationship that funds part of it.

Can SecPortal track engagements delivered by external vendors?

Yes. The programme record is vendor-agnostic. Engagements delivered by an internal team, a primary external vendor, or a panel of vendors all open as engagements linked to the same programme record. Findings, evidence, AI reports, and the branded portal sit on each engagement; the programme aggregates them. The buyer can run the programme even when the actual testing is delivered by parties outside SecPortal, by importing the engagement output and managing the catalogue centrally.

How does the findings catalogue persist across engagements?

Findings live on the workspace and the client record rather than only on the engagement that opened them. When a new engagement opens against an asset that already has historical findings, the catalogue surfaces the prior issues, their status, and their retest history. New findings are deduplicated against the existing catalogue at intake so a repeat issue is flagged automatically rather than landing as a brand-new finding under a new identifier.
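Intake-time deduplication can be sketched as a lookup on a stable signature. The matching fields below (asset plus normalised title plus location) are an assumption for illustration, not SecPortal's documented matching logic:

```python
import hashlib

def signature(finding):
    """Stable key for matching a new finding against the catalogue:
    asset + normalised title + location (assumed fields, for illustration)."""
    raw = "|".join([finding["asset"],
                    finding["title"].strip().lower(),
                    finding.get("location", "")])
    return hashlib.sha256(raw.encode()).hexdigest()[:12]

def intake(catalogue, new_finding):
    """Return the existing catalogue entry if this is a repeat,
    otherwise register the finding under a new identifier."""
    sig = signature(new_finding)
    if sig in catalogue:
        entry = catalogue[sig]
        entry["repeat_count"] += 1   # flagged as a repeat; identifier is kept
        return entry
    new_finding.update(id=f"F-{len(catalogue) + 1:04d}", repeat_count=0)
    catalogue[sig] = new_finding
    return new_finding

catalogue = {}
first = intake(catalogue, {"asset": "web-app", "title": "SQL Injection",
                           "location": "/login"})
again = intake(catalogue, {"asset": "web-app", "title": "sql injection ",
                           "location": "/login"})  # same entry, same identifier
```

The normalisation step is the whole game: without it, a vendor's slightly different wording of the same issue mints a new identifier and the repeat is lost.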

How does programme reporting work?

The programme dashboard and the AI-assisted reporting draw from the live record. The quarterly or annual programme report cites engagements delivered, coverage achieved against the plan, findings opened and closed by severity, mean time to remediate, aging open findings, repeat finding rate, retest completion rate, and SLA performance. The executive summary is drafted from the actual figures rather than from a recap document, so the board update is evidence rather than a memory of last quarter.

Which roles run the programme view?

The programme view is read by security leadership, compliance teams, and account owners; it is operated by the engagement leads, programme managers, and vCISO consultants who coordinate the underlying delivery. Role-based access controls (owner, admin, member, viewer, billing) govern who can edit the programme record, open engagements, edit findings, or read the reporting view, so the programme can be shared with stakeholders without giving them write access to the underlying engagements.

How does the programme record support compliance evidence?

The programme record carries the artefacts that compliance auditors look for: the testing plan, the engagement portfolio executed against it, the findings catalogue, the remediation status, the retest evidence, and the SLA performance. For SOC 2, ISO 27001, PCI DSS, and sectoral assurance, the audit evidence package is generated from the live record rather than reconstructed from a folder of standalone reports. The activity log carries the audit trail of who changed what and when.

How it works in SecPortal

A streamlined workflow from start to finish.

1. Map the asset estate and the testing cadence

Capture the asset groups under coverage (web applications, external infrastructure, internal infrastructure, cloud accounts, mobile apps, code repositories, OT/ICS zones), the test type each group needs (pentest, scan, code review, retest), and the cadence each group is committed to (annual, biannual, quarterly, continuous). The map becomes the planning surface for the programme; gaps in coverage are visible on the dashboard rather than discovered at audit.

2. Open every engagement against the programme record

Each pentest, scan window, or code review opens as an engagement linked to the parent programme. Engagements can be delivered by an internal team, a primary external vendor, or a panel of vendors run on rotation; the programme is vendor-agnostic. Scope, ROE, kickoff, evidence, findings, and reports stay on the engagement; the programme aggregates them.

3. Aggregate findings across engagements without resetting them

Findings persist across engagements, so a finding raised in a Q1 web app pentest is still tracked when the same asset is retested in Q3 by a different vendor. Duplicate detection, severity calibration, and remediation status are programme-level signals rather than per-report attestations. Aging risk debt becomes visible across the full portfolio.

4. Track retests, SLAs, and remediation performance

Each finding carries its remediation owner, target SLA, retest history, and verification status. The programme dashboard surfaces overdue findings, SLA breaches, retest backlogs, and the assets that consistently produce repeat findings; the metrics are the same numbers the next board update needs to cite.

5. Generate the programme report on demand

The quarterly or annual programme report pulls from the live record: engagements delivered, coverage achieved, findings opened and closed, retests completed, SLA performance, and the aging picture. AI-assisted reporting drafts the executive summary against real delivery rather than a recap deck so the report is evidence rather than narrative.

Run the programme as a record, not a folder of reports

Aggregate engagements, findings, retests, and SLA performance on one programme dashboard. Start free.

No credit card required. Free plan available forever.