Research · 13 min read

The Pentest Delivery Gap: SLAs, Retests, and Portal Economics

The problem with modern penetration testing is rarely the testing. The problem is the gap between how pentests are produced (a fixed-scope engagement that ends with a PDF report) and how security teams actually need to consume them (continuously, with tracked remediation, retests included, and audit-grade evidence). This research examines that delivery gap through three lenses that buyers and consultancies are increasingly negotiating in writing: communication SLAs, retest policy, and portal economics.1,2,3

The argument is straightforward. Detection is no longer the bottleneck. The bottleneck is the time, friction, and evidence loss between a finding being discovered and a finding being verifiably closed. CISA's Known Exploited Vulnerabilities Catalog and SSVC framework exist precisely because prioritisation and remediation velocity matter as much as detection.3,4 Verizon's 2025 DBIR reports vulnerability exploitation as a major and growing initial access vector.5 NIST CSF 2.0 frames cybersecurity as a governed, risk-managed discipline, not a control checklist.7 The standards have already moved. Pentest delivery is catching up.

What the delivery gap actually is

A traditional penetration test runs to a fixed scope, over a fixed window, and produces a static PDF report after the testing window closes. NIST SP 800-115 describes this lifecycle in clear terms: planning, discovery, attack, and reporting.1 The methodology is sound; the delivery model wrapped around it is where the problem lies.

The gap is operational. Findings discovered on day one of a two-week engagement may not reach the client until the report meeting three weeks later. Retests are usually scoped as a separate engagement. Remediation status lives in the client's ticketing system, not the assessment record. Evidence of closure depends on the client emailing screenshots back. None of this is the tester's problem; all of it is the buyer's problem.

That gap has commercial consequences. Clients who have used a portal-based delivery model from any vendor rarely accept a PDF-only engagement again. Consultancies that have not adapted experience longer remediation cycles, more email overhead, weaker retention, and pricing pressure from firms that have. For a deeper buyer-side framing, see our guide on penetration testing as a service and the longer-form research on security workflow orchestration.

Communication SLAs: the hours, not the report meeting

The most consequential SLA in a modern pentest is not the report turnaround. It is the time from discovery of a critical finding to the moment the right person at the client knows about it. Two weeks is no longer tenable.

CISA's Known Exploited Vulnerabilities Catalog publishes due dates for remediation because, in practice, time from disclosure to active exploitation has shrunk to days for some vulnerability classes.3 The UK NCSC's vulnerability management guidance similarly emphasises that prioritisation and timely remediation are central to effective vulnerability management, not optional extras.8 A pentest delivery model where critical findings sit in a draft document for a week is structurally out of step with the threat environment.

A defensible communication SLA looks roughly like this:

  • Critical: communicated within the same business day, ideally within hours.
  • High: communicated within one business day.
  • Medium: visible in the portal as findings are logged; summarised at scheduled checkpoints.
  • Low and informational: consolidated into the final report.
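To make an SLA like this operational rather than aspirational, the tiers above can be expressed as configuration that a portal's notification layer enforces. The sketch below is illustrative only: the window lengths are assumptions matching the tiers above, not values mandated by any standard, and it deliberately ignores business-hours arithmetic.

```python
from datetime import datetime, timedelta

# Hypothetical SLA windows per severity tier (real values belong in the
# engagement letter, not in code).
SLA_WINDOWS = {
    "critical": timedelta(hours=4),   # same business day, ideally within hours
    "high": timedelta(days=1),        # one business day
    "medium": timedelta(days=7),      # surfaced at scheduled checkpoints
    "low": None,                      # consolidated into the final report
}

def sla_met(severity: str, discovered: datetime, notified: datetime) -> bool:
    """Return True if written notification landed inside the SLA window."""
    window = SLA_WINDOWS[severity]
    if window is None:  # no live-notification SLA for this tier
        return True
    return notified - discovered <= window

# A critical found at 09:00 and notified at 11:30 meets a 4-hour window.
print(sla_met("critical",
              datetime(2025, 3, 3, 9, 0),
              datetime(2025, 3, 3, 11, 30)))  # True
```

The point of encoding the SLA is that compliance becomes something the system of record can report on per finding, rather than something reconstructed from email timestamps after the fact.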

The mechanism that makes this credible is a shared system of record. If both client and tester see the same findings list with the same severity scoring as testing happens, the SLA is operational, not aspirational. CVSS 3.1 vector strings, applied consistently, give that scoring a defensible baseline; the CVSS calculator and the CVSS scoring guide both walk through how to do this without invented numbers.
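The "no invented numbers" point is literal: a CVSS 3.1 base score is derived mechanically from the vector string using the weights and equations in section 7 of the specification. The sketch below implements only the eight base metrics; temporal and environmental scoring, and input validation, are omitted for brevity.

```python
import math

# CVSS 3.1 base-metric weights, as published in the specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # privilege weights depend on whether scope changes
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},
        "C": {"N": 0.85, "L": 0.68, "H": 0.5},
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest value with one decimal place >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def base_score(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/")[1:])  # drop "CVSS:3.1"
    scope_changed = m["S"] == "C"
    iss = 1 - ((1 - WEIGHTS["CIA"][m["C"]])
               * (1 - WEIGHTS["CIA"][m["I"]])
               * (1 - WEIGHTS["CIA"][m["A"]]))
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"][m["S"]][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if scope_changed else impact + exploitability
    return roundup(min(raw, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Because the score is a deterministic function of the vector, storing the vector string in both the portal and the report guarantees the two can never disagree on severity.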

Retest policy is where pentest economics break

Retests are the most undervalued line item in a pentest contract. They are also where the difference between findings discovered and findings closed actually shows up.

There are four common retest models in current contracts:

1. No retest included

The cheapest headline price. Also the model most likely to leave findings half-closed. Clients quietly deprioritise verification because it is a separately scoped purchase, and the assessment record stops being a reliable view of remaining risk.

2. One retest per finding

Fine for clean, single-attempt fixes. Penalises clients with partial remediation, dependency-driven regressions, or findings that span multiple components. Encourages either marking findings closed prematurely or paying for ad-hoc retests outside the contract.

3. Unlimited retests within a window

The model most aligned with verified closure. The window is usually 60 to 90 days post-engagement, during which the client can request retests on any finding without additional procurement. Operational cost stays manageable because most retests are short and scoped to the original finding.

4. Continuous retest in a subscription

Retests are built into a recurring engagement so findings persist between assessment windows and remediation status carries forward. This is the strongest fit for clients running an ongoing AppSec programme, and the model that PTaaS providers have largely standardised on.

The economic logic is simple. Verified closure is the actual deliverable a security buyer is paying for. A retest model that makes verification expensive or procedurally awkward will systematically underdeliver on that outcome, no matter how strong the testing was. For the operational mechanics of running retests well, see how to retest vulnerabilities and the remediation tracking workflow.
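One way to make "verified closure is the deliverable" concrete is to model the finding lifecycle as a small state machine in which a finding cannot reach a closed state without a passing retest. The states and transitions below are illustrative, not a prescribed schema.

```python
# Hypothetical finding lifecycle: "verified_closed" is only reachable
# through a retest, and a failed retest sends the finding back to open.
ALLOWED = {
    "open": {"remediated"},
    "remediated": {"retest_requested"},
    "retest_requested": {"verified_closed", "open"},  # retest passes or fails
    "verified_closed": set(),                         # terminal state
}

def transition(state: str, new_state: str) -> str:
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A finding that fails its first retest returns to open, not closed.
state = "open"
for step in ("remediated", "retest_requested", "open",      # first retest fails
             "remediated", "retest_requested", "verified_closed"):
    state = transition(state, step)
print(state)  # verified_closed
```

Under this model a retest policy that prices verification out of reach simply leaves findings stranded in "remediated", which is exactly the gap between discovered and closed that the section describes.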

Portal economics: build, buy, or default to PDFs

Most consultancies face a three-way choice. Build a delivery portal in-house. Buy one and integrate it into the engagement workflow. Or stay with PDFs and accept the friction. Each option has knock-on effects on margin, retention, and brand.

Build

A custom portal looks attractive on a whiteboard. In practice, the engineering surface area is large: authentication, RBAC, multi-tenancy, encrypted credential storage, finding schemas, file uploads, audit trails, scanner imports, report rendering, invoicing, and ongoing maintenance. The full cost of building this rarely gets weighed against the cost of buying it until the firm is 18 months in.

Buy

Adopting an existing platform pushes the engineering off the consultancy and lets the firm focus on testing and client relationships. The decision becomes a brand decision, not a build decision. Branded portals on a subdomain the consultancy controls preserve the relationship with the client; generic white-label portals do not.

Default to PDFs

Stable, comfortable, and increasingly uncompetitive. The cost of this choice is paid in invisible places: slower remediation, more email overhead, weaker retention against firms that have moved, and downward pricing pressure as portal-based delivery becomes the implicit baseline.

SecPortal's positioning falls in the buy lane. Engagement management, findings management, AI-assisted reporting, and a branded client portal on a subdomain the consultancy controls share one workspace, so the firm does not pay the integration tax of stitching point tools together.9,10,11,12 The argument here is architectural, not promotional: a delivery layer that already exists is cheaper than one a services business has to keep maintaining.

Auditors still want a final report

One mistake in modernising delivery is treating the portal as a replacement for the report. It is not. PCI DSS requirement 11.4 expects documented penetration testing with retained results.2 ISO 27001 Annex A controls expect evidence of vulnerability identification and treatment. SOC 2 Trust Services Criteria require documented testing and remediation. Auditors ask for a static, signed final report because point-in-time evidence is what their methodology assumes.

The strongest delivery model produces both. The portal handles live findings, retests, remediation, and ongoing communication. The final report captures the engagement as a defensible artefact for audit. AI assistance can shorten report turnaround without removing the tester from the loop, and consistent CVSS scoring means the report and the portal agree on severity. See the security assessment report template and how to write a pentest report for the structural anatomy of a report that survives an auditor and an engineer.

What buyers should actually ask in procurement

The questions below cut through the marketing layer of most pentest proposals and surface the real shape of the delivery model. They are designed to be answered in writing, not in a sales call.

  • Critical-finding SLA: what is the maximum elapsed time from discovery of a critical to written notification of a named contact, and what is the medium for that notification?
  • Retest policy: how many retests are included per finding and per engagement, within what window, and what triggers a billable retest?
  • Portal access: is there a live findings portal, and does it persist after the engagement closes? Who controls the subdomain and the brand on the portal?
  • Methodology mapping: is the engagement mapped to a public methodology (NIST SP 800-115, OWASP WSTG, PTES) and does the report cite which checks were performed?1,6
  • Severity scoring: are findings scored with CVSS 3.1 vector strings (not just colour labels), and is the vector visible in both the portal and the report?
  • Final report format: is a downloadable, signed PDF produced for every engagement, and does it satisfy your auditor's expectations?
  • Data handling: where are findings stored, are credentials and evidence encrypted at rest, and how is data deleted at end of contract?
  • Continuity: if the testing team changes between engagements, how is methodology and finding history preserved?

What consultancies should change first

For consultancies that have not yet moved beyond PDF delivery, three changes account for most of the commercial improvement:

  1. Adopt a delivery platform. Findings, reports, retests, and remediation status all live in one place. The branded portal is the layer the client sees, and it should run on a subdomain the firm controls.
  2. Codify the SLA. Put the critical-finding communication SLA in the engagement letter and in the portal's notification configuration. Make it a feature, not a promise.
  3. Restructure retests. Move from no-retest or one-retest pricing to unlimited retests within a defined window, costed into the headline price. Verified closure becomes the deliverable, and clients stop optimising around procurement.

The operational guides on scaling a security consultancy and managing multiple security engagements cover the day-to-day mechanics of running this model at volume. The persona pages for cybersecurity firms, security service providers, and MSSPs map the same delivery model to different organisational shapes.

Conclusion

The delivery gap is the part of pentesting that has lagged the rest of the discipline. Detection is mature. Methodology is documented in NIST SP 800-115, OWASP WSTG, and PTES. Severity scoring has CVSS. Frameworks from NIST CSF 2.0 to PCI DSS to ISO 27001 already expect vulnerability identification, prioritisation, and remediation as a connected workflow.1,2,6,7 What has not kept pace is the layer that translates testing into outcomes the client can act on, audit, and renew.

Closing the gap is not a methodology question. It is a delivery question. SLAs that match the threat environment, retest policy that aligns with verified closure, and a portal that both consultancy and client can rely on as a single source of truth. The consultancies and platforms that get those three right will keep winning the next decade of work; the ones that do not will compete on price for shrinking PDF engagements.

Run pentests with portals, SLAs, and retests built in

SecPortal gives security consultancies engagement management, findings tracking with CVSS scoring, AI-assisted reports, and a branded client portal on a subdomain you control. Close the delivery gap without building a platform. See pricing or start free.

Sources

  1. NIST, SP 800-115: Technical Guide to Information Security Testing and Assessment, September 2008
  2. PCI Security Standards Council, PCI DSS v4.0.1 Requirement 11.4 (Penetration Testing)
  3. CISA, Known Exploited Vulnerabilities Catalog
  4. CISA, Stakeholder-Specific Vulnerability Categorization (SSVC)
  5. Verizon Business, 2025 Data Breach Investigations Report
  6. OWASP, Web Security Testing Guide (WSTG)
  7. NIST, Cybersecurity Framework (CSF) 2.0
  8. UK National Cyber Security Centre, Vulnerability Management Guidance
  9. SecPortal, Engagement Management Feature
  10. SecPortal, Findings & Vulnerability Management
  11. SecPortal, Branded Client Portal for Security Teams
  12. SecPortal, AI-Powered Security Reports