Pentest retesting
as a tracked deliverable, not an afterthought
Run retests as a structured part of the engagement. Pair every retest to the original finding, capture fix evidence inside the same record, log regressions when fixes drift, and produce a retest report stakeholders can audit. Stop retesting from spreadsheets.
No credit card required. Free plan available forever.
Pentest retesting as a tracked, billable deliverable
Retesting is the second-most consequential phase of a pentest engagement and the one most often run from a spreadsheet. The original report lands, criticals get fixed, the consultancy quietly retests over email, the client receives a paragraph in a follow-up PDF, and three months later nobody can answer the question every auditor asks: which findings were verified fixed, which are still open, and which regressed. The cause is usually the same. Retests are treated as an afterthought rather than a structured deliverable. This page covers what a retest workflow needs to do, how to scope and price it, and how to keep the verification record clean enough that close-out becomes a routine output rather than a scramble.
For the platform capability behaviour beneath this workflow (the FindingStatus lifecycle, separate resolved_at and verified_at timestamps, RBAC-gated transitions, and the activity log audit trail), see the retesting workflows feature page. For the practical workflow guide, read the retest vulnerabilities playbook. For the wider close-out model, see the remediation tracking workflow. For severity continuity across retests, the severity calibration research explains how partial fixes and regressions move the calibrated rating without breaking the original trail. For the durability metric retest discipline produces, the vulnerability reopen rate research covers how closures fail at 30, 90, and 180 day lookback windows and why pairing reopens to the original finding identifier matters for honest reporting.
What a retest workflow needs to do
A retest is verification, not a new assessment
A retest verifies the original finding has been remediated. It is narrower than a fresh assessment: the tester reproduces the attack path, checks the proposed fix, and looks for partial fixes, regressions, and related variants. Treating retests as mini-pentests inflates scope and dilutes the deliverable.
Retests pair to the original finding
A retest result attached as a separate record loses the original scope, severity, and evidence. Pair the retest to the original finding so the timeline shows attack path, fix description, retest evidence, and final outcome in a single record. No reconstruction from PDFs months later.
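The pairing model above can be sketched as a minimal data structure. This is an illustrative shape, not SecPortal's actual schema: retest entries live as a list on the original finding, so one record holds the whole timeline.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RetestEntry:
    """One verification pass, recorded against the original finding."""
    performed_at: datetime
    status: str            # "verified_fixed" | "partially_fixed" | "not_fixed" | "regressed"
    evidence: list[str]    # references to request/response captures, diffs, screenshots

@dataclass
class Finding:
    finding_id: str
    title: str
    severity: str
    original_evidence: list[str] = field(default_factory=list)
    retests: list[RetestEntry] = field(default_factory=list)

    def timeline(self) -> list[str]:
        """One auditable timeline: the original report, then every retest outcome."""
        events = [f"{self.finding_id} reported ({self.severity}): {self.title}"]
        events += [f"retest {r.performed_at:%Y-%m-%d}: {r.status}" for r in self.retests]
        return events
```

Because the retest is a child of the finding rather than a sibling record, the timeline is a property of the record itself and never needs reconstructing from separate documents.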
Verification status is structured, not narrative
Each retested finding lands in one of four states: verified fixed, partially fixed, not fixed, or regressed. Free-text retest notes hide partial fixes inside paragraphs. A structured status drives reporting, billing, and the client conversation that follows.
Evidence belongs on the finding record
Retest evidence (proof of fix, captured request/response, configuration diff, screenshot) lives next to the original finding rather than in a separate evidence folder. The audit trail is then automatic and the close-out record is self-documenting.
Regressions reopen the original record
When a previously closed finding fails retest, the original record reopens with regression notes. Creating a new finding instead breaks severity continuity and hides the fact that a fix did not hold. The history of close, regression, and re-close belongs on one timeline.
Retests are billed as deliverables
Retest cycles are usually a separate line item, but most teams have no clean record of which findings the retest covered or how long it took. Tracking retests inside the engagement makes the deliverable self-documenting and gives every retest quote a defensible basis in real numbers.
Verification status: the four outcomes every retest produces
Every retested finding lands in one of four states. Burying the outcome in free-text notes makes partial fixes easy to miss and regression patterns invisible; a structured status is what drives the report, the billing line, and the client conversation that follows.
| Status | What it means and what happens next |
|---|---|
| Verified fixed | The original attack path no longer reproduces, the fix matches the remediation guidance from the report, and no related variants have been introduced. The finding closes with a verification timestamp and the SLA performance is recorded. |
| Partially fixed | The headline issue is mitigated but a sub-case, an edge condition, or a related variant still reproduces. The finding stays open with the residual scope captured. Severity may drop but the record stays attached so the next retest tests the residual rather than the whole chain. |
| Not fixed | The original attack path still reproduces. The finding stays in awaiting retest or moves back to open and the SLA clock resumes. The retest evidence remains on the record so the next iteration starts with a clean trail. |
| Regressed | A previously closed finding now reproduces. The original record reopens with regression notes and the original close-out timestamp is preserved alongside the new attempt. Regressions are the strongest signal that a code change reintroduced the issue, not that the original fix was wrong. |
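The four outcomes and their follow-ups can be expressed as a closed set rather than free text. A minimal sketch (the enum values and action strings are illustrative, not SecPortal's API):

```python
from enum import Enum

class RetestStatus(Enum):
    VERIFIED_FIXED = "verified_fixed"
    PARTIALLY_FIXED = "partially_fixed"
    NOT_FIXED = "not_fixed"
    REGRESSED = "regressed"

def next_step(status: RetestStatus) -> str:
    """Map each verification outcome to the follow-up described in the table."""
    return {
        RetestStatus.VERIFIED_FIXED: "close finding and record the verification timestamp",
        RetestStatus.PARTIALLY_FIXED: "keep open, capture residual scope, recalibrate severity",
        RetestStatus.NOT_FIXED: "move back to open and resume the SLA clock",
        RetestStatus.REGRESSED: "reopen the original record, preserving the close-out timestamp",
    }[status]
```

Because the status is an enum rather than a paragraph, every downstream consumer (report, billing line, dashboard) can dispatch on it without parsing prose.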
What belongs in the retest scope of work
Retest scope is usually negotiated at the kickoff call and forgotten by the time the retest fires. Putting six items in the engagement record before the original report is delivered prevents the post-delivery scramble where the consultancy and the client rediscover that they had different expectations. When the buyer is ready to trigger the retest, the retest request form template captures the authorisation basis, the findings in scope, and the verification method per finding so the verification work runs on the same record the engagement opened on. A clean retest scope also feeds the broader pentest scope of work template so the engagement contract stays a single artefact rather than a stack of side agreements.
Findings in retest scope
A list of finding IDs from the original report. Only items on this list will be retested. Findings not in scope can be retested as out-of-scope work for an additional fee or rolled into the next assessment cycle.
Retest window
A defined date range during which the retest can be requested (commonly 30 to 90 days from report delivery). After the window closes, retest moves from included to chargeable.
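The window rule is simple enough to state as a date check. A sketch under the common terms above (the 90-day default is the upper end of the usual range, not a recommendation):

```python
from datetime import date, timedelta

def retest_billing(report_delivered: date, requested: date, window_days: int = 90) -> str:
    """Inside the agreed window the retest is included in the engagement fee;
    once the window closes, the same request becomes chargeable follow-on work."""
    if requested < report_delivered:
        raise ValueError("retest requested before report delivery")
    deadline = report_delivered + timedelta(days=window_days)
    return "included" if requested <= deadline else "chargeable"
```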
Verification method per finding
How each finding will be retested: targeted exploit attempt, configuration verification, code review of the fix, or scanner re-run. Method choice drives cost and time, so it belongs in the contract, not the kickoff call.
Number of retest passes included
Most contracts include one retest per finding. A second pass after a failed first retest is either chargeable or rolled into a follow-on engagement. Be explicit so failed retests do not erode margin.
Regression handling
What happens if a previously closed finding fails the retest. The default is the finding reopens, the SLA clock resumes, and a regression note is added. A retest plus regression budget should be agreed upfront for ongoing engagements.
Deliverable format
Retest report content: per-finding verification status, evidence attachments, executive summary of close-out posture, and updated remediation roadmap for items that did not pass. Delivered through the branded portal alongside the original report.
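The six scope items above amount to one structured record agreed before report delivery. A sketch of that record (field names and defaults are illustrative, not the platform's schema):

```python
from dataclasses import dataclass

@dataclass
class RetestScope:
    """The six retest scope items, written into the engagement record at kickoff."""
    findings_in_scope: list[str]              # finding IDs from the original report
    window_days: int                          # commonly 30 to 90 days from delivery
    verification_method: dict[str, str]       # finding ID -> method for that finding
    passes_included: int = 1                  # additional passes are chargeable
    regression_handling: str = "reopen original finding and resume SLA clock"
    deliverable_format: str = "verification update delivered through the portal"
```

Capturing the scope this way makes the post-delivery conversation a lookup rather than a negotiation.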
Pricing models that hold up under failed retests
Four pricing models cover most engagements. Pick the model that fits the cadence and the finding volume; mixing models inside one contract is where margin leaks. For a deeper treatment of pentest economics, see the research on pentest pricing models and the broader analysis of the pentest delivery gap.
Included retest
A single retest of all findings included in the original engagement fee, available within an agreed window (commonly 30 to 90 days). Most common for fixed-fee pentest engagements. Effective when remediation lead times are short and the buyer wants a clean close-out.
Day-rate retest
Retest billed at the same day rate as the original engagement, capped at a number of days. Effective when remediation timelines are long, scope drifts, or the original report had a high finding count where the retest cost is non-trivial.
Per-finding retest
A flat fee per finding retested. Predictable for both sides, gives the buyer a way to retest selectively, and incentivises the consultancy to be precise about original findings rather than padding the report. Effective for VDP and continuous engagement models.
Subscription retest
Retests are an included part of a recurring engagement (continuous pentest, PTaaS) on an unlimited or rate-limited basis. Effective for continuous programmes where the boundary between retest and next-cycle finding is intentionally blurred.
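The four models can be compared with a simple estimator. This is an illustrative sketch for comparing quotes, with all figures supplied by the caller; it is not a pricing recommendation:

```python
def retest_cost(model: str, finding_count: int = 0, days: float = 0,
                day_rate: float = 0.0, per_finding_fee: float = 0.0) -> float:
    """Estimate the retest line item under each pricing model."""
    if model in ("included", "subscription"):
        return 0.0                            # covered by the original or recurring fee
    if model == "day_rate":
        return days * day_rate                # capped day count agreed in the contract
    if model == "per_finding":
        return finding_count * per_finding_fee
    raise ValueError(f"unknown pricing model: {model}")
```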
For pentest firms and security consultancies
Retests are the part of the engagement where margin and reputation collide. A retest run from a spreadsheet eats hours and produces an artefact the buyer cannot easily compare across vendors. A retest run inside the engagement record produces a deliverable that defends itself and feeds the next renewal conversation. For cybersecurity firms and security service providers, a retest workflow paired to the original finding produces five practical wins:
- Defensible pricing for retest packages from real engagement data rather than seat-of-the-pants estimates.
- A retest report that defends itself in any room because the verification status, evidence, and outcome live on the same record as the original finding.
- A close-out artefact that turns a retest from a chasing exercise into a billable deliverable with a clean handover.
- A regression history that surfaces patterns across engagements so the consultancy can recommend control changes rather than just fixing the latest break.
- A renewal conversation grounded in evidence, not in a PDF that aged out the morning after delivery.
For internal security teams and vCISOs
Internal teams and vCISO programmes are usually the buyer rather than the deliverer of retests, but the same workflow problem applies in mirror image. The vendor sends a retest update by email, engineering claims a fix is in, and a quarter later the auditor asks for the verification evidence. Pairing the vendor retest to the original finding inside a workspace the internal team controls keeps the audit trail complete without depending on the vendor to maintain it.
- Vendor accountability without a spreadsheet. Every retest pairs to the original finding so the audit trail is complete without manual reconciliation.
- A clear close-out posture across cycles, not a fresh assessment every time someone asks whether last quarter is done.
- Audit evidence that auditors actually accept: original finding, fix description, retest evidence, verification status, closure timestamp.
- A regression view that flags fixes that did not hold so engineering can change controls rather than re-fix the same issue.
- A retest scope that can be agreed at procurement rather than negotiated mid-engagement when timelines slip.
Procurement questions to ask about retests before signing
Retest scope is the cleanest signal a buyer can use to compare consultancies before signing. Asking the seven questions below in writing usually surfaces whether retesting is treated as a structured deliverable or an afterthought.
- How many retest passes are included per finding, and what triggers a chargeable additional pass?
- What is the retest window, and what happens to retest scope after the window closes?
- What verification method is used per finding (exploit replay, configuration check, code review, scanner re-run)?
- How are partial fixes and regressions classified, and how do they affect the SLA clock?
- Is the retest deliverable a separate report or a status update on the original report?
- How is retest evidence delivered: PDF appendix, portal record, or both?
- What is the cost model for retests outside the included scope (per-finding, day rate, subscription)?
How retesting connects to the rest of the platform
Retesting sits on top of findings management for severity continuity across the original and retest record, inside engagement management for scope and team assignment, and feeds AI reports with the verification data that makes the close-out artefact specific rather than generic. The retest delivers through the same branded client portal as the original report, so the buyer sees a single live engagement instead of two static PDFs to reconcile. For the broader continuous testing model where retest cadence is intentional rather than incidental, see the continuous penetration testing use case and the pentest project management workflow.
Retesting is one of those workflows that looks like a back-office detail and turns out to be the difference between a finding closed on paper and a finding closed in production. The discipline is operationally cheap once the verification status is structured, the retest pairs to the original record, and the deliverable lands in the same place as the original report. The result is a close-out artefact that defends itself in any room and a retest line that can be priced from real numbers rather than estimates.
Frequently asked questions about pentest retesting
What is a pentest retest, and how is it different from a re-engagement?
A pentest retest is a focused verification exercise that checks whether previously reported findings have been remediated. It is narrower than a re-engagement: the tester reproduces the original attack path, checks the proposed fix, and looks for partial fixes, regressions, and related variants. A re-engagement is a fresh assessment with new scope, new findings, and a new report. Retests are typically included in the original engagement fee or sold as a small follow-on; re-engagements are sold as new work.
How should a pentest firm price retests?
Four pricing models cover most engagements: an included retest within a fixed window, a day-rate retest capped at an agreed number of days, a per-finding flat fee, or a subscription retest as part of a continuous programme. Match the model to the engagement: fixed-fee pentests typically include one retest in the original price, continuous and PTaaS programmes treat retests as part of the subscription, and high-volume VDP work usually settles on per-finding flat fees. Make the model explicit in the statement of work so failed retests do not erode margin.
How long should a retest window be?
Thirty to ninety days from original report delivery is the common range. Shorter windows force quick remediation but leave little room for engineering teams that ship on slower cadences. Longer windows give the client room to remediate but stretch the consultancy commitment. The practical answer is to match the retest window to the buyer’s release cadence: 30 days for SaaS teams shipping weekly, 90 days for enterprise teams on quarterly release trains. Document the choice in the engagement record.
What happens when a retest finds the fix is only partially applied?
The finding stays open with a partially fixed status, the residual scope is captured on the original record, and severity may drop if the partial fix has materially reduced impact. Do not close the finding and open a new one; that breaks the calibration trail and hides the partial fix from the audit history. Pair the partial-fix retest to the original record so the next iteration tests the residual scope rather than the whole chain.
How are regressions handled when a previously closed finding reappears?
The original record reopens with a regression note, the original close-out timestamp is preserved alongside the new attempt, and the SLA clock resumes from the regression date. Regressions are a stronger signal than first-time findings because they show that a code or configuration change reintroduced an issue the team already knew about. A pattern of regressions across a programme often points to a control gap (no test coverage on the fix, no canary on the affected path) rather than a one-off mistake.
Should retests be delivered as a separate report or inside the original?
A retest report delivered separately from the original creates two artefacts the client has to reconcile. A retest delivered as a verification update on the original engagement keeps a single record per engagement with the original findings, the retest verification status per finding, and the updated remediation posture. SecPortal supports the second model: AI generates a retest report from the verification data and delivers it through the same branded portal as the original report so the client sees a single live engagement, not two static PDFs.
How does SecPortal support pentest retesting?
SecPortal pairs retests to original findings rather than treating them as new records, captures verification status (verified fixed, partially fixed, not fixed, regressed) per finding, retains evidence on the same record across the open and retest cycles, surfaces regression history when previously closed findings reappear, and produces AI-assisted retest reports from the live verification data. The retest deliverable lands in the same branded client portal as the original report, so close-out is part of the live engagement rather than a separate document.
How it works in SecPortal
A streamlined workflow from start to finish.
Agree the retest scope before kickoff
Capture the retest window, the findings in scope, the verification method per finding, and the contractual retest count inside the engagement record so both sides know what is included before the original report is delivered.
Open the retest against the original finding
A retest is opened from the original finding rather than created as a new record. Severity, CVSS vector, evidence, and remediation guidance carry over so the verification context is intact and nothing is reconstructed from memory.
Reproduce, verify, and capture evidence
Reproduce the original attack path, test the proposed fix, and check for partial fixes, regressions, and related variants. Attach request, response, screenshot, and proof-of-fix to the same record. Mark the verification status: verified fixed, partially fixed, not fixed, or regressed.
Pair the retest to the original record
The retest result lives next to the original finding so the timeline shows scope, original evidence, fix description, retest evidence, and final outcome on a single record. No drift between the report PDF and the platform state.
Deliver a retest report from real data
AI generates the retest report from the verification data: which findings were verified fixed, which remain open, and which regressed. Deliver through the branded client portal so the result becomes part of the live engagement record rather than a separate PDF.
Make every retest a tracked, billable deliverable
Pair retests to original findings, capture verification evidence in one record, and produce reports stakeholders can audit. Start free.
No credit card required. Free plan available forever.