Pentest quality assurance
A structured QA pass before any report ships
Run pentest QA on the live engagement record rather than on a Word draft passed around in chat. Reviewers comment against findings, severity calls are challenged with the evidence visible, remediation guidance is verified for developer-readiness, and engagement lead sign-off is captured with a timestamp on the engagement before anything reaches the client portal.
No credit card required. Free plan available forever.
Gate every report behind a structured QA pass
A pentest report ships only as well as the QA pass that gated it. Done well, an independent peer reviewer walks every finding against documented criteria, comments live on the affected finding, severity calls are calibrated against the CVSS 3.1 vector and the evidence, remediation guidance is rewritten until it is developer-ready, and the engagement lead signs off with a timestamp on the engagement record. Done badly, QA is a chat thread that vanishes the day the report ships, severity calls are guesses, and the firm cannot answer an auditor asking who reviewed the deliverable.
SecPortal models pentest quality assurance as a first-class workflow on the engagement record. Reviewer comments are captured inline on findings, severity rationale lives next to the finding it changed, sign-off is named and timestamped against the approved report version, and the QA history stays attached for CREST, ISO 27001, and client procurement audits. The deliverable that reaches the branded client portal is the version the engagement lead approved, not the version that the deadline forced.
Six dimensions a peer reviewer checks
A complete QA pass is more than a typo sweep. The reviewer walks each finding through six dimensions that decide whether a deliverable is defensible. A gap on any dimension is silent at QA and loud at delivery, so the discipline is to catch it before the report leaves the engagement record.
Severity calibration
Every finding is checked against its CVSS 3.1 vector and the environment context. The reviewer challenges the call when the attack vector, attack complexity, privileges required, scope, and impact metrics do not line up with the evidence on the finding. A draft severity inflated by tooling defaults or a default risk matrix gets corrected before the executive summary inherits the wrong headline number.
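To make the band check concrete, here is a minimal sketch that maps a CVSS 3.1 base score to its qualitative rating (the bands come from the FIRST.org specification; mapping the 0.0 "None" band to Informational follows the severity labels used in this workflow) and flags findings whose drafted severity disagrees. The field names (`cvss_score`, `severity`) are illustrative, not SecPortal's schema.

```python
# CVSS 3.1 qualitative severity bands (FIRST.org specification).
# The 0.0 "None" band is labeled Informational to match report severities.
CVSS_BANDS = [
    (0.0, 0.0, "Informational"),
    (0.1, 3.9, "Low"),
    (4.0, 6.9, "Medium"),
    (7.0, 8.9, "High"),
    (9.0, 10.0, "Critical"),
]

def expected_severity(score: float) -> str:
    """Map a CVSS 3.1 base score to its qualitative rating."""
    for low, high, label in CVSS_BANDS:
        if low <= score <= high:
            return label
    raise ValueError(f"CVSS score out of range: {score}")

def flag_miscalibrated(findings: list[dict]) -> list[dict]:
    """Return findings whose drafted severity disagrees with the score band."""
    return [
        f for f in findings
        if f["severity"] != expected_severity(f["cvss_score"])
    ]

# Example: a scanner default labels a 5.3 as High; the check flags it.
drafts = [
    {"title": "Reflected XSS", "cvss_score": 6.1, "severity": "Medium"},
    {"title": "Verbose error page", "cvss_score": 5.3, "severity": "High"},
]
for f in flag_miscalibrated(drafts):
    print(f"{f['title']}: labeled {f['severity']}, "
          f"score {f['cvss_score']} implies {expected_severity(f['cvss_score'])}")
```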
Evidence completeness
Reproduction steps are concrete, payloads are sanitised, screenshots show the relevant state rather than the whole desktop, and request and response captures are attached on the finding. The reviewer reads each entry as a developer would, and asks for a second screenshot or a curl reproduction wherever the evidence stops short of letting an engineer reproduce the issue without messaging the tester.
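The completeness part of that read can be made mechanical before the reviewer's human pass. A minimal sketch, assuming hypothetical evidence field names on the finding record:

```python
# Evidence a finding needs before an outsider can reproduce it.
REQUIRED_EVIDENCE = ("reproduction_steps", "request_capture",
                     "response_capture", "screenshots")

def missing_evidence(finding: dict) -> list[str]:
    """List the evidence fields a finding is missing or has left empty."""
    return [field for field in REQUIRED_EVIDENCE if not finding.get(field)]

finding = {
    "title": "IDOR on /api/orders/{id}",
    "reproduction_steps": "1. Log in as user A ...",
    "request_capture": "GET /api/orders/1042 HTTP/1.1 ...",
    "response_capture": "",   # empty: the reviewer asks for the capture
    "screenshots": [],
}
gaps = missing_evidence(finding)
if gaps:
    print(f"Evidence gaps before QA can pass: {', '.join(gaps)}")
```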
Remediation specificity
Remediation guidance is verified for developer-readiness. Generic copy ("apply input validation", "follow least privilege") is rewritten to point at the specific configuration, library, or code path that needs to change. The 300+ remediation template library is the starting point, not the ceiling. The reviewer trims, sharpens, and adds the environment-specific note that turns guidance into an action.
Finding deduplication
The reviewer checks for finding-level duplication that survived the import or the tester pass. The same TLS misconfiguration filed twice under two titles, or two findings against the same parameter under different CWEs, is consolidated before delivery. Duplicate-prone patterns get cross-checked against the workspace history so the deliverable counts each issue once.
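In the simplest case, duplicate detection reduces to grouping findings on a normalized key. The (asset, parameter, CWE) tuple below is one plausible key for the patterns named above, not SecPortal's actual matching logic:

```python
from collections import defaultdict

def dedup_key(finding: dict) -> tuple:
    """Normalize the fields two duplicate findings are most likely to share."""
    return (
        finding["asset"].lower().strip(),
        (finding.get("parameter") or "").lower().strip(),
        finding["cwe"],
    )

def duplicate_groups(findings: list[dict]) -> list[list[dict]]:
    """Group findings that collide on the key; groups of 2+ need a review."""
    groups = defaultdict(list)
    for f in findings:
        groups[dedup_key(f)].append(f)
    return [g for g in groups.values() if len(g) > 1]

catalogue = [
    {"title": "TLS 1.0 enabled", "asset": "api.example.com", "cwe": "CWE-327"},
    {"title": "Weak TLS configuration", "asset": "API.example.com", "cwe": "CWE-327"},
    {"title": "SQLi in search", "asset": "app.example.com",
     "parameter": "q", "cwe": "CWE-89"},
]
for group in duplicate_groups(catalogue):
    titles = ", ".join(f["title"] for f in group)
    print(f"Possible duplicates to consolidate: {titles}")
```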
CWE and OWASP alignment
Each finding is checked against its CWE ID and, where applicable, its OWASP Top 10 or OWASP ASVS category. Mismatched taxonomy is corrected so downstream filters, remediation roadmaps, and retest scoping operate on a consistent vocabulary. Compliance evidence requests against ISO 27001, SOC 2, PCI DSS, or CREST audits expect this alignment to hold.
Scope and methodology fit
The reviewer confirms each finding sits inside the agreed scope, references the correct asset, and is consistent with the methodology section of the report. Out-of-scope findings either get repositioned (informational, accepted-risk, or follow-up advisory) or removed. Methodology drift between what was agreed and what got tested is flagged for the engagement lead before sign-off.
Where pentest QA usually breaks down
Six failure modes recur whenever QA is treated as a courtesy review rather than a gate. Each one is invisible at sign-off and visible at the next audit, the next regression, or the next client procurement review.
QA happens in a parallel chat thread
Reviewer comments live in Slack, Teams, or email rather than on the finding. Three weeks later, nobody can answer why severity was downgraded, why a finding was dropped, or which version of the report the engagement lead actually approved. The audit trail starts at the published PDF rather than at the QA conversation that shaped it.
No named reviewer recorded
The engagement closes with the report shipped, but the engagement record does not capture who reviewed it. CREST, ISO 27001, and client procurement audits all expect a named technical reviewer separate from the test author. Without it, the firm cannot demonstrate independent QA when an auditor asks.
Severity is calibrated by reviewer guess
A reviewer rebases severity on intuition rather than working through the CVSS 3.1 vector against the evidence. One guess replaces another, and the next reviewer disagrees again. Severity churn between reviewers signals that the team is calibrating against vibes rather than against the documented metrics.
Reviewer signs off on a stale draft
The reviewer signs off on Tuesday, the tester edits three findings on Wednesday, and the report ships on Thursday with the original sign-off attached. The approval points at the wrong version. Tying sign-off to the engagement state at approval time, with a state freeze on findings until publication, prevents this drift.
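One way to implement that tie is to hash the findings at approval time and refuse publication when the current state no longer matches. A sketch of the idea, not SecPortal's implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

def engagement_digest(findings: list[dict]) -> str:
    """Deterministic hash of the findings as they stand right now."""
    canonical = json.dumps(findings, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def record_sign_off(findings: list[dict], approver: str) -> dict:
    """Capture who approved, when, and exactly what state they approved."""
    return {
        "approver": approver,
        "approved_at": datetime.now(timezone.utc).isoformat(),
        "digest": engagement_digest(findings),
    }

def safe_to_publish(findings: list[dict], sign_off: dict) -> bool:
    """Publication gate: the findings must match the approved state."""
    return engagement_digest(findings) == sign_off["digest"]

findings = [{"id": "F-12", "severity": "High", "title": "SSRF in webhook"}]
approval = record_sign_off(findings, approver="engagement.lead@example.com")

findings[0]["severity"] = "Medium"               # Wednesday edit after Tuesday sign-off
assert not safe_to_publish(findings, approval)   # Thursday publish is blocked
```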
QA is a single linear pass
When QA is one pass through the document with no loop back to the tester, comments either get applied silently (the reviewer never sees the resolution) or get ignored (the comment is closed because the deadline arrived). A multi-pass loop with explicit comment resolution status keeps both sides accountable.
Evidence-thin findings ship anyway
A finding without a reproduction step, without a request and response capture, or without a clean screenshot still ships because deadline pressure overruled the reviewer's flag. Three months later the client cannot reproduce the issue, the regression is missed, and the firm is on the hook for a finding it cannot defend.
Roles in a pentest QA review
QA is a multi-role workflow with explicit separation of duties. Each role has a clear responsibility on the engagement record and the audit trail captures who held which role and when.
Test author
The tester who logged the findings during the engagement. Owns reproduction evidence, severity drafts, and the first pass at remediation guidance. Resolves QA comments by editing the finding directly, attaches additional evidence the reviewer asks for, and pushes back with a rationale on the finding when severity calls are contested.
Peer reviewer
A senior tester who did not run the engagement. Walks every finding for the six review dimensions, leaves comments inline, and either resolves or escalates each one with the engagement lead. Independent of the test author so the firm can demonstrate separation of duties to clients and auditors.
Engagement lead
Owns the deliverable, signs off when the QA bar is met, and records the approval against the engagement. Adjudicates open comments where the test author and the peer reviewer disagree, captures the decision rationale on the affected finding, and confirms the executive summary headline reconciles with the technical findings count.
Compliance reviewer
For regulated engagements (CREST, financial sector TLPT, public-sector contracts), an additional compliance reviewer confirms methodology, scope, and evidence handling line up with the framework. SecPortal records this review as a separate role on the engagement so the audit trail distinguishes technical QA from compliance QA.
QA criteria each finding has to clear
The QA criteria are documented and consistent across reviewers, so a finding that clears with one reviewer would clear with any other. That consistency is the difference between a QA process and a series of opinions.
| Criterion | What good looks like |
|---|---|
| Severity matches the CVSS 3.1 vector | The headline severity (Critical, High, Medium, Low, Informational) reflects the calculated CVSS score range and the environment-aware adjustments applied. Reviewers walk the vector metric by metric against the evidence on the finding rather than agreeing with the auto-imported number from a scanner default. |
| Evidence supports reproduction by an outsider | Each finding contains enough detail (steps, payload, request, response, screenshot) for someone who did not run the test to reproduce the issue. The reviewer reads as a stranger would and asks for the missing step where the chain breaks. |
| Remediation is specific to the environment | Remediation guidance points at the configuration, library version, code path, framework feature, or rule the team needs to change. Generic copy is rewritten so the developer who picks up the finding does not have to research the actual fix. |
| CWE and taxonomy are consistent | CWE IDs are present and accurate, OWASP Top 10 and OWASP ASVS categories are aligned where applicable, and the finding is filed under the right vulnerability category. Mismatched taxonomy is the source of most filtering and reporting drift later. |
| Scope and methodology fit | The finding sits inside the agreed scope, references the correct asset, and is consistent with the methodology section. Out-of-scope or methodology-drift findings are repositioned or dropped before they reach the client. |
| Executive summary reconciles with the technical findings | The headline counts in the executive summary (findings by severity, top three issues, risk posture statement) reconcile with the technical findings list. Drift between the executive summary and the body is the most common audit catch and the most preventable one. |
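The reconciliation in the last row is the easiest criterion to automate before the human pass. A minimal sketch, assuming the executive summary carries per-severity counts as plain fields:

```python
from collections import Counter

def reconcile(summary_counts: dict, findings: list[dict]) -> dict:
    """Diff the executive summary's per-severity counts against the body."""
    actual = Counter(f["severity"] for f in findings)
    severities = set(summary_counts) | set(actual)
    return {
        sev: (summary_counts.get(sev, 0), actual.get(sev, 0))
        for sev in severities
        if summary_counts.get(sev, 0) != actual.get(sev, 0)
    }

summary = {"Critical": 1, "High": 3, "Medium": 4}
findings = [
    {"id": "F-01", "severity": "Critical"},
    {"id": "F-02", "severity": "High"},
    {"id": "F-03", "severity": "High"},     # summary claims 3 Highs, body has 2
    {"id": "F-04", "severity": "Medium"},
    {"id": "F-05", "severity": "Medium"},
    {"id": "F-06", "severity": "Medium"},
    {"id": "F-07", "severity": "Medium"},
]
for sev, (claimed, actual) in reconcile(summary, findings).items():
    print(f"{sev}: summary says {claimed}, findings list has {actual}")
```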
Reviewer checklist before sign-off
Before the engagement lead records sign-off, the reviewer runs through a short checklist. Each item takes minutes; missing any one of them is the source of the failure modes above.
- Reviewer is named on the engagement and is not the same person who authored the test.
- QA review opens against the engagement, not against an exported PDF.
- Each finding is checked for CVSS 3.1 vector accuracy, evidence completeness, reproduction clarity, remediation specificity, and CWE alignment.
- Reviewer comments land on the affected finding, not in a side channel, with severity-level tags where the comment changes the call.
- Severity adjustments carry the rationale on the finding so the audit trail explains the change without the original reviewer being available.
- Generic remediation copy is rewritten to reference the specific configuration, library, package, or code path that needs to change.
- Duplicate findings are consolidated; near-duplicate findings get an explicit decision on the record.
- Out-of-scope findings are either repositioned (informational, accepted-risk, advisory) or removed, with the rationale captured.
- Executive summary numbers are reconciled against the technical findings count by severity before sign-off.
- Engagement lead sign-off is recorded against the engagement at the version of the report at approval time.
- Findings are state-frozen between sign-off and publication so no edits land on an approved deliverable.
How pentest QA looks in SecPortal
QA is one workflow stitched into three feature surfaces: the engagement record, findings management, and team management with role-based access. Reviewer activity, sign-off, and approved report version all live on the same engagement, so the QA pass and the audit trail share a single source of truth.
Open the QA review
Move the engagement into an In Review state, name the reviewer using team management with role-based access, and pull in the QA criteria template. The state change is the gate, recorded on the audit trail.
Comment on findings
Reviewer comments live inline on the affected finding through findings management. Severity adjustments, evidence requests, and remediation rewrites are captured next to the data they relate to rather than in a parallel chat thread that drifts.
Sign off and publish
The engagement lead records sign-off against the engagement at the approved report version. Findings are state-frozen, then the report publishes through the branded client portal. The QA history stays attached for the audit trail.
Where QA sits across the engagement lifecycle
QA is the gate between drafting and delivery. It composes with the rest of the engagement lifecycle on the same record so the work that goes in once does not have to be redone at every stage.
Upstream and downstream
QA takes the output of scanner result triage and pentest evidence management once the test is closed, and feeds the signed deliverable into pentest report delivery and retesting. When a delivered finding gets contested, pentest finding dispute resolution picks up the same record and runs an independent re-evaluation rather than letting the pushback live in email.
Project context
QA is one phase inside pentest project management. The reviewer reads each finding against the methodology categories, schedule, and entry and exit criteria captured in the engagement test plan, so methodology drift and coverage gaps surface against the same plan the team was scoped on. The engagement lead schedules QA against the delivery deadline, names the reviewer, and tracks comment-resolution progress alongside scope, evidence, and findings on the same record.
Pair the workflow with the long-form guides
QA is operational; the surrounding guides explain the trade-offs that show up at review time. Pair this workflow with the writeup on how to write a pentest report for the writing discipline the reviewer is checking against, the pentest executive summary guide for the reconciliation pass between executive and technical content, severity calibration research for the data behind the calibration step, CVSS scoring explained for the vector vocabulary, and the finding triage during a pentest guide for the upstream workflow that produces the catalogue that QA reviews.
Buyer and operator pairing
A structured QA pass is the workflow pentest firms, security consultants, and internal security teams run when the deliverable has to clear an independent reviewer before it reaches a client, a regulator, or an auditor. The framework references that mandate this gate include CREST for accredited firms, ISO 27001 for segregation of duties, and PTES for methodology consistency.
What good pentest QA feels like
Defensible deliverable
Severity calls are documented against the CVSS vector, evidence supports outsider reproduction, remediation is developer-ready, and the executive summary reconciles with the technical findings. When a client asks why a finding was rated High, the answer is on the finding rather than in someone's memory.
Audit-ready trail
Named reviewer, comment history, severity rationale, sign-off timestamp, and approved report version all live on the engagement record. CREST, ISO 27001, and client procurement audit answers come from the record rather than from a reconstruction.
Pentest quality assurance is the workflow that decides whether a deliverable is the version the engagement lead approved or the version that the deadline forced. Get it right and the report and the audit trail line up; get it wrong and the firm ships a report nobody can defend three months later. The goal of this workflow is to make the structured QA pass the path of least resistance for any pentest team that has to gate deliverables behind a named reviewer.
Frequently asked questions about pentest quality assurance
What is pentest quality assurance?
Pentest quality assurance is the structured review pass that gates whether a finished penetration test report is good enough to ship. A peer reviewer who did not run the test walks every finding for severity calibration, evidence completeness, remediation specificity, deduplication, taxonomy alignment, and scope fit. The engagement lead signs off on the deliverable when the bar is met. SecPortal models pentest QA as a workflow on the engagement record so reviewer comments live next to the findings, sign-off is timestamped against a named approver, and the audit trail captures who approved what version of the report.
How is pentest QA different from finding triage?
Triage happens during the test, when findings are first logged. The tester captures evidence, drafts severity, and writes initial remediation guidance. QA happens after the test is closed, when an independent peer reviewer walks the finished catalogue against the QA criteria. Triage decides what gets logged; QA decides what gets shipped. Both workflows use the same engagement record but with different reviewers, different criteria, and different gates.
Who should run pentest QA?
A senior peer reviewer who did not author the test, plus an engagement lead for sign-off. The peer reviewer is independent of the test author so the firm can demonstrate separation of duties to clients, procurement reviewers, and auditors. CREST, ISO 27001, and most enterprise procurement reviews expect this independence. For regulated engagements (financial sector TLPT, public sector contracts), an additional compliance reviewer confirms methodology, scope, and evidence handling against the framework.
What does a pentest QA reviewer check?
Six dimensions: severity calibration against the CVSS 3.1 vector and the environment context, evidence completeness for outsider reproduction, remediation specificity for developer-readiness, finding deduplication across the catalogue, CWE and OWASP taxonomy alignment, and scope and methodology fit. The reviewer reads each finding as a stranger would, asks for missing evidence on the finding directly, and either resolves or escalates each comment with the engagement lead before sign-off.
How should QA comments be tracked?
On the affected finding, not in a side channel. SecPortal captures reviewer comments inline on the finding so the conversation lives next to the data. Comments carry a status (open, resolved, accepted) and a rationale field for severity adjustments and accepted-risk calls. The QA history stays attached to the engagement record so an auditor, client, or insurer can reconstruct exactly how the deliverable was reviewed without anyone digging through chat history.
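A comment record of the kind described needs only a handful of fields. The shape below is illustrative, not SecPortal's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

@dataclass
class QAComment:
    """One reviewer comment, attached to the finding it concerns."""
    finding_id: str
    author: str
    body: str
    status: Literal["open", "resolved", "accepted"] = "open"
    rationale: str = ""          # required for severity changes and accepted-risk calls
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

comment = QAComment(
    finding_id="F-07",
    author="peer.reviewer@example.com",
    body="AV:N looks wrong; the endpoint is only reachable over the VPN.",
)
comment.status = "resolved"
comment.rationale = "Vector corrected to AV:A; severity drops High to Medium."
```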
What does sign-off look like?
The engagement lead reviews the QA history, confirms every reviewer comment is either resolved or explicitly accepted with rationale, and records sign-off against the engagement. The approval is timestamped, names the approver, and ties to the version of the report at sign-off. Findings are state-frozen between sign-off and publication so no edits land on an approved deliverable. When the report publishes through the branded client portal, the audit trail shows exactly who approved what state of the report.
How does pentest QA fit with the rest of the engagement lifecycle?
QA sits between report drafting and delivery. The test produces findings on the engagement record, the AI report generation drafts the executive summary, technical report, and remediation roadmap, the QA pass gates the draft against the QA criteria, sign-off is recorded, and the report publishes through the branded client portal. Retesting opens from the original findings after delivery and the QA history stays attached so the next reviewer cycle starts with the prior context intact.
How does QA support CREST and ISO 27001 audits?
CREST-accredited firms must demonstrate a documented QA process with a named technical reviewer separate from the test author. ISO 27001 controls around segregation of duties, documented procedures, and audit trails apply to the same workflow. SecPortal records the named reviewer, the comment history, the sign-off timestamp, and the approved report version against the engagement so the firm can answer audit questions with the record rather than with a memory of how the engagement was reviewed.
How it works in SecPortal
A streamlined workflow from start to finish.
Open a QA review against the engagement
Once the test is closed and the draft report is generated from the live findings, the engagement lead opens a QA review. The reviewer is named, the QA criteria are pulled from the standard QA checklist, and the engagement state moves to In Review so the audit trail records the gate without a separate tracker.
Review every finding against the QA criteria
The peer reviewer walks each finding for severity calibration against the CVSS 3.1 vector, evidence completeness (request, response, screenshot, reproduction steps), remediation specificity, and CWE alignment. Comments land on the finding directly so the conversation lives next to the data rather than in a side channel that nobody can reconstruct later.
Resolve comments and rerun affected sections
The tester answers each comment on the finding it relates to. Severity adjustments, additional evidence captures, and remediation rewrites are applied in place. The QA pass closes only when every comment is either resolved or explicitly accepted by the engagement lead, with the rationale captured on the finding.
Engagement lead sign-off with named approver
The engagement lead reviews the QA history, confirms the bar is met, and records sign-off against the engagement. The approval is timestamped, named, and tied to the version of the report at sign-off so the audit trail captures exactly who approved what state of the deliverable.
Publish the signed report through the branded portal
After sign-off, the report is published through the branded client portal on your subdomain. The QA history stays attached to the engagement record, so when an auditor, client, or insurer asks how the deliverable was reviewed, the answer is on the record rather than in a memory of a Slack thread.
Gate every report behind a structured QA pass
Reviewer comments on findings, named sign-off on the engagement, audit trail intact. Start free.
No credit card required. Free plan available forever.