Pentest evidence management
an audit-ready record per finding
Capture, structure, and retain pentest evidence on the same record the finding lives on. Requests, responses, screenshots, payloads, and proof-of-concept notes stay attached to one engagement timeline. Retest evidence pairs to the original finding so closure is defensible, and the full record exports cleanly when an auditor or client asks for it.
No credit card required. Free plan available forever.
Pentest evidence management without the shared folder mess
Most pentest practices treat evidence as a side effect of testing rather than as the record itself. Screenshots end up in a folder named by client and date, request and response captures live in scratch files on a tester laptop, payloads sit in chat scrollback, and tool output ages out of an export folder a quarter later. The report ships, and a few weeks afterwards an auditor asks "which evidence supports finding F-1247?" and the team rebuilds the answer from memory. The evidence is real, the testing was rigorous, and the audit trail still cannot stand on its own.
SecPortal models evidence on the finding record itself rather than in a parallel filing system. Every finding carries the request and response, screenshots, payloads, tool output, and affected-asset details that prove it. Status changes, severity edits, and reviewer sign-off are timestamped automatically. Retest evidence pairs to the original finding so closure is one continuous record. The full engagement exports as a structured package the client, the engagement lead, and the auditor can all read without rebuilding the story. The result is evidence that survives delivery and an audit trail that is the workflow rather than a clean-up exercise.
Six pillars of pentest evidence management that holds up under audit
Evidence attached to the finding
Requests, responses, screenshots, payloads, and proof-of-concept notes attach to the finding they support, not to a shared folder. The evidence carries the same severity, CVSS 3.1 vector, and CWE mapping as the finding it backs, so the proof and the claim never drift apart.
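The "same CVSS 3.1 vector as the finding it backs" idea can be made concrete with a small parser. This is an illustrative sketch, not SecPortal's data model: it splits a standard CVSS 3.1 vector string into its metric components so evidence and claim can be checked for agreement.

```python
# Minimal sketch: parse a CVSS 3.1 vector string into its metric components.
# Illustrative only -- not SecPortal's actual API or schema.

def parse_cvss_vector(vector: str) -> dict:
    """Split 'CVSS:3.1/AV:N/AC:L/...' into {'AV': 'N', 'AC': 'L', ...}."""
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError(f"expected a CVSS:3.1 vector, got {prefix!r}")
    return dict(part.split(":", 1) for part in metrics.split("/"))

vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
metrics = parse_cvss_vector(vector)
# e.g. metrics["AV"] is the attack vector, metrics["C"] the confidentiality impact
```

A check like this is cheap to run at evidence-attach time, which is exactly when drift between the proof and the claim is easiest to catch.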
Audit trail by default
Every finding action (status change, evidence upload, severity edit, review comment) is timestamped and tied to the user who made it. The audit trail is a side effect of the workflow, not an extra step, so the record is complete without a tester having to remember to log entries by hand.
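The timestamped-action pattern described above amounts to an append-only event list per finding. The sketch below shows the shape of that pattern; the field names are hypothetical, not SecPortal's schema.

```python
# Illustrative sketch of an append-only audit trail: every finding action
# appends an immutable event naming the actor, the action, and a UTC
# timestamp. Field names are hypothetical, not SecPortal's schema.
from datetime import datetime, timezone

def record_event(trail: list, actor: str, action: str, detail: str) -> dict:
    event = {
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }
    trail.append(event)  # append-only: events are never edited in place
    return event

trail: list = []
record_event(trail, "j.doe", "status_change", "open -> in progress")
record_event(trail, "a.lead", "severity_edit", "high -> critical")
# trail now holds two timestamped events tied to the users who acted
```

The design point is that `record_event` is called by the workflow itself, not by the tester, which is what makes the trail complete by default.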
Encrypted credential storage
Authentication credentials needed to reproduce findings (cookies, bearer tokens, basic auth, form login) are stored encrypted at rest with AES-256-GCM and live next to the engagement they belong to. Reproduction is repeatable instead of being blocked by a missing password or a stale chat message.
Reviewer sign-off on evidence
Engagement leads and peer reviewers see the same evidence the tester captured, comment inline, and require additional artefacts before sign-off. The review history stays attached to the finding so the audit record shows who approved what and when, not which Slack thread had the latest version.
Retest evidence pairing
Verification evidence (proof of fix, residual risk notes, regression observations) attaches to the same record as the original finding. The closure timeline shows original evidence, fix description, retest evidence, and final outcome on one record so nothing has to be reconstructed from memory.
Audit-ready export
Export the engagement to a structured artefact that holds findings, evidence files, severity and CVSS detail, status timeline, owner assignments, and retest outcomes. The export answers SOC 2, ISO 27001, PCI DSS, NIS2, and CISA BOD 20-01 evidence questions without rebuilding the story from PDFs.
What evidence to capture per finding
Six artefact types cover most engagement evidence needs. The point is not to capture all six on every finding; it is to make the right combination obvious so testers stop improvising on each engagement and reviewers stop having to ask for the same missing artefact across half the findings.
| Evidence type | Why it belongs on the finding |
|---|---|
| HTTP request and response | Captures the exact traffic that triggered the finding so a reviewer or auditor can reproduce the issue. Store the request line, headers, body, response status, and response body verbatim and redact secrets before sharing. |
| Screenshot | Visual confirmation of impact: the response page that revealed the data, the admin panel reached without authentication, the cross-site script firing in the victim browser. Pair with the underlying request and the affected URL so the screenshot is not a standalone claim. |
| Payload | The injection string, fuzz input, or business-logic sequence that produced the result. Store sanitised so the payload is not a live exploit, but detailed enough that the fix can be validated against the same input class on retest. |
| Console or terminal output | Stack traces, command output, error messages, and shell artefacts that accompany the finding. Useful for command injection, path traversal, deserialisation, and SSRF findings where the impact is visible in tool output rather than a browser. |
| Tool output | Scanner exports (Nessus, Burp Suite), SAST reports, SCA reports, and custom tooling artefacts. Treat tool output as supporting evidence behind a tester-validated finding rather than as a finding on its own, because raw tool output without verification fails on retest. |
| Affected asset and identifier | The hostname, URL, parameter, repository path, file, line number, or API endpoint the finding applies to. Anchors the evidence to a real target so the retest knows exactly where to verify and the report can map findings to in-scope assets cleanly. |
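The "redact secrets before sharing" step in the request-and-response row can be sketched as a scrub over common secret-bearing headers in a raw capture. The header list here is an assumed starting point, not an exhaustive redaction policy.

```python
# Minimal sketch of a redaction pass over a raw HTTP capture before it is
# attached for client delivery. The header list is a starting point, not
# an exhaustive policy.
import re

SECRET_HEADERS = ("Authorization", "Cookie", "Set-Cookie", "X-Api-Key")

def redact_capture(raw: str) -> str:
    pattern = re.compile(
        rf"^({'|'.join(SECRET_HEADERS)}):\s*.*$",
        flags=re.IGNORECASE | re.MULTILINE,
    )
    return pattern.sub(r"\1: [REDACTED]", raw)

capture = (
    "GET /admin HTTP/1.1\r\n"
    "Host: target.example\r\n"
    "Authorization: Bearer eyJhbGciOi...\r\n"
    "Cookie: session=abc123\r\n"
)
redacted = redact_capture(capture)
# header names and the rest of the request survive; the secret values do not
```

Running the scrub at attach time, rather than at delivery time, keeps the unredacted capture out of the client-facing record entirely.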
Where pentest evidence management usually breaks
Five failure modes show up across most pentest practices that have not yet pulled evidence onto the finding record. Each one is a structural problem with a structural fix.
Evidence in shared folders, away from the finding
When a screenshot lives in a shared drive folder named by client or by date, the link between proof and claim breaks the moment the report is delivered. The retester three months later cannot find the original artefact and the auditor asking which evidence supports which finding gets a vague answer. Attaching evidence to the finding record itself prevents this on every engagement.
Reports rebuilt from chat scrollback
If the only record of how a finding was reproduced lives in a Slack thread, the report writer rebuilds the story from memory and the audit trail is whatever survives in the chat history. A finding-level evidence attachment with reviewer sign-off makes the report a snapshot of the live record rather than a reconstruction.
No paper trail for severity changes
Severity often gets debated, then changed, with no record of the rationale. The next reviewer, the client, or the auditor reads the final severity without seeing the calibration that produced it. Recording severity changes against the finding record, with the reviewer named and the timestamp captured, makes the calibration defensible.
Retest reproduced from memory
Retesting works only when the original reproduction steps and evidence are available. Reconstructing them from a delivered PDF loses the request, the payload, and the affected parameter. Pairing retest evidence to the original finding closes this gap so the verification timeline is one record from open to closed.
Sensitive evidence handled like ordinary attachments
Findings often capture data that should not propagate (PII, tokens, internal hostnames, customer records). Treating evidence like a casual file on a personal drive creates regulatory exposure. Encrypted storage, role-based access, and a deliberate redaction pass before client delivery make evidence handling defensible.
What an audit-ready evidence pack contains
The export bundles findings, evidence, status history, owner assignments, reviewer sign-off, retest outcomes, and AI-generated reports into a structured artefact that answers the questions auditors actually ask. Treat the bundle as the audit deliverable rather than a separate evidence document the team assembles by hand at year end.
- Findings list with severity, CVSS 3.1 vector, CWE mapping, and OWASP category per finding
- Evidence files attached to each finding (request and response, screenshot, payload, console output, tool exports)
- Status timeline per finding (open, in progress, awaiting retest, resolved, accepted risk) with timestamps and acting users
- Owner assignments and remediation SLA targets by severity
- Reviewer sign-off history per finding (engagement lead, peer reviewer, approval timestamp)
- Retest entries paired to the original finding with verification evidence and final outcome
- Closure timestamp and SLA performance per finding for ageing and trend analysis
- AI-generated executive summary, technical writeup, and remediation roadmap drawn from the live findings
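The bullet list above can be sketched as one JSON-serialisable record per engagement. Field names and values here are illustrative, not the actual SecPortal export schema.

```python
# Sketch of the export package shape the list above describes.
# Field names are illustrative, not the actual SecPortal export schema.
import json

export = {
    "engagement": "ENG-2024-031",
    "findings": [
        {
            "id": "F-1247",
            "severity": "high",
            "cvss_vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "cwe": "CWE-639",
            "evidence": ["request-response.txt", "impact.png", "payload.txt"],
            "status_timeline": [
                {"at": "2024-05-02T10:14:00Z", "by": "j.doe", "status": "open"},
                {"at": "2024-06-11T09:02:00Z", "by": "a.lead", "status": "resolved"},
            ],
            "owner": "client-appsec",
            "signoff": [{"by": "a.lead", "role": "engagement lead", "at": "2024-05-03T16:40:00Z"}],
            "retest": {"outcome": "verified fixed", "evidence": ["retest-request.txt"]},
        }
    ],
}

packed = json.dumps(export, indent=2)  # the structured artefact handed to the auditor
```

Because every field is already on the finding record, producing `packed` is a serialisation step, not an assembly exercise.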
One evidence record, five different views
Pentest evidence management is multi-stakeholder by default. The tester, the engagement lead, the peer reviewer, the client, and the external auditor each need a different view of the same record. SecPortal serves all five from the same finding and engagement entries so the evidence stays consistent and the views stay role-appropriate.
| Role | What they see |
|---|---|
| Tester | Opens the finding, attaches the request and response, screenshot, payload, and console output to the same record, sets severity with a CVSS 3.1 vector, and submits for review. No separate folder, no separate spreadsheet, no separate report draft. |
| Engagement lead | Reviews evidence on the finding, comments inline, requests additional artefacts where the evidence is thin, and approves the finding for client delivery. The approval history stays on the finding as part of the audit record. |
| Peer reviewer | Performs an independent check of the evidence and the severity calibration. Comments and approval timestamps are captured against the finding so the second pair of eyes is documented rather than implicit. |
| Client | Sees evidence in the branded portal alongside the finding, comments inline with clarifying questions or accepted-risk decisions, attaches fix evidence on retest, and gets a complete record they can hand to their own auditors without rebuilding the story. |
| Auditor | Receives a structured export with findings, evidence, status timeline, owner assignments, and retest outcomes. The same export answers SOC 2, ISO 27001, PCI DSS, NIS2, and CISA BOD 20-01 evidence questions without a separate audit assembly exercise. |
For pentest firms, internal teams, and disclosure programs
The shape of the evidence problem changes with the type of engagement, but the underlying workflow is the same. The platform serves all three of the common shapes from the same workspace.
Pentest firms
Multi-tester practice delivering several engagements per quarter. Standardise evidence capture across testers so the engagement lead does not have to ask the same missing artefact question on every review and the report ships from a complete record.
Internal security teams
In-house teams running their own assessments and consuming external pentests. Keep evidence on the finding so SOC 2, ISO 27001, and PCI DSS auditors get a clean export per engagement and engineering tickets stop drifting from the source-of-truth finding record.
Disclosure programs
Coordinated vulnerability disclosure programs need a defensible record of how each researcher submission was triaged, reproduced, fixed, and disclosed. Evidence management on the engagement record gives the program manager the artefact set they need without rebuilding the story from email.
How evidence management connects to the rest of the platform
Evidence management is not a separate module bolted onto the platform. It sits on top of findings management for the finding record, inside engagement management for scope and team assignment, ships through the branded client portal for the client view, captures reproduction credentials through authenticated scanning, and feeds AI reports with the live record so the report is a snapshot of the evidence rather than a reconstruction. After delivery, the engagement flows into the remediation tracking workflow and the pentest retesting use case so the same evidence supports closure verification on the same record.
Compliance evidence by default
The same evidence pack maps cleanly to ISO 27001, SOC 2, PCI DSS, and NIS2 evidence requirements without re-keying. Auditors get the same export structure each cycle, which usually shortens audit prep from weeks to hours.
Methodology and pricing context
Anchor evidence handling against the Penetration Testing Execution Standard and the NIST SP 800-115 technical guide for methodology, and the research on severity calibration and ageing pentest findings for the calibration and ageing context the evidence record supports.
Pair evidence management with the deliverable workflow
Evidence is the substrate, but the engagement is delivered as a report and a remediation program. Pair this workflow with the long-form guides that anchor the deliverable side: how to write a pentest report, the penetration testing report template, the executive summary guide, and the playbook for how to retest vulnerabilities. Each one assumes the evidence record described on this page already exists; this workflow is what makes those guides operational rather than aspirational.
Operational pairing checklist
Use the evidence record alongside the pentest project management workflow for delivery, pentest report delivery for handover, pentest quality assurance for the structured QA pass that gates each deliverable, and pentest client onboarding for the intake side. The same finding and engagement records carry across all five workflows, so evidence captured during testing flows directly into QA, delivery, retest, and closure without re-keying.
Evidence management is one of those workflows that looks invisible from the outside and turns out to be the single biggest source of leverage when an auditor, a client, or a retest asks a hard question. Get it right and every finding stands on its own evidence record, every retest pairs cleanly to the original, and every audit cycle produces an export rather than a scramble. Get it wrong and the testing was rigorous but the audit trail cannot prove it. The goal of this workflow is to make the defensible answer the path of least resistance for everyone touching the engagement.
For the storage shape underneath the workflow (engagement-scoped uploads, signed URL downloads, plan-aware capacity, and the upload and delete events that flow into the activity log), see the document management feature page. Evidence files inherit the same RBAC and tenancy model the rest of the engagement record runs on.
Frequently asked questions about pentest evidence management
What is pentest evidence management?
Pentest evidence management is the discipline of capturing, structuring, retaining, and exporting the proof behind every finding so the engagement record is defensible long after the report is delivered. It covers what evidence to capture (request and response, screenshots, payloads, tool output, affected asset), where it lives (attached to the finding rather than to a shared folder), how it is reviewed (engagement lead and peer reviewer sign-off), how it is retained (encrypted, role-controlled, retention-aware), and how it is exported (audit-ready package per engagement). SecPortal models all of this on the finding and engagement records so evidence handling is a side effect of the workflow rather than a separate clean-up task.
How is evidence management different from a shared folder of screenshots?
A shared folder cannot answer the audit question "which evidence supports finding F-1247?". The evidence and the finding live in separate systems, and the link between them is implicit at best. Evidence management on a finding record makes the link explicit: every artefact is attached to the finding it backs, the severity and CVSS vector and CWE mapping carry through, and the audit trail captures who uploaded the evidence, when, and what changed afterwards. Shared folders work until the first auditor question about a specific finding and then they break.
What evidence should be captured per finding?
At minimum: the HTTP request and response (or equivalent traffic for non-HTTP findings), at least one screenshot showing impact, the payload or input that produced the result, any relevant console or terminal output, supporting tool output where applicable (Nessus, Burp Suite, SAST, SCA), and the affected asset identifier (hostname, URL, parameter, repository path, file and line, API endpoint). Severity, CVSS 3.1 vector, and CWE mapping accompany the evidence so the proof and the claim are aligned.
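The minimum set in the answer above lends itself to a completeness check a reviewer might run before sign-off. The artefact type names below are illustrative, not a SecPortal API.

```python
# Sketch of a completeness check over the minimum artefact set listed above,
# flagging what a reviewer would send a finding back for. Type names are
# illustrative, not a SecPortal API.
MINIMUM = {"request_response", "screenshot", "payload", "affected_asset"}

def missing_evidence(attached: set) -> set:
    """Return the minimum artefact types not yet attached to the finding."""
    return MINIMUM - attached

finding_evidence = {"request_response", "screenshot"}
gaps = missing_evidence(finding_evidence)
# the reviewer asks for the missing artefact types before approving
```

Encoding the minimum as data rather than tribal knowledge is what stops the same missing-artefact question recurring across half the findings.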
How is sensitive evidence handled?
Sensitive evidence (PII, customer records, internal hostnames, tokens, secrets) is stored encrypted at rest and access is role-controlled inside the workspace. Authentication credentials needed to reproduce findings are stored encrypted at rest with AES-256-GCM. A deliberate redaction pass before client delivery removes secrets and unrelated personal data from the artefacts the client portal exposes. Treating evidence as ordinary attachments creates regulatory exposure that the workflow should prevent rather than rely on individual testers to remember.
How does evidence support the retest?
A retest only works if the original reproduction steps and evidence are available. SecPortal pairs the retest to the original finding so the request and response, payload, affected asset, and screenshots carry forward. The tester running the retest reproduces the original attack path, attaches the verification evidence to the same record, and marks the finding verified fixed, partially fixed, not fixed, or regressed. The audit record then shows scope, original evidence, fix description, retest evidence, and final outcome on a single timeline rather than across two disconnected reports.
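The pairing described above can be sketched as attaching a retest outcome to the original finding record, using the four outcomes named in the answer. The structure is illustrative, not SecPortal's schema.

```python
# Sketch of pairing a retest to its original finding so closure is one
# timeline. Outcome names come from the answer above; the record structure
# is illustrative, not SecPortal's schema.
VALID_OUTCOMES = {"verified fixed", "partially fixed", "not fixed", "regressed"}

def pair_retest(finding: dict, outcome: str, evidence: list) -> dict:
    if outcome not in VALID_OUTCOMES:
        raise ValueError(f"unknown retest outcome: {outcome!r}")
    finding["retest"] = {"outcome": outcome, "evidence": evidence}
    # the original evidence stays on the same record, so the closure
    # timeline reads open -> fix -> retest -> outcome, not two reports
    return finding

finding = {"id": "F-1247", "evidence": ["request-response.txt", "impact.png"]}
closed = pair_retest(finding, "verified fixed", ["retest-request.txt"])
```

Rejecting unknown outcomes keeps the closure states enumerable, which is what makes ageing and SLA reporting over them reliable.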
How does the export work for SOC 2, ISO 27001, PCI DSS, and NIS2 audits?
Each engagement exports a structured package that holds the findings list with severity and CVSS detail, the evidence attached to each finding, the status timeline with acting users and timestamps, the owner assignments and SLA targets, the reviewer sign-off history, the retest pairs and outcomes, and the closure record. That bundle is what SOC 2, ISO 27001, PCI DSS, NIS2, and CISA BOD 20-01 reviewers ask for under their respective control families, so the export is the audit artefact rather than a separate document the team has to assemble.
How does evidence management fit with bug bounty and vulnerability disclosure programs?
Bug bounty and coordinated vulnerability disclosure (CVD) reports arrive as researcher submissions that need triage, evidence capture, and a defensible disclosure history. SecPortal converts each submission into a structured engagement, attaches researcher-supplied evidence to the finding, captures triage and reproduction artefacts, and pairs remediation to the original report. The evidence record then supports the public disclosure conversation and the auditor question about how each report was handled, both of which are otherwise hard to answer from an inbox.
How it works in SecPortal
A streamlined workflow from start to finish.
Capture evidence inside the finding
When a tester opens a finding, the evidence (HTTP request and response, screenshot, payload, console output, observed behaviour) is attached directly to that finding rather than dropped into a shared folder. The evidence carries the same severity, CVSS 3.1 vector, and CWE mapping as the finding it supports, so the proof and the claim never drift apart.
Stamp every action with who and when
Every change to a finding is timestamped and tied to the user who made it. Status transitions, evidence uploads, severity changes, and review comments are recorded automatically rather than relying on a tester to write a manual log entry. The audit trail is a side effect of working the engagement, not an extra task at the end.
Review evidence before it reaches the client
Engagement leads and peer reviewers see the same evidence the tester captured, comment inline on the finding, and approve before the report is published. Reviewers can require additional evidence (a second screenshot, a curl reproduction, or a sanitised payload) before sign-off. The review history stays attached to the finding as part of the audit record.
Pair retest evidence to the original finding
When the retest runs, the verification evidence (proof of fix, residual risk notes, regression observations) attaches to the same record as the original finding. The closure timeline shows the original evidence, the fix description, the retest evidence, and the final outcome on one continuous record so nothing has to be reconstructed from memory.
Export an audit-ready evidence pack
Export the engagement to a structured artefact that includes findings, evidence files, severity and CVSS detail, status timeline, owner assignments, and retest outcomes. The same export answers a SOC 2, ISO 27001, PCI DSS, or NIS2 auditor question without rebuilding the story from PDFs and Slack threads.
Treat evidence as the record, not an afterthought
One audit-ready timeline per finding. Evidence, retest, and closure on the same engagement. Start free.
No credit card required. Free plan available forever.