Use Case

Third-party penetration test report intake
Turn the PDF into a working backlog

Most internal security teams treat the third-party pentest report as a deliverable to file and a list of action items to forward. Run pentest report intake on the engagement record so each finding becomes a structured remediation work-item with calibrated severity, dedup against the existing scanner catalogue, named owner from the asset ownership map, SLA clock on assignment, retest evidence bound to the original finding, and a one-record audit chain. The PDF survives as audit evidence; the queue runs against findings.


Turn the pentest PDF into a working backlog the same week it arrives

Most internal security teams treat the third-party pentest report as a deliverable to file and a list of action items to forward. The PDF lands in an inbox, a few critical findings get triaged into the existing tracker, the rest age inside the document, and the next audit asks for a chain that the team rebuilds from scratch. Severity stays at whatever the tester wrote, scanner findings that overlap with pentest findings track in parallel, and the retest engagement six weeks later opens a fresh set of records that do not bind to the original ones. The result is a vulnerability management programme that runs four parallel tracks for one underlying catalogue.

Third-party pentest report intake is the operational workflow that turns the PDF into a working backlog the same week it arrives. Open one engagement per pentest, extract each finding as a structured record with calibrated severity, dedupe against the existing scanner catalogue, assign named remediation owners from the asset ownership map, start the SLA clock, and bind the eventual retest evidence to the original finding. This page describes the intake workflow that internal security teams, AppSec teams, vulnerability management teams, and product security teams run alongside their broader programme cadence. For the per-finding evidence-quality contract, pair this page with the security finding evidence package workflow. For the closure cadence after intake, pair it with the remediation tracking workflow. For the firm-led retest cycle, pair it with the retesting workflow.

Six intake stages and what each one looks like in healthy and broken form

A defensible intake is six stages on the engagement record, not a one-off forward of the PDF to engineering. The split below is the operating starting point internal security teams, AppSec teams, and vulnerability management teams can run against to turn a third-party pentest report into a queue the rest of the programme can actually work.

Stage 1: Receive and validate the report

Healthy posture: When the third-party pentest report arrives, the security team logs it on a dedicated engagement record with the report version, the testing window, the scope statement, the methodology, the tester names, the issuing firm, and the original PDF attached through document management. The report has a single home on the engagement record from the first day rather than living as a PDF in an inbox the next reader cannot find.

Default failure: The report lives in an email thread, a shared drive folder, or a personal download. The next reader (a remediation owner six weeks later, an auditor six months later, a new CISO eighteen months later) has to reconstruct what assessment produced what finding. Provenance is reconstructed from chat search rather than read from the engagement record.

Stage 2: Extract findings and normalise severity

Healthy posture: Each finding from the PDF becomes a structured record on the engagement: title, description, affected asset, reproduction steps, evidence (request and response, screenshot, artefact), the severity assigned by the tester, and the recalibrated CVSS 3.1 vector with environmental modifiers reasoned for the deployed environment. The 300+ finding template library backs the description so issue-class language stays consistent with the rest of the catalogue rather than reading as an isolated tester voice.

Default failure: The findings stay inside the PDF. The remediation queue reads from a tracker that does not match the report and severity stays at whatever the tester wrote. A critical from one firm reads the same as a critical from another, and the queue order reflects tester opinion rather than calibrated organisational impact.

Stage 3: Deduplicate against the existing catalogue

Healthy posture: Imported findings run against the existing engagement and workspace finding history. A pentest finding that the internal scanner already raised is paired to the existing record rather than created twice, with the tester evidence appended to the open finding. Duplicates are flagged before they land and the dedup decision is recorded on the activity log so the audit lookback can read why two signals collapsed onto one record.

Default failure: Pentest findings are imported as fresh records without checking the existing catalogue. The scanner already had the same SQL injection open against the same endpoint, and the team now has two open findings for one underlying issue. Severity drift, parallel remediation work, and conflicting closure timelines all follow from the duplicate.

Stage 4: Route to remediation owners

Healthy posture: Each finding is assigned to a named owner using asset ownership mapping. For findings inside an internal application, the named developer or team owner inherits from the asset record. For findings against a vendor system, the vendor contact is assigned and the cross-organisation hand-off uses the branded client portal so the vendor reads the finding on the same record the security team operates against. Routing is a state event on the finding rather than a Slack message that the audit lookback cannot reconstruct.

Default failure: Findings are dropped into a generic queue, the owner per finding is implicit, and remediation depends on a channel ping or a hallway conversation. Six weeks later, half the queue is unowned. Aging findings are invisible because the SLA clock cannot start without an owner.

Stage 5: Track remediation and schedule retest

Healthy posture: Each finding moves through the standard remediation states (open, in progress, awaiting retest, verified closed, accepted with exception). The retest pairs to the original finding rather than creating a new record, the retest evidence is the same reproduction steps the tester documented, and the closure trail reads as one record from open to verified closed. The retest engagement booked with the third-party firm reads against the same finding records so the firm verifies the same items the internal team tracked.

Default failure: Remediation status lives in a separate ticket system, the retest creates parallel records that do not bind to the original findings, and the closure history is split across the report PDF, the ticket, the retest PDF, and the security team spreadsheet. The audit lookback walks four artefacts to reconstruct one closure decision.

Stage 6: Preserve the report as audit evidence

Healthy posture: The original PDF, the tester attestation letter (if issued), the retest PDF, the remediation evidence per finding, and the activity log audit trail all live on the engagement record. CSV export of findings, control-mapped status, and the activity log is available when the auditor wants the trail in their own format. The pentest evidence chain is reproducible without rebuilding the artefact set from scratch every cycle.

Default failure: The PDF rotates out of the inbox, the attestation letter sits in the legal team folder, the retest PDF is in a different drive, and the remediation evidence per finding lives in a ticket export. The audit asks for one chain and the team assembles it from four places under deadline.

Six failure modes that quietly degrade pentest report intake

Pentest intake rarely fails at a single moment. It degrades in small accommodations: the PDF stays as a PDF, severity stays at the tester rating, dedup gets skipped, owners get assigned later, retest records do not bind to the originals, and the audit chain rebuilds itself every cycle. Each accommodation is reasonable in isolation; the cumulative effect is a queue where the pentest investment never reaches the closure record.

The PDF never becomes structured findings

The pentest report stays as a PDF in the document folder. The remediation queue tracks generic action items rather than calibrated findings. Severity, asset binding, reproduction steps, and closure criteria live inside the PDF where the queue cannot read them. The fix is opening one structured finding per pentest finding on the engagement record, with the calibrated CVSS vector, the affected asset, the reproduction steps, and the original tester evidence attached, so the queue reads against findings rather than against pages.

Severity is whatever the tester wrote, never recalibrated

Severity is inherited from the report without environmental modifiers (tenancy isolation, exposure, data sensitivity, compensating controls already in place, asset criticality tier). The queue ranks by tester opinion rather than by calibrated organisational impact. The fix is recalibrating CVSS 3.1 vectors with environmental modifiers at intake, recording the rationale on the finding record, so the queue ranks by defensible severity and the audit lookback can read why the calibrated value differs from the tester default.
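
Recorded on the finding, the recalibration can look like the following sketch. The field names and scores are illustrative assumptions, not a computed CVSS environmental score or any platform's actual schema; the point is that the tester base stays on the record next to the calibrated value and its rationale.

```python
from dataclasses import dataclass

@dataclass
class SeverityCalibration:
    """Tester base stays on the record; environmental modifiers and a
    rationale explain why the calibrated value differs."""
    tester_vector: str              # base CVSS 3.1 vector as the tester wrote it
    tester_score: float             # tester's base score
    environmental_modifiers: dict   # e.g. modified attack vector, confidentiality requirement
    calibrated_score: float         # illustrative value; a real platform computes this
    rationale: str                  # the audit lookback reads this

calibration = SeverityCalibration(
    tester_vector="CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    tester_score=9.8,
    environmental_modifiers={"MAV": "A", "CR": "L"},
    calibrated_score=7.1,
    rationale="Endpoint reachable only from the internal network; data tier is non-sensitive.",
)
```

The recalibration is defensible precisely because both numbers survive: the queue ranks by `calibrated_score` while the lookback can still read the tester's original vector.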

No dedup against existing scanner or prior pentest findings

Findings are created without checking the existing catalogue. The same exposure shows up as a scanner finding, a prior-pentest finding, and a current-pentest finding under three titles with three timestamps and three severities. The fix is running intake against the engagement and workspace finding history before findings land, pairing the tester evidence to the existing open record where applicable, and recording the dedup decision on the activity log so the audit lookback can reconstruct it.
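
The dedup pass can be sketched as a keyed match against the open catalogue. The `Finding` fields and the asset-plus-CWE-plus-endpoint key are illustrative assumptions, not a specific product's data model:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    asset: str
    cwe: str
    endpoint: str = ""
    evidence: list = field(default_factory=list)

def dedupe_key(f: Finding) -> tuple:
    # Match on asset plus issue class; the endpoint narrows web findings further.
    return (f.asset, f.cwe, f.endpoint)

def intake(candidates, catalogue, log):
    """Pair each candidate against the open catalogue; append evidence on a match,
    otherwise land it as a new record with the source flagged."""
    index = {dedupe_key(f): f for f in catalogue}
    for c in candidates:
        existing = index.get(dedupe_key(c))
        if existing:
            existing.evidence.append(f"pentest evidence for {c.title}")
            log.append(f"dedup: '{c.title}' paired to '{existing.title}' on asset+CWE+endpoint")
        else:
            catalogue.append(c)
            index[dedupe_key(c)] = c
            log.append(f"new: '{c.title}' created, source=third-party pentest")
```

The log entries stand in for the activity-log record the audit lookback reads; genuinely uncertain matches would land as new with a cross-reference rather than being force-merged.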

Findings are imported but never assigned

The intake pass produces structured records but the named owner per finding is missing. The SLA clock cannot start because there is nobody to start it against, and the queue accumulates unowned findings the first remediation review reads as a blocker rather than as work-in-flight. The fix is resolving the named owner from the asset ownership map at intake and assigning the finding on the record, so routing is part of the intake pass rather than a separate sweep two weeks later.

Retest creates parallel records that do not bind to the originals

When the firm comes back for retest, the retest report opens fresh records rather than verifying the existing ones. The original findings stay open in the queue, the retest findings are marked closed in a separate batch, and the closure history of any single underlying issue lives across two records the audit lookback has to reconcile. The fix is binding the retest evidence to the original finding so the state machine is one record from open to verified closed, with regression at retest reopening the original record rather than creating a new one.

The pentest evidence chain rebuilds itself every audit cycle

The PDF, the attestation letter, the retest PDF, the remediation evidence, and the activity log audit trail live in different systems with different access controls and different retention policies. The audit asks for the chain and the team rebuilds it under deadline. The fix is keeping the full chain on the engagement record (document management for the artefacts, findings management for the structured records, activity log with CSV export for the trail) so the chain is read from one place rather than reassembled from four.

Six fields the intake records on the engagement

A defensible intake is six concrete fields on the engagement and per-finding record, not an abstract paragraph in a vulnerability management policy. Anything missing from the list below is a known gap the audit lookback or the next remediation review surfaces as a rework cycle.

Engagement record with report metadata

A dedicated engagement is opened per pentest with the report version, testing window, scope statement, methodology, tester names, issuing firm, and the original PDF attached. The engagement is the home of the report and everything downstream (findings, retests, attestation) binds to it.
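
As a minimal sketch, the engagement record might carry that metadata as one structure everything downstream binds to. Field names and values here are illustrative, not a real schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PentestEngagement:
    """One engagement per third-party pentest; findings, retests, and
    attestation artefacts all bind to this record."""
    report_version: str
    testing_window: tuple        # (start, end)
    scope_statement: str
    methodology: str
    testers: list
    issuing_firm: str
    attachments: list = field(default_factory=list)  # PDF, attestation letter, retest report
    findings: list = field(default_factory=list)

engagement = PentestEngagement(
    report_version="1.0",
    testing_window=(date(2024, 3, 4), date(2024, 3, 15)),
    scope_statement="External web application and API",
    methodology="OWASP WSTG v4.2",
    testers=["J. Example"],
    issuing_firm="Example Security Ltd",
)
engagement.attachments.append("pentest-report-v1.0.pdf")
```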

Structured finding per pentest finding

Each item in the report becomes a structured finding on the engagement: title in concrete asset-and-issue-class language, description from the closest 300+ template match, calibrated CVSS 3.1 vector with environmental modifiers reasoned for the deployed environment, affected asset, reproduction steps, request and response or trace evidence, and the named remediation owner.

Dedup decision against existing catalogue

The intake pass runs each candidate finding against the existing engagement and workspace finding history. A pentest finding that already exists in the catalogue is paired to the open record with the tester evidence appended; a new finding is created with the source flagged as third-party pentest. The dedup decision is recorded on the activity log so the audit lookback can reconstruct it.

Named remediation owner and SLA clock

Asset ownership mapping resolves the named developer or team owner per finding at intake. The SLA clock starts on assignment and the queue order reflects time-remaining-against-SLA rather than creation order, so the closest-to-slipping item surfaces first and aging is observable as a programme metric.
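
The SLA-clock ordering can be sketched in a few lines; the per-severity windows below are illustrative policy values, not a recommendation:

```python
from datetime import datetime, timedelta

# Illustrative SLA windows per calibrated severity; real values come from policy.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def sla_deadline(assigned_at: datetime, severity: str) -> datetime:
    # The clock starts on assignment, not on report receipt.
    return assigned_at + timedelta(days=SLA_DAYS[severity])

def queue_order(findings, now: datetime):
    """Sort by time remaining against SLA so the closest-to-slipping item surfaces first."""
    return sorted(findings, key=lambda f: sla_deadline(f["assigned_at"], f["severity"]) - now)

now = datetime(2024, 4, 1)
queue = queue_order([
    {"id": "F-1", "severity": "high", "assigned_at": datetime(2024, 3, 10)},
    {"id": "F-2", "severity": "critical", "assigned_at": datetime(2024, 3, 28)},
], now)
# F-2 surfaces first: three days remain against its SLA versus eight for F-1.
```

Ordering by time remaining rather than creation order is what makes aging observable: the same sort key, bucketed, is the programme metric.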

Retest binding and closure trail

Retest evidence (re-run reproduction, scanner re-run, configuration verification, manual reproduction attempt) attaches to the original finding rather than to a parallel record. The closure trail reads as one record from open to verified closed, regression at retest reopens the same record with the new context, and the firm retest report verifies the same items the internal queue tracked.
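
The one-record discipline can be sketched as an explicit transition table. The states come from the remediation flow described above; the exact transition set is an illustrative assumption:

```python
# Legal state transitions for one finding record; a retest never forks a new record.
TRANSITIONS = {
    "open": {"in progress"},
    "in progress": {"awaiting retest", "accepted with exception"},
    "awaiting retest": {"verified closed", "open"},  # regression reopens the same record
    "verified closed": {"open"},                     # late regression also reopens
    "accepted with exception": {"open"},             # exception expiry
}

def transition(finding: dict, new_state: str, evidence: str) -> None:
    """Move a finding through the state machine, appending the evidence trail."""
    if new_state not in TRANSITIONS[finding["state"]]:
        raise ValueError(f"illegal transition {finding['state']} -> {new_state}")
    finding["history"].append((finding["state"], new_state, evidence))
    finding["state"] = new_state

finding = {"id": "F-7", "state": "open", "history": []}
transition(finding, "in progress", "owner assigned from asset ownership map")
transition(finding, "awaiting retest", "fix deployed; retest booked with the firm")
transition(finding, "verified closed", "retest re-ran the recorded reproduction steps")
```

Because regression maps to `awaiting retest -> open` on the same record, the closure history stays one trail from open to verified closed rather than splitting across parallel records.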

Audit-ready evidence chain on one record

The original PDF, the attestation letter, the retest PDF, the per-finding remediation evidence, and the activity log audit trail all live on the engagement. CSV export of findings, control-mapped status, and the activity log is available so the chain reads from one place when the auditor asks for it rather than being reassembled from four.

Pentest report intake operating checklist

At report receipt, at intake, and at retest, the security reviewer walks a short checklist against the engagement record. Each item takes minutes; missing any one of them is the source of the failure modes above and the rework cycles that follow.

  • Open a dedicated engagement record per third-party pentest with the report version, testing window, scope statement, methodology, tester names, issuing firm, and the original PDF attached through document management
  • For each pentest finding, open a structured finding on the engagement with the title in concrete asset-and-issue-class language, the closest matching template applied, and the calibrated CVSS 3.1 vector with environmental modifiers reasoned on the record
  • Capture the tester reproduction steps, the request and response (or the SAST trace for code-level findings), the supporting screenshot or video, and the environmental conditions on the finding record so the next reader can re-run the test
  • Run the intake against the existing engagement and workspace finding history to flag duplicates against scanner output, prior pentest findings, or open exception records, and record the dedup decision on the activity log
  • Assign the named remediation owner from the asset ownership map at intake (developer, team, or vendor contact) so the SLA clock can start and the queue does not accumulate unowned items
  • For findings against vendor or partner systems, route through the branded client portal so the vendor reads the finding on the same record the security team operates against, with scoped access rather than full workspace access
  • For findings that the existing scanners can reproduce, add the affected asset to the next external, authenticated, or code scan so the scanner re-run becomes part of the closure evidence chain
  • Retest against the recorded reproduction steps in the same environment, attach the retest evidence to the original finding rather than to a parallel record, and move the state to verified closed on the same record
  • When the firm issues the retest report or attestation letter, attach it to the engagement, confirm the firm-verified item set matches the internally verified item set, and resolve any divergence on the record before closing the engagement
  • Trigger AI report generation against the engagement so the leadership view, the remediation summary, and the audit lookback all read against the same finding catalogue rather than against the original PDF
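
The firm-verified versus internally-verified reconciliation in the checklist above reduces to a set comparison; the finding identifiers here are hypothetical:

```python
def reconcile(firm_verified: set, internally_verified: set) -> dict:
    """Split the two closure claims into agreement and the divergences
    that must be resolved on the record before the engagement closes."""
    return {
        "agreed": firm_verified & internally_verified,
        "firm_only": firm_verified - internally_verified,      # firm closed it; internal evidence missing
        "internal_only": internally_verified - firm_verified,  # closed internally; firm did not verify
    }

diff = reconcile({"F-1", "F-2", "F-3"}, {"F-1", "F-2", "F-4"})
# F-3 and F-4 are the divergences to resolve before the engagement closes.
```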

How third-party pentest intake looks in SecPortal

Intake runs on the same feature surfaces the rest of the security programme already uses: engagement management, document management, findings management with calibrated CVSS and 300+ templates, bulk finding import with column mapping, repository connections, authenticated and external scanning for closure evidence, the client portal for vendor routing, AI report generation, the activity log, and compliance tracking. The discipline is binding the report, the structured findings, the remediation evidence, and the audit trail to one engagement record so the next reader has one place to read against.

Engagement management as the report home

Each third-party pentest opens a dedicated engagement with the report metadata, scope, methodology, testing window, and original PDF on the record. The engagement is the durable home of the assessment so the report version, the finding set, and the retest results all bind to one parent rather than to scattered artefacts.

Document management for the PDF and attestation chain

The original report PDF, the rules of engagement, the tester attestation letter, the retest report, the remediation evidence per finding, and any supporting artefacts attach through document management on the engagement so the artefact chain lives next to the structured records rather than in a side folder the next reader cannot find.

Findings management with calibrated CVSS and 300+ templates

Each pentest finding becomes a structured record in findings management with auto-calculated CVSS 3.1 vectors, environmental modifiers reasoned on the record, and a description sourced from the 300+ finding template library so issue-class language stays consistent with the rest of the catalogue rather than reading as an isolated tester voice.

Bulk finding import for backlog onboarding

When the report ships with a CSV export, an XML attachment, or a Nessus or Burp Suite file produced during the engagement, run intake through bulk finding import with column mapping (title, description, severity, CVSS vector, asset, CWE, status). The intake pass dedupes against the existing catalogue and stages every imported row for validation rather than dropping raw rows into the live queue.
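
Column mapping over a CSV export can be sketched with the standard library. The exported column names and the mapping are illustrative assumptions; a real firm's export will differ:

```python
import csv
import io

# Illustrative map from a firm's CSV export columns to finding fields.
COLUMN_MAP = {"Issue": "title", "Risk": "severity", "Host": "asset", "CWE ID": "cwe"}

def stage_rows(csv_text: str) -> list:
    """Map exported columns onto finding fields and stage rows for validation
    rather than dropping raw rows into the live queue."""
    staged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        staged.append({field: row.get(col, "") for col, field in COLUMN_MAP.items()})
    return staged

export = "Issue,Risk,Host,CWE ID\nSQL injection,High,app-api,CWE-89\n"
staged = stage_rows(export)
```

The staged list is the validation queue: each row still passes the dedup and calibration steps before it becomes a live finding.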

Repository connections for code-level findings

For findings against application code, repository connections through GitHub, GitLab, or Bitbucket OAuth bind the finding to the file path, line range, and commit reference so the remediation owner reads the finding next to the code rather than searching the repository from a free-text description.

Authenticated and external scanning for closure evidence

For findings the existing scanners can reproduce, queue the affected asset on the next authenticated scan or external scan so the scanner re-run becomes part of the closure evidence chain alongside the firm-led retest.

Client portal for vendor and partner routing

When the remediation owner sits in a different organisation (a vendor team, a contracted developer, a partner engineering team), the branded client portal on the tenant subdomain grants scoped access to the finding so the cross-organisation conversation lives on the same record as the internal queue.

AI report generation against the live finding catalogue

AI report generation derives the leadership summary, the remediation roadmap, and the executive narrative from the live finding catalogue rather than from the original PDF, so the leadership view stays synchronised with the operational queue between cycles.

Activity log as the audit trail

Every state event (engagement opened, report attached, findings imported, dedup decision recorded, owner assigned, severity recalibrated, retest verified, finding reopened, finding closed, exception accepted) lands on the activity log with timestamp and user attribution. The CSV export is the trail the SOC 2, ISO 27001, PCI DSS, and NIST SP 800-53 review reads behind the closure claim.
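
An append-only activity log with CSV export can be sketched as follows; the event schema is illustrative, not the platform's actual trail format:

```python
import csv
from datetime import datetime, timezone

class ActivityLog:
    """Append-only state-event trail with timestamp and user attribution."""
    def __init__(self):
        self.events = []

    def record(self, event: str, user: str, detail: str = "") -> None:
        self.events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "event": event,
            "detail": detail,
        })

    def to_csv(self, path: str) -> None:
        # The export the framework review reads behind the closure claim.
        with open(path, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=["timestamp", "user", "event", "detail"])
            writer.writeheader()
            writer.writerows(self.events)

log = ActivityLog()
log.record("engagement opened", "analyst@example.com")
log.record("report attached", "analyst@example.com", "pentest-report-v1.0.pdf")
```

Append-only matters here: the trail never rewrites history, so every state transition the assessor asks about has exactly one timestamped entry.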

What auditors expect from third-party pentest evidence

Pentest intake is the record the assessor reads behind the third-party testing claim. The frameworks below all expect the programme to demonstrate that the report exists, that the findings became structured remediation records, that severity was calibrated for the deployed environment, that named owners closed the work, and that the closure trail binds to the original findings. An intake workflow that reads as one record satisfies the audit ask without the post-hoc reconstruction sprint.

ISO 27001 Annex A: A.5.7 (threat intelligence) and A.8.8 (management of technical vulnerabilities) for the structured intake of third-party pentest findings, calibrated severity, named ownership, SLA tracking, and closure trail; A.5.10 (acceptable use of information and assets) for the asset binding on each finding; A.5.36 (compliance with policies and standards) for the recorded fix expectation and acceptable evidence standard; A.5.37 (documented operating procedures) for the intake workflow itself.

SOC 2 (TSC): CC4.1 (monitoring activities) for retest cadence and validation evidence per finding; CC7.1 (system monitoring) for the source detection and intake of third-party pentest findings; CC7.2 (responding to identified system anomalies) for the structured intake and routing pass; CC7.4 (responding to security incidents) for the closure trail and the firm-verified vs internally-verified reconciliation.

NIST SP 800-53 Rev. 5: CA-8 (penetration testing) for the engagement-level evidence (scope, methodology, tester identity, results); RA-5 (vulnerability monitoring and scanning) for the calibrated severity and the dedup against existing scanner output; SI-2 (flaw remediation) for the routing, remediation tracking, and retest binding per finding; AU-2 and AU-12 for the activity log trail behind every state transition.

PCI DSS v4.0: 11.4 (penetration testing programme) for the structured intake of internal and external pentest findings, methodology coverage, and remediation tracking; 6.3.3 (remediating identified vulnerabilities) for the per-finding closure trail; 11.3.3 (vulnerability management) for the dedup against the existing scanner catalogue; 12.10 for the evidence chain that supports incident response readiness when a pentest finding is also a security event.

OWASP SAMM: Verification function (Security Testing practice) for the structured intake of third-party pentest results alongside internal testing; Verification function (Issue Management practice) for the per-finding routing, severity calibration, and closure trail; Operations function (Defect Management practice) for the retest cadence and the activity log audit trail behind closure.

Where pentest report intake sits in the wider programme

Third-party pentest intake is one lane of the broader vulnerability management programme. It composes with internal scanner output, prior pentest backlogs, manual review findings, and the closure cadence so the queue stays one record rather than splintering into a per-source collection of trackers.

Upstream and adjacent

Intake reads alongside scanner result triage for internal scanner output that overlaps with pentest findings, bulk finding import for the CSV, Nessus, or Burp Suite files the report often ships with, asset ownership mapping for the named remediation owner the routing resolves to, and vulnerability prioritisation for the calibrated queue order the intake feeds.

Downstream and reporting

The intake feeds into remediation tracking for the closure cadence, retesting for the firm-led validation against the original findings, security finding evidence packaging for the per-finding handoff to engineering, vulnerability acceptance and exception management when closure is not the right answer, security leadership reporting for the cadence leadership reads the closure trail against, and audit evidence retention and disposal for the long-tail evidence chain the report PDF anchors.

Pair the intake workflow with the buyer and operator material

The intake workflow is operational; the surrounding buyer, blog, and framework material explains the methodology, programme economics, and audit-evidence expectations the intake calibrates against. Pair this workflow with the vulnerability management programme guide for the broader programme operating model, the security findings deduplication guide for the dedup discipline pentest intake reads against, the risk-based vulnerability management buyer guide for the platform evaluation criteria, and the vulnerability remediation throughput research for the throughput inputs the intake quality calibrates against. Compliance evidence reads against the ISO 27001 Annex A controls, the SOC 2 Trust Services Criteria, the NIST SP 800-53 CA, RA, and SI control families, the PCI DSS 11.4 penetration testing programme requirement, and the OWASP SAMM Verification function.

Buyer and operator pairing

Pentest report intake is the workflow internal security teams run as the bridge between commissioned third-party testing and operational remediation. AppSec teams run it when pentest findings land against application code, vulnerability management teams run it as a parallel intake lane alongside scanner output, and product security teams run it for findings against the product surface. GRC and compliance teams read the closure trail behind the third-party testing claim, and CISOs read pentest intake quality as the leading indicator of whether the testing investment reaches the closure record.

What good third-party pentest intake feels like

The PDF has a structured backlog

Within the same week the report arrives, every finding is a structured record on the engagement with calibrated severity, named owner, reproduction evidence, and SLA clock. The PDF survives as audit evidence; the queue runs against findings. Closure and aging are observable as programme metrics rather than as items inside a document.

One queue, not four parallel ones

Pentest findings dedupe against the existing scanner output and prior pentest backlog. Where they overlap, evidence appends to the existing open record; where they are new, they land with the source flagged as third-party pentest. The team works one queue instead of reconciling four.

Retest binds to the original

Six weeks later when the firm comes back, retest evidence attaches to the original finding rather than to a parallel record. The closure history reads as one record from open to verified closed, and any regression at retest reopens the same record with the new context attached. The firm-verified item set reconciles to the internally-verified item set on the same record.

Audit evidence is one chain

The original PDF, the rules of engagement, the attestation letter, the retest report, the per-finding remediation evidence, and the activity log audit trail all live on the engagement. CSV export of findings, control-mapped status, and the activity log gives the auditor the trail in their own format. The chain reads from one place rather than from four under deadline.

A defensible third-party pentest intake is the structured engagement record on the day the PDF arrives, not the forwarded email and the open spreadsheet. Run the intake on the engagement so the report has one home, the findings become structured records with calibrated severity, dedup against the existing catalogue happens before the queue grows, named owners inherit from the asset ownership map, retest evidence binds to the originals, and the audit chain reads from one place. For the per-finding evidence-quality contract the intake reads against, pair this workflow with the security finding evidence package workflow; for the closure cadence after intake, pair it with the remediation tracking workflow; for the firm-led retest cycle, pair it with the retesting workflow; for the long-tail evidence chain, pair it with the audit evidence retention and disposal workflow.

Frequently asked questions about third-party pentest report intake

How is third-party pentest report intake different from receiving the PDF and forwarding it to engineering?

Forwarding the PDF moves the artefact; intake turns the artefact into a working backlog. Intake opens an engagement per pentest, extracts each finding into a structured record on the engagement, recalibrates severity for the deployed environment, dedupes against the existing scanner and prior-pentest catalogue, assigns the named remediation owner from the asset ownership map, starts the SLA clock, and binds the eventual retest evidence to the original finding. The PDF survives as audit evidence; the queue runs against findings.

Why recalibrate severity at intake instead of trusting the tester rating?

Tester ratings are based on what the tester observed in the test environment with the test data. The deployed environment has different tenancy isolation, different data sensitivity, different compensating controls already in place, and different exposure to the internet. CVSS 3.1 environmental modifiers are designed for exactly this recalibration: the tester base score stays on the record, the environmental modifiers are reasoned on the record, and the calibrated severity reflects organisational impact rather than test-environment impact. The recalibration is not disagreement with the tester; it is the second half of CVSS the tester is not in a position to do.

How does dedup work when a scanner already raised the same finding?

Intake runs each candidate finding against the existing engagement and workspace finding history. When a pentest finding matches an existing scanner finding (same asset, same issue class, same code path or endpoint), the tester evidence is appended to the existing open record rather than creating a parallel finding. The dedup decision is recorded on the activity log with the rationale (matched on asset and CWE, matched on code path, matched on endpoint and parameter) so the audit lookback can reconstruct why two signals collapsed onto one record. Where dedup is genuinely uncertain, the finding lands as new with a cross-reference to the candidate match.

How does the workflow handle a retest when the firm comes back six weeks later?

The retest engagement reads against the same finding catalogue the internal team has been working from. Each retest result attaches to the original finding rather than opening a new record. When the firm marks an item closed at retest and the internal evidence agrees, the state moves to verified closed on the same record. When the firm and the internal evidence disagree, the divergence is resolved on the record (re-running the reproduction together, recalibrating severity if the environment has changed, accepting an exception with named compensating controls) rather than as a parallel debate.

What happens to findings the firm rated critical that are not actually exploitable in production?

These findings still land as structured records, but the calibration step recalibrates severity using CVSS environmental modifiers (modified attack vector, modified attack complexity, modified privileges required, modified user interaction, modified scope, modified confidentiality, integrity, and availability) reasoned on the record. The recalibration rationale is on the finding so the audit lookback can read why the calibrated severity differs from the tester rating. Where the finding is genuinely not exploitable in the deployed environment, the closure path is the structured exception flow with named compensating controls and an expiry date rather than silent dismissal.

How does the workflow keep the report PDF as audit evidence?

Document management on the engagement holds the original PDF, the rules of engagement, the tester attestation letter, the retest report, and any supporting artefacts. The engagement record is the durable parent so the artefact chain stays bound to the structured findings, the remediation evidence, and the activity log audit trail. The CSV export of findings, control-mapped status, and the activity log is the trail the auditor reads behind the closure claim, with the PDF available alongside as the original assessment record.

Where does the workflow sit alongside internal scanner output and prior pentests?

The workflow runs as the third-party intake lane of the broader vulnerability management programme. Internal scanner output (external, authenticated, code) feeds the same finding catalogue through the scanner result triage workflow. Prior pentest reports are loaded the same way through bulk finding import, with dedup against the current catalogue. The result is one queue the security team works against rather than four parallel tracks (current pentest, prior pentest, scanner, manual review) that drift between assessments.

How does AI report generation use the imported findings?

AI report generation derives the leadership summary, the remediation roadmap, the executive narrative, and the compliance-ready summary from the live finding catalogue on the engagement record rather than from the original PDF. The leadership view, the operational queue, and the audit lookback all read from the same source so the leadership deck does not drift from the operational reality between cycles. When the firm issues a follow-on attestation letter, the underlying finding catalogue and the platform-generated summaries already reconcile to the firm-verified record.

How it works in SecPortal

A streamlined workflow from start to finish.

1

Open one engagement per pentest with the report on the record

Open a dedicated engagement record per third-party pentest with the report version, the testing window, the scope statement, the methodology, the tester names, the issuing firm, and the original PDF attached through document management. The engagement is the durable home of the report so everything downstream binds to it.

2

Extract each finding into a structured record

Each item in the report becomes a structured finding on the engagement: title in concrete asset-and-issue-class language, description drawn from the closest match in the 300+ finding template library, calibrated CVSS 3.1 vector with environmental modifiers reasoned for the deployed environment, affected asset, reproduction steps, request and response or trace evidence, and the original tester evidence attached. The remediation queue reads against findings rather than against pages.
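The structured record above can be pictured as a schema like the following. This is a sketch of the fields the step names, not SecPortal's actual data model; the class and field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredFinding:
    title: str                  # concrete asset-and-issue-class language
    description: str            # drawn from the closest template match
    cvss_vector: str            # calibrated CVSS 3.1 with environmental modifiers
    affected_asset: str
    reproduction_steps: list = field(default_factory=list)
    evidence: list = field(default_factory=list)  # request/response, traces, tester artefacts
    source: str = "third-party pentest"           # distinguishes the intake lane
```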

3

Deduplicate against the existing scanner and prior pentest catalogue

Run the intake against the existing engagement and workspace finding history. A pentest finding that the internal scanner already raised pairs to the existing open record with the tester evidence appended; a genuinely new finding lands with the source flagged as third-party pentest. The dedup decision is recorded on the activity log so the audit lookback can reconstruct it.

4

Assign named owners and start the SLA clock

Asset ownership mapping resolves the named developer or team owner per finding at intake. For findings against vendor or partner systems, the branded client portal grants scoped access so the cross-organisation owner reads the finding on the same record. The SLA clock starts on assignment and the queue order surfaces the closest-to-slipping item first.
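The queue ordering described here is a deadline sort: the SLA clock starts at assignment, and the item closest to slipping surfaces first. A minimal sketch; the severity-to-days table is an assumed example, not SecPortal's actual SLA policy.

```python
from datetime import datetime, timedelta

# Assumed SLA windows per severity, in days; real policies vary per programme.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def sla_deadline(assigned_at: datetime, severity: str) -> datetime:
    """The SLA clock starts on assignment, not on report delivery."""
    return assigned_at + timedelta(days=SLA_DAYS[severity])

def queue_order(findings):
    """Surface the closest-to-slipping item first."""
    return sorted(findings,
                  key=lambda f: sla_deadline(f["assigned_at"], f["severity"]))
```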

5

Track remediation and bind retest evidence to the original

Each finding moves through the standard remediation states. When the firm comes back for retest, retest evidence attaches to the original finding rather than to a parallel record. Where the scanner can reproduce the issue, queue the affected asset on the next authenticated, external, or code scan so the scanner re-run becomes part of the closure evidence chain alongside the firm-led retest.

6

Preserve the report and the chain as one audit artefact

The original PDF, the rules of engagement, the attestation letter, the retest report, the per-finding remediation evidence, and the activity log audit trail all live on the engagement. CSV export of findings, control-mapped status, and the activity log gives the auditor the trail in their own format. AI report generation derives the leadership summary from the live finding catalogue rather than from the original PDF so the leadership view, the operational queue, and the audit lookback all read against one source.

Run third-party pentest intake on one defensible record

Engagement, structured findings, calibrated severity, dedup, routing, retest binding, and the audit chain on a single workspace. Start free.

No credit card required. Free plan available forever.