Scanner result triage
From raw scan output to validated findings
Turn raw Nessus, Burp Suite, and SAST output into a clean, deduplicated, severity-calibrated finding list without rebuilding the work each engagement. Triage on the engagement record, validate before the report writes itself, and keep the audit trail intact from import through closure.
No credit card required. Free plan available forever.
Scanner result triage is the work that decides whether the report holds up
Pentest teams that import scanner output and ship it to a client without a triage step produce reports that fail at retest. False positives reach the executive summary. The same TLS issue shows up three times under three different titles. Severity ratings are copied from the scanner default and bear no relation to the environment. The work was done; the triage discipline was missing, and the deliverable carries the consequences.
SecPortal models triage on the engagement record so the work that turns raw scanner output into a clean finding list is captured, reviewed, and exported as part of the audit trail. Every imported finding starts as unvalidated, every reproduction attempt records evidence on the same record, every duplicate points at a canonical finding rather than copying content, and every severity calibration captures the original vector next to the calibrated one. The deliverable is built from confirmed findings only, and the next engagement starts with the prior triage work intact rather than from a blank import.
Six triage states every finding moves through
Triage is a state machine, not a single review pass. Findings move through six explicit states from import to closure, and each state carries its own evidence, audit, and sign-off requirements. Without explicit states, a finding either rushes to the report or gets stuck in a triage queue with no resolution. The table below lists the six states; a minimal sketch of the transition mechanics follows it.
| State | What it means |
|---|---|
| Imported | The finding has just landed from a scanner export. The original tool, file name, scan ID, raw severity, and CVE or CWE reference are preserved verbatim. Nothing is reportable from this state; the import only acts as a queue for the triager. |
| In review | A triager has picked up the finding and started reproduction. Notes, request and response captures, and screenshots accumulate on the finding record. The finding is visible to the engagement lead but still excluded from any client deliverable. |
| Confirmed | Reproduction succeeded. Severity is calibrated for the environment, evidence is attached, and the finding is promoted into the deliverable. The audit trail records who confirmed it, when, and against which scanner output. |
| False positive | Reproduction failed or the scanner misclassified the response. The finding stays on the engagement record with the reason, the reproduction attempt, and the supporting evidence so the same false positive does not get re-validated on the next scan. |
| Duplicate of | A second tool surfaced the same underlying issue. The finding is merged into the canonical entry rather than deleted, with the source link preserved so the report can show coverage from multiple scanners without inflating the count. |
| Accepted risk | The client has decided not to remediate. The decision, the date, the approver, and any compensating controls live on the finding so the next assessment does not surface it as a new issue. |
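The transition mechanics are small enough to sketch. The Python below is a minimal illustration, not SecPortal's implementation: an explicit transition table plus an audit entry per move, with hypothetical field names (`state`, `audit_trail`) standing in for whatever the record store actually uses.

```python
from datetime import datetime, timezone
from enum import Enum


class TriageState(Enum):
    IMPORTED = "imported"
    IN_REVIEW = "in_review"
    CONFIRMED = "confirmed"
    FALSE_POSITIVE = "false_positive"
    DUPLICATE_OF = "duplicate_of"
    ACCEPTED_RISK = "accepted_risk"


# Legal moves only: a finding cannot jump from import straight into the
# deliverable, and terminal states have no outgoing transitions.
ALLOWED = {
    TriageState.IMPORTED: {TriageState.IN_REVIEW},
    TriageState.IN_REVIEW: {
        TriageState.CONFIRMED,
        TriageState.FALSE_POSITIVE,
        TriageState.DUPLICATE_OF,
        TriageState.ACCEPTED_RISK,
    },
    TriageState.CONFIRMED: {TriageState.ACCEPTED_RISK},
}


def transition(finding: dict, new_state: TriageState, actor: str, reason: str) -> None:
    """Move a finding to a new state and append an audit entry."""
    current = finding["state"]
    if new_state not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {new_state.value}")
    finding["state"] = new_state
    finding.setdefault("audit_trail", []).append({
        "from": current.value,
        "to": new_state.value,
        "actor": actor,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })
```

The useful property is the `ValueError`: anything that tries to promote an imported finding directly to the deliverable fails loudly instead of silently shipping unvalidated work.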
Where scanner-driven reports usually break
Five failure modes recur across pentest teams that have not yet pulled triage onto the engagement record. Each one is a structural problem with a structural fix, and each one shows up at exactly the wrong moment: at retest, in front of the client, or during the next audit cycle.
Scanner output goes straight to the report
The team imports a .nessus file and the import becomes the executive summary. Clients receive a list of findings the team has not reproduced, false positives the scanner flagged on assets that no longer exist, and severity ratings that do not reflect the environment. The first retest fails on findings that were never real.
Three tools, three duplicate counts
Nessus reports a missing TLS cipher, Burp Suite reports a weak TLS configuration, and a vendor SAST tool reports an outdated crypto library. They are the same underlying issue. Without dedup, the report shows three findings and the client thinks the application has three problems instead of one.
Unvalidated false positives age into the next engagement
A finding imported as Critical sits in the queue until the report deadline forces it out. It gets included as Critical, the client schedules an engineering sprint to fix it, and the engineer discovers it is a false positive a week later. The relationship suffers more than the security posture would have.
Severity gets copied from the scanner default
A scanner ranks every missing security header as High by default. The internal admin panel behind a VPN, the public marketing site, and the customer-facing application all get the same rating. The report cannot be prioritised because every finding looks identical and the actual risk picture is invisible.
Evidence lives in the scratch folder, not the finding
The triager validates a SQL injection finding, takes a screenshot, copies the request to a notepad, and moves on. Three weeks later the report writer cannot find the screenshot, cannot reproduce the request, and writes a paragraph that does not stand up to client scrutiny. The validation work is lost because the evidence never landed on the record.
Four signals to deduplicate findings across tools
Dedup is the first triage step that pays for itself. Multiple scanners frequently report the same underlying vulnerability in different language, and reporting them as separate findings inflates the count, confuses prioritisation, and embarrasses the team at the first client question. Use these four signals as a layered dedup key; the sketch after the four signals shows how they combine.
Affected asset and parameter
The same hostname, URL, parameter, and method across two scanner outputs is the strongest dedup signal. Two findings on the same parameter usually describe the same vulnerability even if the scanners label them differently.
CWE or CVE identifier match
When two findings share a CWE or CVE reference, they are almost always the same root issue. Use the identifier as a primary dedup key, then confirm with the affected asset rather than the scanner-supplied title.
Request and response shape
For DAST and authenticated scan output, the literal request line and a hash of the response body distinguish a true duplicate from two different findings on the same parameter. If both tools reproduce the same observation, they are reporting the same issue.
Source code location
For SAST and SCA output, the file path and line number, or the package name and version, anchor the finding. Two SAST tools flagging the same line of the same function are reporting one finding, not two.
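A minimal sketch of the layered key in Python, assuming the scanner output has already been normalised into flat records; every field name here (`host`, `parameter`, `request_line`, and so on) is an illustrative placeholder rather than any scanner's native schema.

```python
import hashlib


def dedup_keys(finding: dict) -> list[tuple]:
    """Build layered dedup keys for a normalised finding, strongest first.

    Each key is only emitted when its inputs are present, so sparse
    scanner output still yields whatever signals it can support.
    """
    keys = []
    # 1. Affected asset and parameter: the strongest signal.
    if finding.get("host") and finding.get("parameter"):
        keys.append(("asset", finding["host"], finding.get("url"),
                     finding["parameter"], finding.get("method")))
    # 2. CWE/CVE identifier, anchored to the asset rather than the title.
    for ident in finding.get("identifiers", []):   # e.g. "CWE-319", "CVE-2024-1234"
        keys.append(("ident", ident, finding.get("host")))
    # 3. Request line plus a hash of the response body (DAST output).
    if finding.get("request_line") and finding.get("response_body"):
        body = hashlib.sha256(finding["response_body"].encode()).hexdigest()
        keys.append(("traffic", finding["request_line"], body))
    # 4. Source location (SAST) or package pin (SCA).
    if finding.get("file") and finding.get("line") is not None:
        keys.append(("source", finding["file"], finding["line"]))
    if finding.get("package") and finding.get("version"):
        keys.append(("package", finding["package"], finding["version"]))
    return keys


def merge_duplicates(findings: list[dict]) -> None:
    """Point later findings at the first (canonical) record sharing any key."""
    seen: dict[tuple, dict] = {}
    for f in findings:
        for key in dedup_keys(f):
            canonical = seen.setdefault(key, f)
            if canonical is not f:
                f["duplicate_of"] = canonical["id"]   # merge, keep the source link
                break
```

Note that `merge_duplicates` links rather than deletes: the duplicate stays on the record with a pointer to the canonical finding, which is what lets the report show multi-tool coverage without inflating the count.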
For a deeper treatment of dedup heuristics across scanner outputs, see the security findings deduplication guide.
Five factors that recalibrate severity from the scanner default
Scanners apply a generic CVSS vector to a generic asset class. Real environments are not generic. Severity calibration is the explicit record of what changed between the scanner default and the rating that ships to the client, and why. These five factors cover most of the calibration work in a typical engagement; a worked sketch of how a calibration lands on the record follows them.
Exposure
A finding on an internet-facing asset is not the same as the same finding on an admin panel behind a VPN and a hardware token. Adjust Attack Vector and Privileges Required to reflect what an unauthenticated attacker can actually reach.
Tenancy and blast radius
In multi-tenant SaaS, an authorisation issue that crosses the tenant boundary is far worse than the scanner default suggests. Adjust Scope and Confidentiality Impact upward when the underlying flaw allows cross-tenant exposure.
Data sensitivity
A reflected XSS on a marketing page and a reflected XSS on a session-bearing application page have different real-world impact. Calibrate Confidentiality and Integrity Impact for the data and actions the page actually exposes.
Exploit availability
A 2014 CVE on a stable dependency without a public exploit is a different finding from a 2024 CVE with a Metasploit module shipping the same week. Use the temporal metrics (Exploit Code Maturity, Remediation Level) to reflect what attackers can do today, not what the scanner database recorded at publication.
Compensating controls
A WAF, a virtual patch, network segmentation, or a non-default configuration can reduce the realised severity. Capture the compensating control on the finding so the calibrated rating is auditable rather than implicit.
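A sketch of how a calibration might land on the record, assuming the scanner supplies a CVSS 3.1 vector string. Scoring the adjusted vector is deliberately left to a CVSS library; the point is that the original vector, the calibrated vector, and the one-line reason travel together.

```python
def calibrate(vector: str, overrides: dict[str, str], reason: str) -> dict:
    """Apply metric overrides to a CVSS 3.1 vector string.

    `overrides` maps metric names to new values, e.g. {"AV": "A", "PR": "H"}
    for a finding only reachable from an internal network by a privileged
    user. Both vectors and the reason are returned for the audit trail.
    """
    prefix, *metrics = vector.split("/")              # "CVSS:3.1", "AV:N", ...
    parsed = dict(m.split(":", 1) for m in metrics)
    parsed.update(overrides)
    calibrated = "/".join([prefix] + [f"{k}:{v}" for k, v in parsed.items()])
    return {
        "original_vector": vector,
        "calibrated_vector": calibrated,
        "reason": reason,
    }


# Example: the scanner default says network-exploitable with no privileges,
# but the asset is an admin panel behind a VPN and a hardware token.
record = calibrate(
    "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    {"AV": "A", "PR": "H"},
    "Admin panel reachable only over VPN with hardware-token MFA",
)
```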
For the calibration discipline behind these factors, see the research on severity calibration for pentest findings, and pair it with the vulnerability prioritisation framework. Once a finding has cleared triage and a calibrated severity is on the record, it enters the vulnerability prioritisation workflow that combines CVSS, EPSS, KEV exploitation status, asset criticality, exposure, and compensating controls into the queue order owners actually work.
Reviewer checklist for a triaged finding list
Before any scanner-driven finding ships to a client, the engagement lead runs through a short checklist on every finding. The checklist is short by design: each line takes seconds to verify, and missing any one of them is the source of the failure modes above. A programmatic version of the checks appears after the list.
- Every finding has a source: tool name, scan ID, file name, raw severity, and original timestamp.
- No finding moves to Confirmed without a reproduction attempt and at least one piece of evidence (request and response, screenshot, or console output) attached.
- Severity calibration is recorded with the original CVSS vector, the calibrated CVSS vector, and a one-line reason for any change.
- False positives are kept on the record with the reproduction attempt, not deleted, so the next scan does not resurface them as new.
- Duplicates point at the canonical finding with a Duplicate of link rather than copying the same content into a second record.
- Accepted risks carry the approver, the approval date, the rationale, and any compensating controls before they are closed.
- Reviewer sign-off is captured on the finding record, not in a Slack thread, and the sign-off list is the audit trail of record.
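The checklist translates almost directly into code. Here is a sketch of the checks as a single validation pass, with illustrative field names; in practice this would run as a gate before anything exports to the deliverable.

```python
def checklist_failures(finding: dict) -> list[str]:
    """Return the reviewer-checklist items a finding record fails."""
    failures = []
    source = finding.get("source", {})
    if not all(source.get(k) for k in ("tool", "scan_id", "file", "raw_severity", "timestamp")):
        failures.append("incomplete source provenance")
    if finding.get("state") == "confirmed":
        if not finding.get("reproduction_notes") or not finding.get("evidence"):
            failures.append("confirmed without a reproduction attempt and evidence")
        cal = finding.get("calibration", {})
        if not all(cal.get(k) for k in ("original_vector", "calibrated_vector", "reason")):
            failures.append("severity change missing a vector pair or a reason")
    if finding.get("state") == "duplicate_of" and not finding.get("duplicate_of"):
        failures.append("duplicate without a canonical link")
    if finding.get("state") == "accepted_risk":
        acceptance = finding.get("acceptance", {})
        if not all(acceptance.get(k) for k in ("approver", "date", "rationale")):
            failures.append("accepted risk missing approver, date, or rationale")
    if not finding.get("sign_offs"):
        failures.append("no reviewer sign-off on the record")
    return failures
```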
How triage looks in SecPortal
The platform supports the triage workflow at three points: import, validation, and delivery. Each is anchored to features that already power the wider engagement record.
Import
Pull Nessus (.nessus), Burp Suite (.xml), or any CSV onto the engagement record. Imports preserve the source tool, scan ID, and original severity verbatim. SAST and SCA output from the integrated code scanning module lands in the same finding queue. For backlog migrations across many engagements, bulk finding import handles column mapping and dedup at scale.
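As a sketch of what preserving the source verbatim looks like, here is a minimal .nessus (v2 schema) import in Python; the output field names are illustrative, not SecPortal's schema.

```python
import xml.etree.ElementTree as ET


def import_nessus(path: str, scan_id: str) -> list[dict]:
    """Read a .nessus v2 export into imported-state finding records."""
    findings = []
    root = ET.parse(path).getroot()
    for host in root.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            findings.append({
                "state": "imported",                    # nothing reportable yet
                "host": host.get("name"),
                "title": item.get("pluginName"),
                "raw_severity": item.get("severity"),   # scanner default, verbatim
                "source": {
                    "tool": "nessus",
                    "file": path,
                    "scan_id": scan_id,
                    "plugin_id": item.get("pluginID"),
                },
            })
    return findings
```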
Validation
Reproduction notes, request and response captures, screenshots, payloads, and console output attach directly to the finding through findings management. The authenticated scanner replays validations against the same target so verification is reproducible.
Delivery
Only Confirmed findings flow into AI reports and the branded client portal. False positives, duplicates, and accepted risks remain on the record for the audit trail without polluting the deliverable.
Where this fits across the engagement lifecycle
Scanner result triage sits between the scanning step and the report. It is the discipline that connects them. The wider engagement record runs through related workflows on the same platform.
Upstream and downstream
Triage takes the output of vulnerability assessment and DevSecOps scanning and feeds the pentest report delivery workflow with a clean, defensible finding list.
Closure and retest
Triaged findings inherit the named asset owner from the upstream asset ownership mapping workflow. From there they flow into scanner-to-ticket handoff governance, so the routing layer to engineering tickets keeps the security record canonical, and then into remediation tracking and retesting. Evidence captured during triage powers pentest evidence management so the audit trail survives delivery.
Pair the workflow with the long-form guides
Triage is operational; the surrounding guides explain the deeper trade-offs. Pair this workflow with the writeups on authenticated vs unauthenticated scanning for upstream scan posture, the vulnerability scanner false positives guide for the suppression discipline that anchors triage decisions, the SAST vs SCA explainer for code-side imports, the automating findings management guide for workflow design, and the CVSS scoring explainer for the calibration vocabulary. Each one assumes the triage record described on this page exists; this workflow is what makes those guides operational rather than abstract.
Buyer and operator pairing
The triage workflow lives on the same engagement record as pentest project management, and is the workflow pentest firms, internal security teams, SOC and security operations analysts, and MSSPs run when a scanner-heavy engagement has to land in front of an executive without looking like a tool dump.
What good triage feels like
Clean signals
The deliverable shows confirmed findings only, severity reflects the environment, duplicates are merged with source links preserved, and every finding carries reproducible evidence. The client reads the report once and acts on it; the retester picks up the engagement record and starts work without rebuilding context.
Quiet failure
Triage failures rarely surface as a single dramatic mistake. They show up as a slow erosion of trust: a false positive in the executive summary, a duplicate that doubles the count, a severity rating the client cannot defend internally. The cumulative effect is what damages a relationship; the workflow is what prevents it.
Scanner result triage is the workflow that decides whether a scanner-heavy engagement produces a defensible deliverable or a noisy one. Get it right and every report ships from validated findings, every retest verifies real fixes, and every audit cycle reuses the prior triage work rather than starting from a blank import. Get it wrong and the scanner output becomes a liability rather than an input. The goal of this workflow is to make the defensible answer the path of least resistance for everyone touching the engagement.

Once a finding is confirmed, the next step is the per-finding contract that ships it to engineering: the security finding evidence package for developers workflow covers the reproduction steps, request and response, fix expectations, and retest criteria the developer reads against on the same record.

When the inbound finding stream is a third-party penetration test report PDF rather than a raw scanner export, the report-level intake point is the third-party penetration test report intake workflow, which wraps triage in report-level discipline: an engagement per pentest, severity recalibration, dedup against the existing scanner catalogue, a named owner from the asset ownership map, and retest binding to the original finding.
Frequently asked questions about scanner result triage
What is scanner result triage?
Scanner result triage is the discipline of turning raw vulnerability scanner output (Nessus, Burp Suite, SAST, SCA, custom tooling) into a validated, deduplicated, severity-calibrated finding list that is safe to ship to a client. It covers import, dedup across tools, reproduction, false-positive handling, severity calibration for the environment, and reviewer sign-off. Skipping triage is the most common reason scanner-driven reports fail at retest or damage the client relationship.
How is triage different from running the scanner?
Running the scanner is one step that produces raw data. Triage is the work of validating that the raw data is real, removing duplicates between tools, calibrating severity for the environment, and capturing evidence on the finding record. A scanner without triage produces a noisy list of unverified entries. A finding list without triage cannot be reported responsibly.
Why deduplicate findings across scanners?
Different scanners use different vocabularies for the same underlying issue. Nessus might call it Weak TLS Configuration, Burp Suite might call it Outdated SSL Cipher Suites, and a SAST tool might flag the underlying crypto library directly. Reporting all three as separate findings inflates the count, confuses the client about how many real issues exist, and makes prioritisation impossible. Deduplication merges the same root issue into one finding while preserving links to each source so the report can show coverage from multiple tools without double-billing.
How should severity be calibrated?
Start with the scanner-supplied CVSS 3.1 vector as a baseline. Adjust Attack Vector and Privileges Required for real exposure (internet-facing vs internal vs VPN-protected). Adjust Scope and Confidentiality Impact for tenancy and blast radius (cross-tenant exposure should rate higher than single-user impact). Adjust temporal metrics (Exploit Code Maturity, Remediation Level) for present-day exploit availability. Capture compensating controls (WAF, virtual patch, network segmentation) on the finding so the calibrated rating is auditable.
How are false positives handled?
False positives stay on the finding record with the reproduction attempt, the screenshot or output that proved the scanner was wrong, and a one-line reason. They do not get deleted. Keeping them means the next scan against the same target does not surface them again as new findings, the audit trail shows the work that was done, and the next triager has the prior reasoning rather than starting from scratch.
What evidence should be captured during triage?
At minimum: the original scanner output excerpt, the reproduction request and response (or equivalent traffic for non-HTTP findings), at least one screenshot showing impact for confirmed findings, the affected asset identifier, and the calibrated CVSS 3.1 vector. For false positives: the reproduction attempt and the reason it failed. For duplicates: the link to the canonical finding. For accepted risks: the approver, the date, and the compensating controls.
Who should sign off on triaged findings?
Two-tier sign-off is the practical pattern. The triager confirms reproduction and calibrates severity. The engagement lead reviews the calibration, the evidence, and the reportability of each finding before it ships to the client. For high and critical findings, a peer reviewer (a second tester not involved in the original triage) provides an independent check on severity. The sign-off list is part of the audit trail, captured on the finding record rather than in chat.
How does triage carry into the retest?
Retests pair to the original finding so the validation work is reused. The retester sees the original scanner output, the reproduction evidence, the calibrated severity, and the affected asset, and verifies whether the fix landed. New scans run against the same target produce new imports that get triaged against the prior catalogue: known false positives are flagged, known fixed issues stay closed, and only genuinely new findings enter the queue. The discipline pays for itself across every cycle.
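A sketch of that catalogue match, reusing the hypothetical `dedup_keys` helper from the dedup section; field names are again illustrative.

```python
def triage_against_catalogue(new_findings: list[dict], prior: list[dict]) -> list[dict]:
    """Return only the genuinely new findings from a fresh import."""
    index: dict[tuple, dict] = {}
    for old in prior:
        for key in dedup_keys(old):        # from the dedup sketch above
            index.setdefault(key, old)

    queue = []
    for f in new_findings:
        match = next((index[k] for k in dedup_keys(f) if k in index), None)
        if match is None:
            queue.append(f)                      # genuinely new: enters triage
        elif match["state"] == "false_positive":
            f["state"] = "false_positive"        # inherit the prior call
            f["duplicate_of"] = match["id"]      # keep the link for the trail
        # Matches that were confirmed and fixed stay closed; the retest,
        # not a fresh import, is what verifies the fix.
    return queue
```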
How it works in SecPortal
A streamlined workflow from start to finish.
Import the raw scan output
Pull Nessus (.nessus), Burp Suite (.xml), or any CSV onto the engagement record. Findings land with their source file, scan ID, and original tool severity preserved so the audit trail starts at import rather than at triage.
Deduplicate across tools
The same vulnerability often shows up in Nessus, Burp, and a SAST report under three different titles. Merge duplicates onto one finding so the count reflects real risk and the report does not double-bill the same issue.
Validate before promoting
Every imported finding starts as unvalidated. The triager reproduces it, marks it as confirmed, false positive, duplicate, or accepted risk, and attaches the request and response evidence that supports the call. Unvalidated entries do not reach the report.
Calibrate severity for the environment
Adjust the auto-imported CVSS 3.1 vector for environmental modifiers: tenancy, exposure, data sensitivity, and exploit availability. The reviewer sees the scanner default and the calibrated severity side by side so changes are defensible.
Ship the deliverable from the live record
AI-generated reports pull from validated findings only, the branded portal shows the client what was confirmed, and retests pair to the original finding so the validation effort carries through closure.
Features that power this workflow
Triage scanner output without the spreadsheet shuffle
Import, deduplicate, validate, and calibrate on one engagement record. Start free.
No credit card required. Free plan available forever.