Scanner Evidence Chain: From Scan Execution to Closed Finding
A vulnerability that closes is only as defensible as the evidence chain behind it. The scanner produced the detection. Triage produced calibrated severity. Engineering produced a fix. Retest confirmed the fix. Audit reads the closed finding three months later and asks one question: where is the chain that proves each step? When the chain is one record, the answer is a single query. When the chain is scattered across scanner UIs, ticket systems, chat threads, and spreadsheets, the answer is a project.
This guide covers the end-to-end scanner evidence chain that internal security teams, AppSec teams, vulnerability management teams, and GRC teams need to operate so every closed finding traces in one record back to the scan that found it. It walks through the seven evidence layers, six failure modes, the audit framework expectations the chain satisfies, and the platform behaviour that holds detection, triage, remediation, and closure on the same engagement record.
The chain is the single record from scan to closure
The scanner evidence chain is the set of references that connect every closed vulnerability back to the scan that originally detected it. The chain is not a spreadsheet of scan dates and finding identifiers. It is a record where each finding holds a verifiable pointer to the scan execution that produced it, the actor who triggered the scan, the modules that ran, the credential reference for authenticated scans, the triage decisions that calibrated severity, the remediation work that closed it, and the retest evidence that confirmed the fix.
The defining property of the chain is that any closed finding can be reproduced without the original tester being present. A new operator opening the engagement three months after closure should be able to read the record and reconstruct: which scan ran, when, with what coverage, what the scanner emitted, how triage calibrated severity, what fix engineering applied, and how retest confirmed the fix. The chain is the operational form of the audit answer.
A working chain is not the same as an audit-grade chain
A working chain lets the team make the next operational decision (retest this, dedup that, escalate this). An audit-grade chain lets the team defend the operation under external review. The two diverge on reproducibility, dual-recording of source and canonical values, and survival across personnel and tool turnover. The chain has to satisfy the second standard, not just the first.
Seven evidence layers the chain has to carry
Every closed finding in the engagement record carries seven evidence layers. The layers map to the operational stages the finding moved through, and the audit reader traces each closed finding by reading the layers in order.
1. Scan execution reference
The scan_execution record that produced the detection. For external and authenticated scans, the execution carries the workspace reference, the domain reference, the scan_type, the scan_category, the credential reference for authenticated runs, the target, the modules_total list, the modules_completed list, the actor (initiated_by), the started_at and completed_at timestamps, and the result_summary. For code scans, the code_scan_execution carries the repo reference, the branch, the commit SHA, the trigger, and the result_summary. Every finding produced by an in-platform scan binds to the scan execution that produced it as the upstream evidence anchor.
2. Module and rule reference
The detection module or rule that produced the finding. For external scans, this is the scanner module name (SSL/TLS, headers, ports, tech detection, DNS analysis, subdomain enumeration). For authenticated scans, this is the authenticated DAST module reference and the authentication mode used (cookie, bearer, basic, form). For code scans, this is the Semgrep rule identifier or the dependency analysis vulnerability reference. The rule reference is what lets a retest re-execute the same detection rather than running a different check that happens to land on the same target.
3. Asset and target binding
The asset the finding applies to. For network and web findings, this is the affected_asset (host, URL, port). For code findings, this is the file path and the repository reference. The asset binding is what resolves the finding to a named owner through the asset-to-owner mapping, and what lets the retest target the same asset rather than a similar one. Findings that arrive without a clean asset binding pause at routing-decision rather than landing in a shared backlog.
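The routing rule reads as a small sketch; the owner mapping and function names here are illustrative, not platform API:

```python
# Hypothetical asset-to-owner mapping: a finding with a clean asset binding
# resolves to a named owner; one without pauses at routing-decision instead
# of landing in a shared backlog.
ASSET_OWNERS = {
    "api.example.com": "team-payments",
    "repo:billing-service": "team-billing",
}

def route_finding(finding: dict) -> dict:
    asset = finding.get("affected_asset")
    owner = ASSET_OWNERS.get(asset)
    if owner is None:
        finding["status"] = "routing-decision"   # paused, not backlogged
    else:
        finding["owner"] = owner
        finding["status"] = "open"
    return finding

print(route_finding({"id": "f-1", "affected_asset": "api.example.com"})["owner"])
print(route_finding({"id": "f-2", "affected_asset": None})["status"])
```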
4. Source-emitted and platform-canonical values
The original scanner-emitted severity and the engagement-canonical severity, both retained on the finding. The scanner-emitted value reflects what the detection tool produced in its own context; the canonical value reflects the calibration that triage applied for the environment. The CVSS 3.1 vector stays on the finding alongside the scanner-emitted band so the calibration rationale (environmental metrics: confidentiality, integrity, availability requirement, modified base metrics) is reproducible. Imported findings additionally carry the source format (Nessus, Burp, CSV) and the import event reference.
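Dual-recording can be sketched as a calibration step that retains the scanner-emitted band before writing the canonical one (field names are assumptions drawn from the guide):

```python
# Sketch: the scanner-emitted band stays on the finding while triage sets the
# canonical band, with the CVSS 3.1 vector recorded as the rationale.
def calibrate(finding: dict, canonical: str, cvss_vector: str, actor: str) -> dict:
    finding.setdefault("source_severity", finding["severity"])  # retain original
    finding["severity"] = canonical
    finding["cvss_vector"] = cvss_vector
    finding.setdefault("activity_log", []).append(
        {"event": "severity_calibrated",
         "from": finding["source_severity"], "to": canonical, "actor": actor}
    )
    return finding

f = {"id": "f-9", "severity": "high"}  # scanner-emitted band
calibrate(f, "medium",
          "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:N/CR:L", "analyst-7")
print(f["source_severity"], f["severity"])  # both bands survive: high medium
```

Without the retained `source_severity`, the calibration event would record a transition from a value no longer on the record, and the rationale would not be reproducible.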
5. Triage transitions and decisions
Every state transition the finding passed through, recorded in the activity log with the actor and the timestamp. Draft to canonical (open). Open to in progress (assignment). In progress to resolved (engineering closure). Resolved to verified (retest closure). The activity log also carries severity recalibration events, dedup merges, suppression decisions for false positives, and exception decisions for accepted risk. The transitions are the operational record of how the finding moved, not just where it ended.
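The transition discipline can be sketched as a small state machine that refuses illegal moves and logs each legal one with actor and timestamp (illustrative, not platform behaviour):

```python
from datetime import datetime, timezone

# Allowed transitions from the guide's lifecycle; retest failure reopens.
ALLOWED = {
    "draft": {"open"},
    "open": {"in_progress"},
    "in_progress": {"resolved"},
    "resolved": {"verified", "open"},
}

def transition(finding: dict, to: str, actor: str) -> dict:
    if to not in ALLOWED.get(finding["status"], set()):
        raise ValueError(f"illegal transition {finding['status']} -> {to}")
    finding.setdefault("activity_log", []).append({
        "from": finding["status"], "to": to,
        "actor": actor, "at": datetime.now(timezone.utc).isoformat(),
    })
    finding["status"] = to
    return finding

f = {"id": "f-3", "status": "draft"}
for step in ("open", "in_progress", "resolved", "verified"):
    transition(f, step, actor="user-42")
print(len(f["activity_log"]))  # 4 transitions, each with actor and timestamp
```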
6. Remediation evidence
The artefacts engineering produced to close the finding. For code findings, this is the commit reference or the pull request that introduced the fix. For infrastructure findings, this is the configuration change or the patch reference. For control-gap findings, this is the policy or process change. Remediation evidence binds to the finding through document_management uploads attached to the engagement, and the evidence reference stays on the closed finding so closure can be reproduced without searching for the related ticket.
7. Retest and closure binding
The retest evidence that confirmed the fix held against the same target with the same module or rule. Retest is not a parallel finding; it is a new scan execution attached to the original finding through the engagement record. The verified status records the retest scan execution reference and the actor who accepted the closure. Findings that retest fails (the condition still reproduces) reopen on the engagement with the failed retest execution attached, so the reopen is itself part of the chain rather than a fresh start.
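The retest binding can be sketched as an operation on the original finding record: a clean result verifies, a reproduction reopens the same record with the failed execution attached (names are illustrative):

```python
# Sketch: retest is a new scan execution attached to the original finding,
# never a parallel record. The reopen is itself part of the chain.
def apply_retest(finding: dict, retest_execution_id: str,
                 condition_reproduced: bool, actor: str) -> dict:
    finding.setdefault("retest_executions", []).append(retest_execution_id)
    if condition_reproduced:
        finding["status"] = "open"          # reopen on the same record
        event = "retest_failed_reopened"
    else:
        finding["status"] = "verified"
        finding["verified_by"] = actor      # closure actor on the verification event
        event = "retest_passed_verified"
    finding.setdefault("activity_log", []).append(
        {"event": event, "scan_execution": retest_execution_id, "actor": actor})
    return finding

f = {"id": "f-7", "status": "resolved"}
apply_retest(f, "exec-201", condition_reproduced=True, actor="user-9")   # reopens
apply_retest(f, "exec-305", condition_reproduced=False, actor="user-9")  # verifies
print(f["status"], f["retest_executions"])  # history stays on one record
```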
Six failure modes that break the chain in real programmes
The chain breaks in recurring patterns. Each pattern has the same operational symptom: closure that looks defensible in the leadership view but is unprovable at audit. The patterns below describe what to watch for and what discipline counters each one.
| Failure mode | Counter discipline |
|---|---|
| Scanner output pasted into spreadsheets that drop the scan execution reference | Findings land on the engagement with the scan execution attached as the upstream anchor |
| Findings re-keyed across scanner, ticket, and report systems and the source link drops at each handoff | Detection, triage, remediation, and closure live on the same engagement record without re-keying |
| Retest evidence lands in a different record than the original detection | Retest is a new scan execution attached to the original finding rather than a parallel record |
| Severity recalibrated without recording the original scanner-emitted band | Both source-emitted and platform-canonical values retained on the finding; calibration logged in the activity feed |
| Suppression and dedup decisions live in chat instead of on the finding record | Suppression and dedup recorded on the finding with reason and actor; the next scan reads the suppression rather than re-raising |
| Activity log captures user actions but not scan-system actions, so audit sees triage but not detection | Scan executions, scan jobs, and finding state transitions all land in the activity feed alongside user actions |
For the dedup discipline that prevents re-key drift in the first place, the scanner output deduplication guide covers cross-tool merge, and the vulnerability scanner false positives guide covers durable suppression that survives the next scan cycle.
How the chain reads across scan classes
The chain works the same way across scan classes, but the upstream evidence layer differs. The sections below show what each class contributes to the upstream side of the chain and what the downstream side (triage, remediation, closure) inherits.
External scanning
The scan_execution carries the verified domain reference, the modules_total list (SSL/TLS, headers, ports, tech detection, DNS analysis, subdomain enumeration on paid plans), the modules_completed list (which actually ran to completion versus partial), and the result_summary. Findings produced bind to the scan execution and the module that produced them. The chain reads cleanly because the platform owns both the scan run and the finding record. External scanning evidence pairs with scan target validation evidence at the upstream end of the chain.
Authenticated scanning
The scan_execution carries the same evidence as external plus the credential reference and the authentication mode used (cookie, bearer, basic, form). The credential reference points to the encrypted credential storage record (AES 256 GCM at rest), so the chain carries the credential lineage without exposing the credential itself. When the credential rotates, the rotation event lands in the activity log and the next authenticated scan runs against the rotated reference. Authenticated scan evidence pairs with the authenticated scanner failure modes guide so the audit reader can distinguish a finding produced by a successful authenticated scan from a finding produced by a scan that fell back to unauthenticated coverage.
Code scanning (SAST and dependency analysis)
The code_scan_execution carries the repo reference, the branch, the commit SHA at scan time, the trigger (manual or scheduled), and the result_summary. SAST findings (via Semgrep) bind to the scan execution and the rule identifier; dependency findings bind to the scan execution and the vulnerability reference. The branch and commit SHA make the chain reproducible: a retest can run against the same commit to verify the fix landed before the branch moves.
Imported scans (Nessus, Burp Suite, CSV)
Imported findings carry the import event reference and the source format instead of the scan execution reference. The import event records the actor, the engagement, the source format, the imported count, and the import timestamp. Per-finding source references (Nessus plugin name, Burp issue name, CSV row context) stay attached so the trace from finding back to source is reproducible without the source file. The importing third-party scanner results guide covers the import-side discipline that keeps the chain intact for non-platform scans.
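The import path can be sketched as an event that carries actor, format, and count, with each finding keeping its per-row source reference (the CSV columns here are hypothetical):

```python
import csv, io
from datetime import datetime, timezone

raw = "title,severity,asset\nOpen redirect,medium,app.example.com\n"

def import_csv(text: str, actor: str, engagement: str):
    rows = list(csv.DictReader(io.StringIO(text)))
    # The import event is the upstream anchor for non-platform scans.
    event = {"actor": actor, "engagement": engagement, "source_format": "csv",
             "imported_count": len(rows),
             "imported_at": datetime.now(timezone.utc).isoformat()}
    findings = [{"title": r["title"], "severity": r["severity"],
                 "affected_asset": r["asset"],
                 "source_row": r,                 # per-finding source reference
                 "import_event": event} for r in rows]
    return event, findings

event, findings = import_csv(raw, actor="user-5", engagement="eng-12")
print(event["imported_count"], findings[0]["source_row"]["title"])
```

Keeping the row context on the finding is what makes the trace back to source reproducible after the source file is gone.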
Continuous monitoring
Scheduled scans (daily, weekly, biweekly, monthly per asset class) land each cycle as its own scan execution with the schedule reference attached. New detections either merge with an existing finding (original detection date preserved, latest detection date updated) or land as new findings with the scheduled scan as the originating reference. The scan baseline and trend comparison guide covers how to read the chain across many cycles so the cadence drives remediation rather than producing a stack of static reports.
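The merge rule can be sketched simply: a re-detection updates the latest detection date and preserves the original, while an unseen detection lands as a new finding bound to the scheduled execution (names are illustrative):

```python
# Sketch of the continuous-monitoring merge rule described above.
def ingest_detection(backlog: dict, key: str, scan_exec: str, seen_at: str) -> dict:
    existing = backlog.get(key)
    if existing:
        existing["last_detected_at"] = seen_at        # original date preserved
        existing["detected_by"].append(scan_exec)     # each cycle stays traceable
        return existing
    finding = {"key": key, "first_detected_at": seen_at,
               "last_detected_at": seen_at, "detected_by": [scan_exec]}
    backlog[key] = finding
    return finding

backlog = {}
ingest_detection(backlog, "tls-weak-cipher@example.com", "exec-1", "2025-01-01")
ingest_detection(backlog, "tls-weak-cipher@example.com", "exec-2", "2025-01-08")
f = backlog["tls-weak-cipher@example.com"]
print(f["first_detected_at"], f["last_detected_at"])  # 2025-01-01 2025-01-08
```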
Closure evidence the chain has to carry
Closure is the moment the chain has to be most defensible because it is the moment future audit reads. Five closure evidence requirements keep the chain audit-grade.
- Retest scan execution reference: the new scan execution that ran against the same target with the same module or rule and produced no detection of the original condition. The execution attaches to the finding rather than landing as a parallel record.
- Closure actor and timestamp: the workspace user who accepted the closure and the time the verified status was set. The activity log records both as part of the verification event.
- Remediation artefact reference: the commit, pull request, configuration change, patch, policy update, or document that delivered the fix. For code findings the artefact lives in the connected repository; for infrastructure findings it lives in the document management attachments on the engagement.
- Severity at closure: the canonical severity recorded at closure time, which may differ from the canonical severity at detection if recalibration occurred during triage. Both values stay on the finding so the calibration is reproducible.
- Open-to-verified duration: the difference between the open timestamp and the verified timestamp, which becomes the per-finding contribution to the workspace-level mean time to remediate. The activity log keeps the timestamps so the duration is reproducible.
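The duration requirement above can be sketched as a per-finding calculation aggregated into the workspace mean (the timestamps and record shape are illustrative):

```python
from datetime import datetime
from statistics import mean

# Per-finding open-to-verified duration, read from activity-log timestamps.
def open_to_verified_days(finding: dict) -> float:
    opened = datetime.fromisoformat(finding["opened_at"])
    verified = datetime.fromisoformat(finding["verified_at"])
    return (verified - opened).total_seconds() / 86400

closed = [
    {"opened_at": "2025-03-01T00:00:00", "verified_at": "2025-03-11T00:00:00"},
    {"opened_at": "2025-03-05T00:00:00", "verified_at": "2025-03-09T00:00:00"},
]
mttr = mean(open_to_verified_days(f) for f in closed)
print(mttr)  # (10 + 4) / 2 = 7.0 days
```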
For the closure throughput question that aggregates per-finding duration into workspace-level metrics, the vulnerability remediation throughput research covers the read patterns, and the mean time to detect vs remediate research covers the upstream-to-downstream balance across the chain.
How compliance frameworks read the evidence chain
The chain satisfies the technical vulnerability management control narrative across multiple frameworks. The expectations below set the floor; programmes justify additional discipline on a risk basis.
- PCI DSS v4.0 Requirement 6.3.3 expects identification of vulnerabilities, including risk ranking. The chain carries the scanner-emitted severity, the calibrated severity, and the calibration rationale so the ranking decision is documented at the per-finding level rather than as a standing policy. [3]
- PCI DSS v4.0 Requirement 11.3 expects evidence of internal and external scans against the in-scope environment. The scan_execution evidence is the operational form of this requirement; the chain from scan to closure proves the cycle ran rather than just the run. [3]
- PCI DSS v4.0 Requirement 11.4 expects external and internal penetration testing with retest of identified vulnerabilities. The retest binding to the original finding is the operational form of the retest requirement. [3]
- PCI DSS v4.0 Requirement 10.5.1 expects retention of audit trail history. The activity log is the audit trail; the CSV export is the retention artefact. [3]
- ISO 27001:2022 Annex A 8.8 (technical vulnerability management) expects a documented programme that identifies, assesses, and addresses vulnerabilities. The chain is the documented programme in operating form. [4]
- ISO 27001:2022 Annex A 8.15 (logging) expects logs that record activities, exceptions, and security events. Scan executions, finding state transitions, and exception decisions all land in the activity log. [4]
- SOC 2 Trust Services Criteria CC7.1 and CC7.2 read the chain as evidence of system operations and anomaly identification; CC4.1 reads the chain as evidence of monitoring of controls. [5]
- NIST SP 800-53 RA-5 (vulnerability monitoring and scanning), SI-2 (flaw remediation), AU-2 (event logging), and AU-12 (audit record generation) together form the federal control narrative the chain satisfies. [1]
- NIST SP 800-218 SSDF RV.1 and RV.2 (identify and confirm vulnerabilities; assess and prioritise vulnerabilities) read the code scanning side of the chain as evidence of the secure software development programme. [8]
- CIS Controls v8.1 Control 7 (continuous vulnerability management) and Control 8 (audit log management) together describe the chain as a single control surface across detection and audit evidence. [9]
- OWASP SAMM Verification function (Security Testing, Issue Management) reads the chain as evidence of mature issue management at level 2 and above. [7]
For the wider control mapping question that ties this chain into multi-framework crosswalks, the control mapping cross-framework crosswalks workflow covers the operating discipline that prevents the same chain producing five different evidence packs for five different audits.
Operational checklist for an audit-grade evidence chain
At scan execution
- The scan runs against a verified domain or a connected repository, not against an unverified target.
- The actor who initiated the scan is recorded on the scan execution.
- The modules_total and modules_completed lists are persisted so partial runs are visible.
- For authenticated scans, the credential reference and the authentication mode are recorded.
- For code scans, the branch and commit SHA at scan time are recorded.
- The started_at and completed_at timestamps land on the scan execution.
At detection
- The finding binds to the scan_execution that produced it.
- The detection module or rule reference is attached.
- The affected_asset is set to a value that resolves to a named owner.
- The scanner-emitted severity is retained even if triage will recalibrate.
- For imported findings, the import event reference and source format are attached.
During triage
- Severity is calibrated against environmental context with a CVSS 3.1 vector recorded.
- The scanner-emitted band stays on the finding alongside the calibrated band.
- Dedup against existing findings preserves the originating sources on the merged record.
- Suppression of false positives records the reason and the actor.
- Exception decisions for accepted risk record the approver and the re-evaluation trigger.
- Each transition lands in the activity log with the actor and timestamp.
During remediation
- The finding is assigned to a named owner.
- Remediation evidence (commit, pull request, configuration change, document) attaches to the finding or to the engagement document set.
- State transitions from open to in progress to resolved are logged.
- Re-detection during the remediation window updates the existing finding rather than creating a parallel one.
At closure and retest
- Retest is a new scan execution attached to the original finding.
- The retest scan execution reference is recorded on the verified status.
- The closure actor and timestamp land on the verification event.
- Failed retest reopens the finding with the failed retest execution attached.
- The verified-to-reopen-to-verified history stays on the finding rather than fragmenting across new records.
At audit or dispute
- Each closed finding traces in one query to the scan execution that produced it.
- The scan execution reproduces the conditions of the original detection (target, module, credential reference, branch and commit for code scans).
- Severity calibration is reproducible from the source-emitted value retained on the finding.
- Triage and remediation decisions are visible alongside the detection and closure events in the activity log.
- The CSV export of the activity log carries the chain across the audit observation window without the original tester present.
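With the chain on one record, the audit trace reduces to a single lookup. This sketch assumes in-memory stores and illustrative field names, not the platform API:

```python
# Hypothetical stores standing in for the engagement record.
SCAN_EXECUTIONS = {"exec-1": {"target": "example.com", "initiated_by": "user-2",
                              "modules_completed": ["ssl_tls"]}}
FINDINGS = {"f-1": {"status": "verified", "scan_execution": "exec-1",
                    "module": "ssl_tls", "source_severity": "high",
                    "severity": "medium", "retest_executions": ["exec-9"],
                    "verified_by": "user-4"}}

def trace(finding_id: str) -> dict:
    # One walk: finding -> scan execution -> module -> calibration -> closure.
    f = FINDINGS[finding_id]
    return {
        "detected_by": SCAN_EXECUTIONS[f["scan_execution"]],
        "module": f["module"],
        "calibration": (f["source_severity"], f["severity"]),
        "closure": {"retests": f["retest_executions"], "actor": f["verified_by"]},
    }

print(trace("f-1")["calibration"])  # ('high', 'medium')
```

When the same layers live in a scanner UI, a ticket system, and a spreadsheet, this one function becomes a reconciliation project.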
For internal security, AppSec, GRC, and vulnerability management teams
The evidence chain is the operational record internal teams hand to auditors, insurers, customers running due diligence, and regulators reading post-incident. The chain pays back across all four readers if the discipline holds at the per-finding level rather than as a year-end project.
- Hold detection, triage, remediation, and closure on the same engagement record. The chain is the engagement record; it is not a separate report or a periodic evidence pack.
- Treat the activity log as the chain of custody. Every state transition, every calibration, every suppression, and every exception is part of the audit answer.
- Bind retest to the original finding rather than running it as a standalone scan. Retest evidence that lives in a different record cannot defend the closure of the finding it was meant to retest.
- Retain both source-emitted and platform-canonical severity. The calibration rationale is what an auditor reads when the calibrated band differs from the scanner-emitted band; without the source value, the rationale collapses.
- Plan retention against the longest framework expectation in the workspace. The chain that satisfies SOC 2 may need extension to satisfy PCI DSS or NIST 800-53 audit log retention requirements; the scan evidence retention and governance guide covers retention by artefact class.
For internal security teams, AppSec teams, vulnerability management teams, and GRC and compliance teams, the chain is the most useful single artefact in the security operating model because it is what an external reviewer reads when the team is not in the room.
How SecPortal holds the chain together
SecPortal binds detection, triage, remediation, and closure to one engagement record. Every layer of the chain has a verifiable home on the platform record without re-keying or cross-system reconciliation.
Scan executions persist the upstream evidence
External, authenticated, and code scans persist the scan_execution with the actor (initiated_by), the target, the modules_total and modules_completed lists, the credential reference for authenticated runs, the started_at and completed_at timestamps, and the result_summary. The scan execution is the upstream anchor every finding it produced binds to.
Findings carry the canonical fields plus source values
The findings record carries title, severity, status, description, affected_asset, remediation, CVSS 3.1 score and vector, category, control reference, and the engagement reference. Source-emitted values from imported findings (Nessus plugin name, Burp issue name, CSV row context) stay on the finding so the calibration rationale is reproducible. Closure timestamps (resolved_at, verified_at) record the end of the chain.
Activity log records every transition
Scan executions, finding state transitions, severity calibrations, dedup merges, suppressions, exception decisions, retest events, and report generation all land in the activity feed alongside user-initiated actions. The CSV export reproduces the chain across the audit observation window. The activity log feature covers the workspace-level chain of custody shape.
Document management binds remediation evidence
Remediation artefacts (configuration changes, policy updates, regulator correspondence, vendor advisories, customer disclosure documents) attach to the engagement through document_management. The document reference stays on the closed finding so closure can be reproduced without searching for the related ticket or chat thread.
AI report generation reads the chain
AI report generation reads against the canonical finding set on the engagement, including the scan execution references, the calibrated severity, and the closure evidence. Reports produced from the chain reflect the operating record rather than reconstructed history. The AI reports feature covers the report generation surface; the chain is what the report reads.
Team management RBAC and MFA gate access
Team roles (owner, admin, member, viewer, billing) gate which actors can trigger scans, calibrate severity, suppress findings, and accept retest closure. MFA enforcement at the workspace level adds the authentication discipline the chain inherits. The actor recorded on each chain event is the authenticated workspace user, not a service account that obscures the accountable human.
For the broader feature surface that the chain reads against, the findings management feature, external scanning feature, authenticated scanning feature, code scanning feature, and continuous monitoring feature describe the verified surfaces the chain inherits its evidence from.
Related scanner discipline
The chain pairs with upstream scoping and validation discipline and with downstream triage and remediation discipline. The pages below cover the surrounding decisions.
- Scan target validation and authorisation covers the upstream evidence the chain inherits before the scan runs.
- Scan evidence retention and governance covers how long the chain artefacts are retained and how disposal is operated.
- Importing third-party scanner results covers how external scanner output joins the chain through the import path.
- Scanner output deduplication covers the merge discipline that prevents the chain forking across tools.
- Scan baseline and trend comparison covers how the chain reads across many cycles for the trend view.
- Security finding evidence package for developers covers the per-finding evidence pack that the chain produces for the remediation owner.
- Scanner result triage covers the triage transitions the chain records.
- Audit evidence retention and disposal covers the longer-term retention question the chain inherits.
For wider context on how multi-tool environments stay reconcilable across the chain, the security tool coverage overlap research covers how overlapping tools join the chain without producing duplicate audit trails.
Scope and limitations of this guide
The evidence chain is an operating discipline. It is not a configuration setting, and the platform cannot enforce it against a programme that splits detection, triage, remediation, and closure across systems. The chain pays back when teams operate detection through closure on the same engagement record and treat the activity log as the audit answer.
Programmes that re-key findings across systems lose the chain at each handoff. Programmes that hold findings on one record but capture decisions in chat lose the triage layer. Programmes that retest into a new record lose the closure binding. The discipline is consistent regardless of platform; the platform makes the discipline cheap to operate rather than imposing it from the outside.
Sources
- NIST, SP 800-53 Rev. 5 (RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation, AU-2 Event Logging, AU-12 Audit Record Generation)
- NIST, SP 800-115 Technical Guide to Information Security Testing and Assessment
- PCI Security Standards Council, PCI DSS v4.0 (Requirements 6.3.3, 11.3, 11.4, 10.5.1)
- ISO/IEC, ISO 27001:2022 Annex A 8.8 Technical Vulnerability Management, Annex A 8.15 Logging
- AICPA, SOC 2 Trust Services Criteria CC7.1 (System Operations), CC7.2 (Anomaly Identification), CC4.1 (Monitoring of Controls)
- FIRST, Common Vulnerability Scoring System (CVSS) v3.1 Specification (Environmental Metric Group)
- OWASP, Software Assurance Maturity Model (SAMM) Verification Function (Security Testing, Issue Management)
- NIST, SP 800-218 Secure Software Development Framework (RV.1 Identify and Confirm Vulnerabilities, RV.2 Assess and Prioritize Vulnerabilities)
- CIS Controls v8.1 (7 Continuous Vulnerability Management, 8 Audit Log Management)
- SecPortal, Findings Management Feature
- SecPortal, Activity Log Feature
- SecPortal, External Scanning Feature
Operate the scanner evidence chain on one engagement record
SecPortal binds external, authenticated, and code scan executions to the findings they produce, retains scanner-emitted and calibrated severity side by side, records every triage transition in the activity log with CSV export, attaches retest scan executions to the original finding, and reads AI reports against the canonical chain. Free plan available.