Importing Third-Party Scanner Results: Nessus, Burp, and CSV
Most enterprise vulnerability programmes already run scanners. A Nessus instance covers the network and infrastructure side. A Burp Suite licence covers the web application side. A SAST or SCA tool covers the code and dependency side. The operational question is not which scanner to choose; it is how to bring the findings those scanners already produced into one record where triage, severity calibration, retest, and closure run as one workflow rather than as four parallel ones.
This guide covers the import step that turns Nessus .nessus exports, Burp Suite XML exports, and CSV files into structured findings on the engagement record. It walks through the parsers, the severity normalisation across scanner scales, the CSV column mapping decision, the post-import triage that promotes drafts into canonical findings, and the audit trail the import has to preserve so the chain from scanner output to closed finding is reproducible.
Import preserves the work the scanner has already done
The default position in many programmes is to re-run a scan inside the new platform because that is the easiest path to a findings record. The cost is duplicate scan traffic against assets the original tool already covered, duplicate licence usage, duplicate authorisation, and a findings record with no continuity to the prior cycle. Importing the existing scanner output is the discipline that preserves the work and keeps the operational record continuous.
| Situation | Right choice |
|---|---|
| Source scanner already licensed and operational with team expertise | Import the existing output |
| Scan ran against assets behind a VPN or restricted network | Import; the platform cannot reach the assets directly |
| Backlog of historic findings in spreadsheets and prior reports | Import as CSV onto a backlog engagement |
| External web asset with no scanner licensed, simple coverage need | Run external or authenticated scan inside the platform |
| Source repository connected for SAST and dependency analysis | Run code scan inside the platform via the repository connection |
For the broader question of when to add a SecPortal scan rather than import, the external scanning, authenticated scanning, and code scanning features describe the in-platform scan classes the engagement can run alongside imported scanner output. The workspace mixes import and direct scanning rather than choosing one at the workspace level.
What each supported format actually carries
Three formats cover the common cases at the operational level. Each carries a different set of structured fields, and the import step parses each one into the same finding shape so the engagement record reads consistently regardless of where the output came from.
Nessus .nessus (XML)
The Nessus native export wraps the scan output in a NessusClientData_v2 root, with ReportHost entries per scanned target and ReportItem entries per finding on each host. The parser reads each item's plugin name, severity integer, port, protocol, synopsis text, and solution text, reconstructs the affected asset from the host FQDN or IP combined with the port and protocol, and emits a finding with the normalised severity attached. Plugin identifiers and the source scan context stay attached to the imported finding so the trace from finding back to scanner run is reproducible.
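A minimal sketch of that walk in Python, assuming the standard NessusClientData_v2 layout; the emitted field names and the asset format are illustrative, not SecPortal's stored schema.

```python
import xml.etree.ElementTree as ET

# Nessus severity integers to the five-band scale (see the normalisation table below)
NESSUS_BANDS = {4: "critical", 3: "high", 2: "medium", 1: "low", 0: "info"}

def parse_nessus(path: str) -> list[dict]:
    """Walk ReportHost/ReportItem entries in a NessusClientData_v2 export."""
    findings = []
    root = ET.parse(path).getroot()
    for host in root.iter("ReportHost"):
        host_name = host.get("name", "")  # FQDN or IP, as exported
        for item in host.findall("ReportItem"):
            findings.append({
                "title": item.get("pluginName", ""),
                "severity": NESSUS_BANDS.get(int(item.get("severity", "0"))),
                # one plausible host + port/protocol combination (illustrative)
                "affected_asset": f'{host_name}:{item.get("port")}/{item.get("protocol")}',
                "description": (item.findtext("synopsis") or "").strip(),
                "remediation": (item.findtext("solution") or "").strip(),
                "source_ref": item.get("pluginID", ""),  # trace back to the scanner run
            })
    return findings
```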
Burp Suite .xml
The Burp Suite XML export carries an issues collection where each issue records the issue name, severity (textual: high, medium, low, info), host, path, request and response evidence, the issueBackground (the general explanation), the issueDetail (the specific instance evidence), the remediationBackground (the general fix guidance), and the remediationDetail (the specific fix). The parser joins the background and detail fields into the description and the remediation fields, strips embedded HTML so the text reads cleanly in the workspace, and concatenates host and path to produce the affected asset. The result is a finding record that preserves the request and response evidence Burp Suite produced rather than flattening it to a title and severity.
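A comparable sketch for the Burp export, assuming the standard issues collection layout; the regex tag strip stands in for a real HTML sanitiser.

```python
import re
import xml.etree.ElementTree as ET

def strip_html(text: str | None) -> str:
    # crude tag strip; a real sanitiser would also decode entities
    return re.sub(r"<[^>]+>", "", text or "").strip()

def join_parts(*parts: str) -> str:
    # background and detail joined into one readable field, skipping empties
    return "\n\n".join(p for p in parts if p)

def parse_burp(path: str) -> list[dict]:
    """Walk the <issues><issue> collection in a Burp Suite XML export."""
    findings = []
    for issue in ET.parse(path).getroot().iter("issue"):
        findings.append({
            "title": issue.findtext("name") or "",
            "severity": (issue.findtext("severity") or "info").lower(),
            # host and path concatenated into the affected asset
            "affected_asset": (issue.findtext("host") or "") + (issue.findtext("path") or ""),
            "description": join_parts(strip_html(issue.findtext("issueBackground")),
                                      strip_html(issue.findtext("issueDetail"))),
            "remediation": join_parts(strip_html(issue.findtext("remediationBackground")),
                                      strip_html(issue.findtext("remediationDetail"))),
        })
    return findings
```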
CSV (.csv)
CSV is the universal fallback when the source tool only emits CSV: a legacy spreadsheet, a prior pentest report exported as CSV, a regulator-formatted workbook, or a SAST tool whose CSV export is the only path that fits the workflow. The CSV parser reads the header row, detects suggested mappings for title, severity, description, affected asset, and remediation against common header names, and presents the proposed mapping for confirmation before the import runs. Title is the only required field; other mappings are optional. The import keeps the source CSV header reference attached to each imported finding so the audit reader can trace which column produced each value.
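A minimal sketch of the row pass once a mapping is confirmed, assuming the skip-on-empty-title rule above; the function name and finding shape are illustrative. Header autodetection itself is sketched in the column mapping section below.

```python
import csv

def import_csv(path: str, mapping: dict[str, str]) -> list[dict]:
    """Apply a confirmed mapping of finding field -> CSV header to each row."""
    findings = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            title = (row.get(mapping["title"]) or "").strip()
            if not title:
                continue  # rows with an empty title are skipped at import
            finding = {"title": title, "source_headers": dict(mapping)}  # audit trace
            for field, header in mapping.items():
                if field != "title":
                    finding[field] = row.get(header) or None  # optional fields stay null
            findings.append(finding)
    return findings
```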
For format-by-format trade-offs across the broader landscape (SARIF, JSON, native scanner CSVs), the scanner output formats guide walks through what each format preserves and where each one drops fidelity.
Severity normalisation across scanner scales
Different scanners emit severity on different scales, and the engagement record reads a single normalised scale so the leadership view, the remediation queue, and the audit lookback all reconcile. The import step normalises every emitted severity into the same five bands and retains the source value so the rationale is reconstructable.
| Source scale | Normalised band |
|---|---|
| Nessus level 4 | Critical |
| Nessus level 3 / Burp high | High |
| Nessus level 2 / Burp medium | Medium |
| Nessus level 1 / Burp low | Low |
| Nessus level 0 / Burp info / informational | Info |
| CSV: critical, crit, urgent | Critical |
| CSV: numeric 1-3 (low to high) | Mapped to band by integer position |
| CSV: empty, unrecognised, or null | Left null for tester to set during triage |
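The table collapses to a small lookup. A minimal sketch covering the CSV textual and numeric scales; Nessus integers are handled in the format parser above, and the CVSS-score column case is omitted for brevity.

```python
TEXT_BANDS = {
    "critical": "critical", "crit": "critical", "urgent": "critical",
    "high": "high", "medium": "medium", "low": "low",
    "info": "info", "informational": "info",
}
NUMERIC_BANDS = {1: "low", 2: "medium", 3: "high"}  # CSV numeric 1-3, low to high

def normalise_severity(raw) -> str | None:
    if raw is None:
        return None  # left null for the tester to set during triage
    value = str(raw).strip().lower()
    if value.isdigit() and int(value) in NUMERIC_BANDS:
        return NUMERIC_BANDS[int(value)]
    return TEXT_BANDS.get(value)  # None when empty or unrecognised
```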
Severity normalisation at import is a starting point, not the canonical severity. The import band reflects what the scanner emitted; the engagement-canonical severity comes from triage that pairs the scanner band with environmental context (asset exposure, data sensitivity, exploit availability, blast radius). The CVSS scoring guide covers how to recalibrate against environmental metrics, and the vulnerability prioritisation framework covers the wider prioritisation read once severity is set.
CSV column mapping: what to confirm before the import runs
Nessus and Burp Suite imports parse against a known schema, so there is no mapping decision to make. CSV imports are different because the same scanner class can emit CSVs with different column layouts. The import step reads the header row, proposes a mapping based on common header names, and asks the importer to confirm or override before running the import. Five fields make a workable mapping.
Title (required)
The title field carries the short label that names the finding on the engagement queue. The autodetector matches header names containing title, name, vulnerability, finding, issue, plugin name, or pluginname. Title is the only required mapping; rows with empty title are skipped at import. If no header matches the autodetector, the importer selects the title column manually before the import runs.
Severity (recommended)
The severity field maps to the five-band scale at import. The autodetector matches headers containing severity, risk, risk_factor, risk factor, cvss, rating, or priority. Numeric scales (1 to 3) and textual scales (critical, high, medium, low, info) are both normalised. If the source CSV carries a CVSS score column rather than a band, the import step recognises the column header and value pattern and maps the score into a band. Rows with unrecognised severity get a null severity that the tester sets during triage.
Description (optional)
The description field carries the explanatory text on the finding. The autodetector matches headers containing description, detail, synopsis, summary, or overview. Where the source CSV contains a description, the import preserves it. Where the source CSV does not, the description stays null and the tester completes it during triage.
Affected asset (optional)
The affected asset field carries the host, URL, package, or file the finding applies to. The autodetector matches headers containing asset, host, ip, target, affected, or url. Asset is operationally the most important field after title and severity because the remediation owner is resolved through the asset-to-owner mapping; importing without an asset forces the tester to add one before the finding can be routed to remediation.
Remediation (optional)
The remediation field carries the suggested fix text. CSV exports from pentest tools and SAST tools sometimes include this; CSV exports from regulator-formatted spreadsheets sometimes do not. The autodetector matches headers containing remediation, fix, solution, or recommendation. Imported remediation text is a starting point that triage refines for the environment.
The mapping is confirmed before the import runs against the engagement, and the preview step shows the parsed rows with the chosen mapping applied so the importer can correct mistakes before the rows land. Title that maps to an empty column produces zero imported findings; severity that maps to an unrecognised scale produces null severity rather than a wrong band.
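A minimal sketch of the autodetection pass, using the keyword lists from the field sections above; first-match substring detection and the return shape are simplifications.

```python
import csv

# keyword lists from the field sections above
FIELD_KEYWORDS = {
    "title": ("title", "name", "vulnerability", "finding", "issue", "plugin name", "pluginname"),
    "severity": ("severity", "risk", "risk_factor", "risk factor", "cvss", "rating", "priority"),
    "description": ("description", "detail", "synopsis", "summary", "overview"),
    "affected_asset": ("asset", "host", "ip", "target", "affected", "url"),
    "remediation": ("remediation", "fix", "solution", "recommendation"),
}

def suggest_mapping(path: str) -> dict[str, str]:
    """Propose finding field -> CSV header; the importer confirms or overrides."""
    with open(path, newline="") as f:
        headers = next(csv.reader(f))
    mapping = {}
    for field, keywords in FIELD_KEYWORDS.items():
        for header in headers:
            if any(kw in header.strip().lower() for kw in keywords):
                mapping[field] = header
                break
    return mapping  # no 'title' key means the importer selects it manually
```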
Post-import triage promotes drafts into canonical findings
Imported findings start as draft against the engagement so triage runs before the import becomes canonical. A draft is not on the leadership view, not on the remediation queue, and not on the client portal. Triage is the deliberate step that promotes drafts into the canonical state where the rest of the workflow inherits them.
1. Reproduce or refute
The first triage step confirms that the imported finding still applies against the asset. Findings that were valid at scan time can become invalid by the time of import (the asset was patched, decommissioned, or moved behind a different control), and the platform record should not promote a finding that no longer describes the running environment. Reproduction evidence is attached to the finding alongside the original scanner output so the audit chain carries both the scanner detection and the platform verification.
2. Calibrate severity for the environment
The imported severity reflects what the scanner emitted in its own context. The engagement-canonical severity is calibrated against the environment: a finding on an internet-facing asset rated high by Nessus stays high; the same finding on an isolated internal subnet may calibrate down, with the rationale recorded. Severity calibration is logged so the leadership view sees the calibrated band and the audit reader sees both the source and the calibration.
3. Resolve to an asset and an owner
The affected asset on the imported finding resolves to a named owner through the asset-to-owner mapping on the engagement. Findings that arrive with an unrecognised asset pause at a routing-decision step rather than landing in a shared backlog. The asset ownership mapping for findings workflow covers how the owner is resolved so the remediation queue reads as named work rather than unowned items.
4. Suppress confirmed false positives durably
Findings that triage refutes get suppressed with a reason and an actor, and the suppression is recorded on the workspace so the next import against the same asset does not re-raise the same condition without context. The false positives guide covers the durable suppression discipline.
5. Promote the surviving findings into delivery
Findings that survive triage are promoted from draft to canonical. The leadership view, the remediation queue, the AI-generated reports, and the client portal all read against the canonical set. Drafts that were suppressed, deduplicated, or refuted stay on the workspace audit trail so the operating record reflects the full import rather than only the surviving rows.
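A minimal sketch of the promotion gate those steps feed, assuming three states and an append-only audit list; the state names follow this guide, everything else is illustrative.

```python
from enum import Enum

class FindingState(Enum):
    DRAFT = "draft"            # not on the leadership view, queue, or portal
    CANONICAL = "canonical"    # promoted; the rest of the workflow inherits it
    SUPPRESSED = "suppressed"  # refuted; stays on the workspace audit trail

ALLOWED = {
    FindingState.DRAFT: {FindingState.CANONICAL, FindingState.SUPPRESSED},
    FindingState.CANONICAL: set(),
    FindingState.SUPPRESSED: set(),
}

def transition(finding: dict, new_state: FindingState, actor: str, reason: str = "") -> None:
    if new_state not in ALLOWED[finding["state"]]:
        raise ValueError(f"{finding['state'].value} -> {new_state.value} is not a triage move")
    finding["state"] = new_state
    # every move is logged with the actor so the audit chain stays reproducible
    finding.setdefault("audit", []).append(
        {"to": new_state.value, "actor": actor, "reason": reason}
    )
```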
For the operating discipline that turns a backlog of imported findings into a clean baseline, the bulk finding import workflow covers the end-to-end migration pattern, and the scanner result triage workflow covers the per-finding triage decisions that determine which drafts become canonical.
Operational limits the import step enforces
The import path is bounded by deliberate limits so a single misconfigured upload does not flood the engagement record or the activity log. The limits below apply at the platform level and surface at upload time rather than as silent truncation.
- File size cap of 20 MB per upload. Larger Nessus exports are typically scan archives across many hosts; the defensible pattern is to split the export by host group or by scan policy and import each subset against the relevant engagement.
- Up to 500 findings per import on the bulk endpoint. Larger backlogs split across multiple imports against the same engagement, with each import logged in the activity feed so the chronology is reproducible.
- Rate limit of five bulk imports per fifteen minutes per user. The rate limit applies to the bulk endpoint that writes findings rather than to the parsing step, so file selection and column mapping can be revised without consuming the rate budget.
- Engagement-scoped quota from the plan tier. The import step checks the per-engagement and lifetime finding quota before the rows land and rejects with a remaining-count message rather than partial import.
- Workspace permission gate: the bulk import permission is required on the team role. Roles without the permission see the import button as disabled rather than receiving an error mid-import.
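A minimal sketch of how those five gates might compose at upload time; the signature and exception types are illustrative, not the platform's API.

```python
MAX_FILE_BYTES = 20 * 1024 * 1024  # 20 MB per upload
MAX_FINDINGS = 500                 # per bulk import
MAX_IMPORTS_PER_WINDOW = 5         # per fifteen minutes per user

def check_import(file_bytes: int, finding_count: int, recent_imports: int,
                 quota_remaining: int, can_bulk_import: bool) -> None:
    if not can_bulk_import:
        raise PermissionError("bulk import permission missing on the team role")
    if file_bytes > MAX_FILE_BYTES:
        raise ValueError("file over 20 MB; split the export by host group or scan policy")
    if finding_count > MAX_FINDINGS:
        raise ValueError("over 500 findings; split across multiple imports")
    if recent_imports >= MAX_IMPORTS_PER_WINDOW:
        raise ValueError("rate limit: five bulk imports per fifteen minutes per user")
    if finding_count > quota_remaining:
        raise ValueError(f"plan quota headroom is {quota_remaining} findings")
```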
For the broader plan-limit picture the import budget reads against, the pricing page covers the per-engagement and lifetime caps by plan, and the team management feature covers the role-based permissions the import step inherits.
Audit trail the import step preserves
The import is not just a data load. It is the moment when scanner output crosses from a tool the workspace does not control into the engagement record the workspace owns, and the audit trail has to carry the provenance. Five evidence artefacts make a defensible import record.
- Source format and originating tool: the file format detected at upload (Nessus, Burp, CSV) and the tool reference the import was attributed to. The activity log records the format on the import event.
- Import timestamp and actor: the workspace user who triggered the import and the timestamp the platform accepted the upload. The activity log carries both as part of the bulk-import event.
- Imported count and rejection reasons: the number of findings successfully imported, the number skipped (empty title), the number truncated (over 500), and the rejection reason where applicable. The count surfaces on the import preview before the run and on the activity log after the run.
- Per-finding source reference: the originating scanner reference (Nessus plugin name, Burp issue name, CSV row context) attached to each imported finding so the trace from finding back to source is reproducible without the source file.
- Triage state transitions: every move from draft to canonical, every suppression, every dedup merge, and every severity calibration logged in the activity feed with the actor and the timestamp. The import event is one row; the triage history continues from there.
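A minimal sketch of the record shape those artefacts imply; the dataclass fields are illustrative, not the stored schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ImportEvent:
    source_format: str         # "nessus" | "burp" | "csv"
    actor: str                 # workspace user who triggered the import
    timestamp: datetime        # when the platform accepted the upload
    engagement_ref: str
    imported_count: int
    skipped_count: int = 0     # e.g. rows with an empty title
    rejection_reasons: list[str] = field(default_factory=list)

@dataclass
class ImportedFinding:
    title: str
    import_event_ref: str      # one-query trace back to the import event
    source_ref: str = ""       # plugin name, issue name, or CSV row context
    source_severity: str = ""  # retained so the normalisation rationale is reproducible
```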
For the activity-log shape the import events read against, the activity log feature covers the workspace-level chain of custody, and the scan evidence retention and governance guide covers how long the import evidence is retained alongside the scan record.
How compliance frameworks read scanner-import evidence
Imported scanner findings sit inside the same control narrative as in-platform scans: the technical vulnerability management programme. The expectations below set the floor; programmes justify additional discipline on a risk basis.
- PCI DSS v4.0 Requirement 6.3.3 expects identification of vulnerabilities including their risk ranking. The import step preserves the scanner-emitted severity and pairs it with calibrated environmental severity so the ranking decision is documented [6].
- PCI DSS v4.0 Requirement 11.3 expects evidence of internal and external scans against the in-scope environment. Where the scanner is licensed outside the platform (a Nessus instance owned by operations), the import record is the bridge that lets the audit reader trace from the scan to the remediation evidence on the same workspace [6].
- ISO 27001:2022 Annex A 8.8 (technical vulnerability management) expects a documented programme. The import preserves the scanner output as evidence of detection alongside the platform evidence of triage, calibration, and closure [7].
- SOC 2 Trust Services Criterion CC7.1 (System Operations) reads the import evidence as part of the operating record of vulnerability identification and analysis [8].
- NIST SP 800-53 RA-5 (vulnerability monitoring and scanning) and SI-2 (flaw remediation) expect a programme that identifies, analyses, and remediates flaws. The import is the operational form of the analysis-input step, and the engagement record is the operational form of the remediation step [4].
For the broader vulnerability management programme the import step plugs into, the vulnerability prioritisation workflow covers the prioritisation read after import, remediation tracking covers the close-out side, and audit evidence retention and disposal covers the longer-term retention question the import evidence inherits.
Operational checklist for a defensible scanner-import workflow
Before the import
- The engagement that will receive the import has been created with the right scope and asset list.
- The source file format is one the parser supports natively (.nessus, .xml for Burp, or .csv).
- The file is under 20 MB; larger exports have been split by host group or scan policy.
- The importer has the bulk_import permission on the workspace team role.
- The plan-tier finding quota has headroom for the import volume.
During the import
- The format is detected automatically; if detection fails, the file is renamed or re-exported in a supported format.
- For CSV, the proposed column mapping is reviewed and overridden where the autodetector miscalled a header.
- The preview shows the parsed rows with chosen mapping; the importer corrects mistakes before promotion.
- Selection on the preview removes rows that should not be imported (duplicates of pre-existing findings, out-of-scope rows, scanner banners that are not findings).
- Severity normalisation is reviewed; rows with null severity get noted for triage.
After the import
- The activity log records the import event with actor, timestamp, count, and engagement reference.
- Imported findings are reviewed in draft and triaged before promotion into canonical state.
- False positives are suppressed with a reason and an actor; suppressions are reused on the next import.
- Severity is calibrated against environmental context; the calibration is logged.
- The asset-to-owner mapping resolves each imported asset to a named owner; unresolved assets pause at routing-decision.
At audit or dispute
- Each imported finding traces in one query to the import event that produced it.
- The import event names the source format, the actor, the timestamp, and the imported count.
- Severity normalisation rationale is reproducible from the source value retained on the finding.
- Triage state transitions (draft to canonical, suppressions, dedup merges) are visible alongside the import.
- The CSV export of the activity log carries the import evidence as part of the audit record.
For internal security, AppSec, and vulnerability management teams
Internal teams with existing scanner investments treat scanner import as the default integration path with SecPortal, not as a fallback. The Nessus instance stays where it is; the Burp Pro licence stays with the AppSec engineer who owns it; the SAST or SCA tool stays connected to the source repository. The import step brings the findings those tools produce into the engagement record where triage, severity calibration, retest, and closure run as one workflow.
- Hold the source-of-truth decision: the canonical finding lives on the engagement record; the source scanner UI is the detection record, not the operational record.
- Run imports on the same cadence as the source scanner so the engagement record stays current with the detection.
- Calibrate severity at import rather than carrying source-scanner severity through to the leadership view; the calibration is part of the operational discipline.
- Reconcile imports against the existing finding catalogue so duplicates merge with the originating source attached.
- Treat the activity log as the chain of custody from import event to closure event; the audit chain is one record rather than a reconstructed thread.
For internal security teams, vulnerability management teams, AppSec teams, and GRC and compliance teams, scanner import is often the first integration step before anything else moves. The scanner to ticket handoff governance workflow covers what happens after import once the canonical finding is routed to engineering.
How SecPortal imports third-party scanner results
SecPortal handles the import as a bounded operation against the engagement record. The supported formats are .nessus, Burp Suite .xml, and .csv. The import event lands in the activity log alongside scan executions and finding state changes, so the engagement record reads as one chronology regardless of whether the detection ran inside the platform or arrived from an external tool.
Native parsers for Nessus and Burp Suite
The Nessus parser reads ReportHost and ReportItem entries, maps the integer severity (4, 3, 2, 1, 0) to the five-band scale (critical, high, medium, low, info), reconstructs the affected asset from host FQDN or IP combined with port and protocol, and pulls synopsis and solution text into description and remediation. The Burp Suite parser walks the issues collection, maps the textual severity, joins issueBackground with issueDetail and remediationBackground with remediationDetail, and strips embedded HTML.
CSV with column mapping and autodetection
The CSV parser reads the header row, autodetects suggested mappings for title, severity, description, affected asset, and remediation, and presents the proposal for confirmation before the import runs. Numeric and textual severity scales normalise to the same five-band scale. Title is the only required mapping; rows with empty title are skipped.
Bounded import with audit trail
Imports are bounded at 20 MB per file and 500 findings per import. The bulk endpoint is rate-limited at five imports per fifteen minutes per user to keep the activity log readable. Each successful import records the actor, timestamp, engagement reference, and imported count to the activity log alongside scan executions and finding state changes.
Plan-aware quota enforcement
The import path checks the per-engagement and lifetime finding quota before the rows land. When the quota would be exceeded, the import returns the remaining headroom as a structured response rather than partially loading rows that get rejected silently. The quota reads against the workspace plan tier and inherits from the workspace subscription state.
Permission gate on the team role
The bulk import permission is required on the workspace team role. Owner, admin, and member roles carry the permission by default; viewer and billing roles do not. The import step checks the permission before parsing and returns a permission-denied response if the requesting user lacks it, so the parser does not run against files the requesting user has no authority to import.
The findings produced by the import flow into the same record as findings produced by external scans, authenticated scans, and code scans running inside the platform. The findings management feature covers the canonical finding shape the import lands against.
Related scanner discipline
Scanner import pairs with format selection upstream and triage discipline downstream. The pages below cover the surrounding decisions.
- Scanner output formats covers the format-by-format trade-offs the import inherits.
- Scanner output deduplication covers cross-tool deduplication once multiple sources land on the same engagement.
- Vulnerability scanner false positives covers the suppression discipline that pairs with post-import triage.
- Scanner coverage and limits covers what each scanner class can detect so the import volume reflects the coverage profile.
- Scan evidence retention and governance covers how long the import evidence is retained alongside the scan record.
- Scanner evidence chain from scan execution to closed finding covers the end-to-end chain that imported findings join through the import event reference, so the audit trace from imported source to closed finding stays reproducible.
- Bulk finding import feature covers the parser-by-parser capability detail, the operating limits, the RBAC gate, and the audit fields preserved on every imported finding.
- Bulk finding import workflow covers the migration pattern when a backlog of findings has to land on the engagement record.
- Scanner result triage workflow covers the per-finding triage decisions that promote drafts into canonical state.
For wider context on the multi-tool consolidation question, the security tool coverage overlap research covers how overlapping tools reconcile so the import does not produce duplicate audit chains.
Scope and limitations of this guide
Scanner import is a bounded operation, not an integration with the source scanner UI. SecPortal does not poll Nessus or Burp Suite for new scan results, does not authenticate against the source scanner, and does not stream output as the source scan runs. The export-then-import pattern is deliberate: the workspace owns the finding once the import completes, and the source scanner stays under its own licence and operating model.
Programmes that treat scanner import as one-time onboarding lose continuity with the source scanner cadence. Programmes that treat scanner import as a recurring operation (one import per scan cycle, paired with deduplication against the engagement record) keep the engagement record current with the detection without duplicating the scan itself.
Sources
1. Tenable, Nessus File Format Documentation
2. PortSwigger, Burp Suite Issue Definitions and Reporting
3. OASIS, Static Analysis Results Interchange Format (SARIF) v2.1.0
4. NIST, SP 800-53 Rev. 5 (RA-5 Vulnerability Monitoring and Scanning; SI-2 Flaw Remediation)
5. NIST, SP 800-115 Technical Guide to Information Security Testing and Assessment
6. PCI Security Standards Council, PCI DSS v4.0 (Requirements 6.3.3, 11.3, 11.4)
7. ISO/IEC, ISO 27001:2022 Annex A 8.8 Technical Vulnerability Management
8. AICPA, SOC 2 Trust Services Criteria CC7.1 (System Operations)
9. FIRST, Common Vulnerability Scoring System (CVSS) v3.1 Specification
10. SecPortal, Findings Management Feature
11. SecPortal, Engagement Management Feature
12. SecPortal, Activity Log Feature
Bring Nessus, Burp, and CSV scanner output into one engagement record
SecPortal parses Nessus .nessus, Burp Suite .xml, and CSV exports into draft findings on the engagement, normalises severity across scanner scales, autodetects CSV column mappings, logs every import event in the activity feed, and lets triage promote the surviving rows into canonical findings the leadership view, the remediation queue, and the client portal read against. Free plan available.