Bulk finding import
onboard a backlog without rebuilding it
Import a backlog of vulnerability findings from Nessus, Burp Suite, prior pentest PDFs, or any CSV onto a single engagement record. Map columns once, deduplicate against the existing catalogue, calibrate severity for the environment, and start working from a clean baseline rather than a spreadsheet stitched together by hand.
No credit card required. Free plan available forever.
Onboard a backlog of findings without rebuilding the spreadsheet
Pentest teams onboarding a new client, consultancies migrating from spreadsheets, and internal security teams consolidating vendor reports all face the same problem: a backlog of findings spread across PDFs, CSVs, scanner exports, and tracker dumps that has to land on a single record before any real work can begin. Done by hand, the migration takes days and produces a catalogue that nobody trusts. Done with bulk import, it takes one pass and produces a clean baseline that downstream triage, reporting, and remediation can build on.
SecPortal models bulk import as a first-class step on the engagement record. Source files are preserved with their original metadata, column mappings are explicit and reusable, dedup runs against the existing catalogue before findings commit, severity is preserved verbatim alongside the working calibration, and failed rows surface in an errors report rather than blocking the batch. The output is a finding queue ready for triage, not a flat list that has to be re-cleaned before it can be used.
Supported source formats
The same import workflow accepts native scanner exports and arbitrary CSVs from prior engagements or legacy trackers. Each format preserves the source-specific data that the downstream triage workflow expects.
Nessus (.nessus)
Native Tenable Nessus exports preserve the plugin ID, plugin family, plugin output, severity, CVSS vector, CVE references, and host details. Drop the .nessus file onto the engagement and each plugin entry lands as a finding with the original tool data attached for the audit trail.
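For teams that script their own pre-processing before upload, the shape of the data is easy to inspect. Below is a minimal Python sketch of reading a .nessus (v2) export with the standard library, assuming the usual ReportHost/ReportItem layout; the finding dictionary shape is illustrative, not a SecPortal schema.

```python
# Minimal sketch: pull findings out of a Nessus v2 export with the stdlib.
# Element and attribute names follow the .nessus v2 layout; the output
# dictionary shape is illustrative only.
import xml.etree.ElementTree as ET

def parse_nessus(path):
    findings = []
    root = ET.parse(path).getroot()  # <NessusClientData_v2>
    for host in root.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            findings.append({
                "asset": host.get("name"),
                "port": item.get("port"),
                "plugin_id": item.get("pluginID"),        # source-tool fingerprint
                "plugin_family": item.get("pluginFamily"),
                "title": item.get("pluginName"),
                "severity": item.get("severity"),          # 0-4, preserved verbatim
                "cvss_vector": item.findtext("cvss3_vector"),
                "cves": [c.text for c in item.findall("cve")],
                "plugin_output": item.findtext("plugin_output"),
            })
    return findings
```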
Burp Suite (.xml)
Burp Scanner XML exports carry issue type, severity, confidence, request and response, and the affected URL. Imports preserve the request and response payloads on the finding so the reproduction evidence is part of the record from the moment of import.
CSV
For prior pentest exports, legacy tracker dumps, vendor reports, and ad-hoc spreadsheets, CSV import accepts any column layout. Map the columns once with the column-mapping template, save the template by source, and the next file from the same source imports without a manual mapping pass.
SAST and SCA output
Findings from the integrated code scanner (SAST via semgrep, SCA against vulnerable dependencies) land on the same engagement with file path, line number, package and version, and rule identifier preserved. The same dedup and calibration workflow applies whether the source was a DAST tool, a SAST tool, or a manual entry.
When teams reach for a bulk import
Bulk import is the right workflow when the input is a backlog rather than a single engagement-fresh scan. Five scenarios cover most of the real-world reasons teams import findings in batches.
New client with a year of prior pentest reports
A new client lands with three prior pentest PDFs covering the last twelve months. Bulk import the structured exports onto a baseline engagement so the catalogue starts with a known history rather than a blank slate. Aging, severity, and remediation context all carry forward.
Migrating from a spreadsheet tracker
A consulting team has been tracking findings in Excel for two years across thirty engagements. Export the master sheet to CSV, map the columns to the SecPortal schema once, and migrate the catalogue onto engagement records without rekeying a single finding.
Migrating from a Jira-based findings tracker
A team running pentest findings as Jira issues exports the relevant projects to CSV through the Jira filter export. Map the Jira fields (summary, description, priority, custom CVSS field, status, labels) to the SecPortal schema once, and the historical findings land on engagement records with the Jira key preserved as a reference. See the SecPortal vs Jira comparison for the rest of the migration story.
Vendor report intake
A third-party vendor delivers a vulnerability assessment as a PDF and a CSV appendix. Import the CSV onto an intake engagement, deduplicate against the existing catalogue, and roll the unique entries into the live tracker so vendor work integrates with the in-house programme.
Programmatic batch from a SOAR or pipeline
A CI pipeline produces a nightly SAST and SCA digest. Land the batch on a continuous-testing engagement, dedupe against the prior nights, and only the genuinely new findings enter the triage queue. The pipeline does not flood the queue with the same regression each run.
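In pipeline terms, the pattern is a fingerprint filter before submission. A minimal sketch follows, where load_known_fingerprints and submit_batch are hypothetical stand-ins for whatever API the pipeline calls; the fingerprint fields follow the SAST/SCA data described above.

```python
# Minimal sketch of the nightly-pipeline pattern: only findings whose
# source-tool fingerprint is new to the engagement enter the triage queue.
# load_known_fingerprints and submit_batch are hypothetical stand-ins.
def fingerprint(finding):
    # Rule ID + file path + package pins the same SAST/SCA hit across nights.
    return (finding["rule_id"], finding.get("file_path"), finding.get("package"))

def nightly_batch(findings, engagement_id):
    known = load_known_fingerprints(engagement_id)   # hypothetical API call
    fresh = [f for f in findings if fingerprint(f) not in known]
    if fresh:
        submit_batch(engagement_id, fresh)           # hypothetical API call
    return fresh   # the same regression never floods the queue twice
```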
For the buyer-side context on moving security findings off Excel or off Jira, the SecPortal vs Spreadsheets and SecPortal vs Jira comparisons walk through why teams retire those tools as their security findings hub and what the migration target looks like.
Where bulk imports usually go wrong
Five failure modes recur whenever bulk import is treated as a CSV upload rather than a structured workflow. Each one is silent at import time and loud at delivery, when the catalogue has already been used to drive a report or a remediation programme.
Bulk import without dedup
A team imports a 4,000-row CSV onto an engagement that already had 1,200 findings. With no dedup pass, the catalogue inflates to 5,200 and the same TLS finding appears under three different titles across the historical and the new data. The count is wrong and the prioritisation is impossible.
CSV columns guessed at import
A CSV with columns labelled Risk, Description, and Asset is imported with no explicit mapping. The platform guesses Risk maps to severity, but the source file used a 1-5 scale rather than CVSS, so all findings land as Critical. The first review pass is rebuilding severity rather than triaging.
Findings imported as Confirmed by default
A backlog import marks every finding as Confirmed because that was the default state in the source spreadsheet. The team ships a report against an unvalidated catalogue, the client schedules remediation against findings that nobody has reproduced, and the relationship pays the cost.
No source file kept on the record
A CSV is imported and discarded. Three months later an auditor asks where a particular finding came from and the team cannot point at the exhibit. The audit trail starts at the finding row rather than at the original file the row came from.
Severity copied verbatim from the source
Imports inherit the source severity unchanged. One scanner's defaults scored every missing security header as High; a prior vendor used CVSS 2 instead of CVSS 3.1; an internal team rated findings on internal sensitivity rather than environmental exposure. Without a calibration pass, the new catalogue inherits three mismatched severity vocabularies in one queue.
Column mapping fields for CSV imports
The column-mapping step is what turns an arbitrary CSV into a structured finding queue. Map the source columns to these fields once per source format, save the mapping as a template, and the next file from the same source imports without manual intervention. A minimal sketch of what a saved mapping looks like in practice follows the table.
| Field | What to map |
|---|---|
| Title | The short name of the finding. Required. For Nessus, this is the plugin name; for Burp Suite, the issue type; for CSVs, whichever column holds the issue title. Imports without a title fall back to the first line of the description, with an alert flagged on the import summary so the gap is explicit. |
| Description | The narrative explanation of the finding. Markdown is preserved on import. Long descriptions are kept verbatim so the original tool wording is available alongside any rewrite the tester later applies for the deliverable. |
| Severity and CVSS vector | Map the source severity column to the SecPortal severity field, and map any CVSS vector column to the CVSS 3.1 vector field. If the source uses CVSS 2, the import flags the row for re-vector during triage rather than silently coercing the score. |
| Affected asset | Hostname, URL, package and version, file path, or asset identifier depending on the finding class. The asset is the dedup anchor; without it, dedup falls back to title and CWE, which is a far weaker signal. |
| CWE and CVE references | The CWE ID anchors the finding into the SecPortal vulnerability taxonomy; the CVE ID ties the finding to the vulnerability database for tracking advisories and exploit availability. Both columns are optional but strongly improve dedup quality and downstream report writing. |
| Status and validation state | Map the source status column (Open, Closed, Risk Accepted, Won't Fix) to the SecPortal status field. The validation state stays Unvalidated by default unless the source file carries explicit reproduction evidence; ignore source defaults that mark everything Confirmed. |
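As a concrete illustration, a saved template is just a mapping from target field to source column, plus an explicit severity scale so a 1-5 source scale never gets coerced into Critical. A minimal Python sketch follows; the template keys and scale values are illustrative, not a fixed SecPortal schema.

```python
# Minimal sketch of a saved column-mapping template applied to a CSV.
# Template keys and the 1-5 severity scale are illustrative only.
import csv

TEMPLATE = {
    "title": "Issue",
    "description": "Details",
    "severity": "Risk",
    "asset": "Asset",
    "cwe": "CWE",
    "status": "State",
}
SEVERITY_SCALE = {"1": "Info", "2": "Low", "3": "Medium", "4": "High", "5": "Critical"}

def map_rows(path, template=TEMPLATE):
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            mapped = {field: row.get(col, "").strip() for field, col in template.items()}
            # Explicit scale mapping avoids the "1-5 imported as Critical" failure mode.
            mapped["severity"] = SEVERITY_SCALE.get(mapped["severity"], "unmapped")
            yield mapped
```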
Four signals to deduplicate during import
Bulk imports without dedup are the most common reason a backlog migration produces a catalogue that nobody trusts. The import runs each source row against four layered dedup signals, flags probable duplicates on the import summary, and lets the triager confirm or reject the merge rather than committing both copies.
Asset and parameter match
The same hostname, URL, parameter, and method across an existing finding and an import row is the strongest dedup signal. The import flags the row as a probable duplicate and the triager confirms with a single click rather than reviewing two separate records.
CWE and CVE identifier match
Two findings that share a CWE or CVE reference on the same asset are almost always the same root issue. CWE and CVE are durable across tool vocabularies, so they dedup correctly even when the scanner-supplied titles differ.
Source-tool fingerprint
For Nessus, the plugin ID; for Burp Suite, the issue type ID; for SAST tools, the rule ID. When the same fingerprint appears against the same asset, the import treats the new row as an update rather than a new finding.
Title and description similarity
The fallback dedup signal when stronger keys are missing. A high-similarity match is flagged for human review rather than auto-merged so the import does not collapse genuinely distinct issues into one record.
For the deeper dedup heuristic catalogue, see the security findings deduplication guide.
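The four signals compose as a single ordered check per candidate row. A minimal Python sketch of that ordering follows, assuming findings are dictionaries with the fields named above; difflib stands in for whatever similarity measure is used, and the 0.9 threshold is illustrative.

```python
# Minimal sketch of the four layered dedup signals, strongest first.
# Field names are illustrative; difflib is a stand-in similarity measure.
from difflib import SequenceMatcher

def similar(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def dedup_signal(existing, candidate):
    """Return the strongest matching dedup signal, or None if no match."""
    same_asset = existing["asset"] == candidate["asset"]
    if same_asset and candidate.get("parameter") is not None \
            and existing.get("parameter") == candidate.get("parameter") \
            and existing.get("method") == candidate.get("method"):
        return "asset+parameter"        # strongest: flag as probable duplicate
    shared_cve = set(existing.get("cves", [])) & set(candidate.get("cves", []))
    same_cwe = existing.get("cwe") and existing.get("cwe") == candidate.get("cwe")
    if same_asset and (same_cwe or shared_cve):
        return "cwe/cve"                # durable across tool vocabularies
    if same_asset and existing.get("fingerprint") \
            and existing.get("fingerprint") == candidate.get("fingerprint"):
        return "fingerprint"            # treat as an update, not a new finding
    if similar(existing["title"], candidate["title"]) > 0.9:
        return "title-similarity"       # fallback: human review, never auto-merge
    return None
```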
Reviewer checklist for a bulk import
Before a bulk import is treated as a working baseline, the engagement lead runs through a short checklist. Each line takes seconds; missing any one of them is the source of the failure modes above.
- Source file is preserved on the engagement record with original filename, source tool, and import timestamp.
- Column mapping is explicit and saved as a template for the next file from the same source.
- Dedup pass runs against the existing engagement catalogue and the workspace history before findings are committed.
- Imported findings start as Unvalidated regardless of the source state, unless explicit reproduction evidence is attached on import.
- Severity from the source is preserved verbatim in a source-severity field, and the working severity field starts as a copy ready for calibration.
- CVSS 2 vectors are flagged for re-vector to CVSS 3.1 rather than silently coerced.
- Failed rows (missing required fields, malformed payloads) land in an import-errors report rather than blocking the rest of the batch.
- Bulk actions (assign, comment, status update, severity edit) cover the imported queue without touching findings one by one.
How bulk import looks in SecPortal
Bulk import is one workflow stitched into three feature surfaces: the engagement record, findings management, and the integrated scanners. The import is structured rather than ad-hoc, and it produces a queue that is ready for the rest of the engagement lifecycle.
Stage
Drop source files onto the engagement record. Native Nessus and Burp Suite exports import without configuration; CSVs go through the column-mapping step. Source files are preserved verbatim for the audit trail.
Map and dedupe
Column mappings are saved as templates per source. Dedup runs across findings management history before commit. Probable duplicates land on the import summary for explicit review.
Validate and ship
The imported queue feeds scanner result triage and AI reports. Confirmed findings flow into the deliverable through the branded client portal.
Where bulk import sits across the engagement lifecycle
Bulk import is the entry point for backlog-heavy engagements. It composes with the rest of the engagement lifecycle on the same record so the work that goes in once does not have to be redone at every stage.
Upstream and downstream
Bulk import takes the output of vulnerability assessment, DevSecOps scanning, and prior pentest cycles, and feeds scanner result triage with a clean, deduplicated queue.
Onboarding and closure
Imports often kick off during pentest client onboarding when a new client arrives with a backlog. Triaged findings flow into remediation tracking and retesting without a second migration step.
Pair the workflow with the long-form guides
Bulk import is operational; the surrounding guides explain the trade-offs that show up at import time. Pair this workflow with the writeup on automating findings management for workflow design, the deduplication guide for the dedup heuristics, the CVSS scoring explainer for the calibration vocabulary, and the vulnerability prioritisation framework for the prioritisation pass that follows the import.
Buyer and operator pairing
Bulk import is the workflow pentest firms, security consultants, internal security teams, and MSSPs run when a new client, a new programme, or a vendor handover lands with a backlog rather than a clean slate. The import is the entry; the rest of the engagement record carries the work forward.
What good bulk import feels like
Clean baseline
The catalogue starts on the right side of dedup, severity calibration is staged for every imported row, source files are preserved on the audit trail, and the queue is ready for triage on the same day the import lands. The next engagement reuses the template rather than rebuilding it.
Audit-ready history
Every imported finding carries the source filename, the source-tool fingerprint, the original severity, and the import timestamp. When an auditor asks where a finding came from, the answer is on the record rather than in a folder somewhere. The backlog is documented, not just consolidated.
Bulk finding import is the workflow that decides whether onboarding a backlog produces a working baseline or a noisy catalogue that the team has to clean again before any real delivery work can begin. Get it right and the migration is one structured pass that the engagement record carries forward; get it wrong and the same data lands twice, severity is mismatched, and the audit trail starts at the row instead of the source file. The goal of this workflow is to make the structured answer the path of least resistance for any team that has to onboard a backlog. When the source is a third-party penetration test report PDF rather than a raw scanner export, the upstream workflow is the third-party penetration test report intake workflow, which wraps the bulk import step in the report-level intake discipline (engagement per pentest, severity recalibration for the deployed environment, dedup against the existing catalogue, named owner from the asset ownership map, and retest binding to the original finding).
Frequently asked questions about bulk finding import
What is bulk finding import?
Bulk finding import is the workflow for landing many vulnerability findings onto a SecPortal engagement record at once. Sources include Nessus and Burp Suite native exports, prior pentest CSV appendices, vendor reports, legacy spreadsheet trackers, and SAST or SCA output. The workflow covers source-file preservation, column mapping, dedup against the existing catalogue, severity preservation alongside calibration, and bulk triage so a backlog migration produces a clean baseline rather than a noisy import.
Which source formats are supported?
Native Nessus (.nessus) and Burp Suite Scanner (.xml) exports import with full fidelity (plugin ID, severity, CVSS vector, CVE references, request and response payloads). CSV import accepts any column layout with explicit column mapping, which covers prior pentest exports, vendor reports, legacy tracker dumps, and ad-hoc spreadsheets. SAST and SCA findings from the integrated code scanner land on the same engagement and use the same dedup and triage workflow.
How does bulk import handle duplicates?
Imports run against the existing finding catalogue on the engagement and across the workspace. Duplicates are detected by asset and parameter match, CWE or CVE match on the same asset, source-tool fingerprint match (Nessus plugin ID, Burp Suite issue ID, SAST rule ID), and finally by title and description similarity. Probable duplicates are flagged on the import summary so the triager confirms or rejects the merge rather than the platform silently collapsing potentially distinct findings.
How is severity preserved during import?
The source severity is preserved verbatim in a dedicated source-severity field on each imported finding, even after calibration. The working severity field starts as a copy of the source value, ready for calibration during triage. CVSS 2 vectors are flagged for re-vector to CVSS 3.1 rather than coerced. This means the original tool rating remains visible on the audit trail while the calibrated severity is what flows into the deliverable.
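In code terms, the rule is one verbatim field, one working copy, and one flag. A minimal sketch under illustrative field names; the prefix check works because CVSS 3.x vectors begin with CVSS:3 while CVSS 2 vectors begin directly with metrics such as AV:.

```python
# Minimal sketch of severity handling at import time. Field names are
# illustrative. CVSS 3.x vectors start with "CVSS:3"; CVSS 2 vectors
# do not, which is enough to flag a row for re-vectoring.
def apply_severity(finding, source_severity, cvss_vector):
    finding["source_severity"] = source_severity   # preserved verbatim, never edited
    finding["severity"] = source_severity          # working copy, calibrated in triage
    finding["cvss_vector"] = cvss_vector
    if cvss_vector and not cvss_vector.startswith("CVSS:3"):
        finding["flags"] = finding.get("flags", []) + ["re-vector-to-cvss31"]
    return finding
```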
Should imported findings be Confirmed by default?
No. Imported findings start as Unvalidated regardless of the source state. This is true even when the source file marks every finding as Confirmed or Open, because the platform cannot verify reproduction from a CSV row alone. The exception is when the source file carries explicit reproduction evidence (a Burp Suite XML with request and response captured, for example); in those cases the evidence attaches to the finding but the validation state still defaults to Unvalidated until a human reviewer confirms reproduction.
What happens if a row fails to import?
Failed rows (missing required fields, malformed CVSS vectors, encoding issues) land in an import-errors report rather than blocking the rest of the batch. The error report names the source file, the row number, and the reason the row failed, so the operator can correct the source data and re-run only the failed rows rather than re-importing the whole file. Successful rows from the same batch are committed in full.
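The mechanic is a partition, not an abort. A minimal sketch follows, where the REQUIRED tuple is illustrative; failed rows collect the file, row number, and reason while the good rows commit.

```python
# Minimal sketch of the errors-report pattern: validation failures are
# collected with file, row number, and reason; good rows commit in full.
# The REQUIRED tuple is illustrative only.
REQUIRED = ("title", "asset")

def partition(rows, source_file):
    committed, errors = [], []
    for n, row in enumerate(rows, start=2):   # row 1 is the CSV header
        missing = [f for f in REQUIRED if not row.get(f)]
        if missing:
            errors.append({"file": source_file, "row": n,
                           "reason": f"missing required field(s): {', '.join(missing)}"})
        else:
            committed.append(row)
    return committed, errors
```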
How does bulk import fit with scanner result triage?
Bulk import is the file-to-engagement step. Scanner result triage is the validation step that follows. Imports preserve the source data and land findings in an Unvalidated state; triage takes them through reproduction, deduplication confirmation, severity calibration, evidence capture, and reviewer sign-off before they reach the deliverable. The two workflows are designed to compose: a clean import is the input that makes triage tractable, and triage is what turns the import into a defensible report.
Can I re-run a bulk import after correcting the source file?
Yes. Imports keep a reference to the source file and the column mapping, so re-importing a corrected version updates the previously imported rows in place rather than creating duplicates. This matters for nightly SAST and SCA pipelines, where the same source produces overlapping output night after night, and for vendor reports that arrive in revised versions over the course of an engagement.
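Conceptually, the re-run is an upsert keyed on the source reference and the row fingerprint. A minimal sketch with an in-memory dict standing in for the catalogue; the key choice is illustrative.

```python
# Minimal sketch of re-import as an upsert keyed on (source file, fingerprint).
# The dict stands in for the platform's catalogue; the key is illustrative.
def reimport(catalogue, source_file, rows):
    for row in rows:
        key = (source_file, row["fingerprint"])
        if key in catalogue:
            catalogue[key].update(row)   # corrected row updates in place
        else:
            catalogue[key] = dict(row)   # genuinely new row is created once
    return catalogue
```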
How it works in SecPortal
A streamlined workflow from start to finish.
Stage the source files
Drop Nessus (.nessus), Burp Suite (.xml), and CSV exports onto the engagement. Each file is logged with its original name, source tool, scan ID, and import timestamp so the audit trail starts at the file, not at the finding.
Map columns once and reuse the template
For CSV imports from prior pentests or legacy trackers, map the columns to title, description, severity, CVSS vector, asset, CWE, and status fields. Save the mapping as a template so the next file from the same source imports automatically.
Deduplicate against the existing catalogue
Imports run against the existing finding history on the engagement and the workspace. Duplicates are flagged before they land so a backlog migration does not surface the same issue twice under two titles.
Validate, triage, and calibrate severity
Imported findings start as Unvalidated. Reproduce, attach evidence, and calibrate the auto-imported CVSS 3.1 vector for the real environment. Bulk actions cover assignment, status, and severity edits across the queue without touching findings one by one.
Promote a clean baseline into delivery
Confirmed findings flow into AI-generated reports and the branded client portal. False positives, duplicates, and accepted risks stay on the record for the audit trail. The next scan or engagement starts from the clean baseline rather than from another raw import.
Features that power this workflow
Bulk finding import: bring your scanner data with you
Vulnerability management software that tracks every finding
Orchestrate every security engagement from start to finish
Test web apps behind the login
Vulnerability scanning tools that map your attack surface
Find vulnerabilities before they ship
AI-powered reports in seconds, not days
Your brand. Your portal. Your clients love it.
Onboard a finding backlog without rebuilding the spreadsheet
Import, dedupe, calibrate, and ship from one engagement record. Start free.
No credit card required. Free plan available forever.