Bulk finding import
Bring your scanner data with you
Import vulnerability findings from Nessus, Burp Suite, and CSV files onto an engagement record. Verified parsers, column-mapping autodetection for CSV, plan-aware quotas, RBAC gating, rate limiting, and a logged audit trail. Migration is a capability, not a project.
No credit card required. Free plan available forever.
Bring your existing scanner data with you
Adopting a new vulnerability management platform almost always runs into the same objection from internal security teams, AppSec teams, and GRC owners: what happens to the years of scanner history, prior pentest findings, and spreadsheet trackers that already exist? Bulk finding import is the answer to that question. The capability exists so migration is a function on the platform rather than a project outside it. Land the data on an engagement record, dedupe against the catalogue, calibrate severity for the real environment, and start working from a clean baseline rather than from a screenshot of the legacy tracker.
SecPortal supports three import paths verified in the platform: native Nessus (.nessus) parsing, native Burp Suite (.xml) parsing, and CSV with column-mapping autodetection for everything else. Each path enforces RBAC on the importer, plan and engagement quotas on the insert, a per-user rate limit on the endpoint, and an activity log entry on the workspace so the audit trail records who imported what onto which engagement and when.
Three import paths, all parser-verified
Nessus (.nessus)
The Nessus parser walks the ReportHost structure, reads each ReportItem, maps the integer severity (4 critical, 3 high, 2 medium, 1 low, 0 info) to the SecPortal five-band scale, reconstructs the affected asset from host FQDN or IP plus port and protocol, and joins synopsis and solution into the description and remediation.
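The walk described above can be sketched as follows. The element and attribute names (`ReportHost`, `ReportItem`, `severity`, `pluginName`, `synopsis`, `solution`) come from the .nessus v2 format itself; the band names and the output record shape are assumptions about the SecPortal schema, not its actual implementation.

```python
# Sketch of the ReportHost/ReportItem walk. The .nessus element names are
# real; the SEVERITY_BANDS labels and record shape are assumed.
import xml.etree.ElementTree as ET

SEVERITY_BANDS = {4: "critical", 3: "high", 2: "medium", 1: "low", 0: "info"}

def parse_nessus(xml_text: str) -> list[dict]:
    findings = []
    root = ET.fromstring(xml_text)
    for host in root.iter("ReportHost"):
        asset = host.get("name", "")  # host FQDN or IP
        for item in host.iter("ReportItem"):
            sev = int(item.get("severity", "0"))
            findings.append({
                "title": item.get("pluginName", ""),
                "severity": SEVERITY_BANDS.get(sev, "info"),
                # affected asset = host plus port and protocol
                "asset": f"{asset}:{item.get('port')}/{item.get('protocol')}",
                "description": item.findtext("synopsis") or "",
                "remediation": item.findtext("solution") or "",
            })
    return findings
```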
Burp Suite (.xml)
The Burp parser walks the issue list, reads textual severity (High, Medium, Low, Information) and maps it to the SecPortal scale, joins issueBackground and issueDetail into the description, joins remediationBackground and remediationDetail into the remediation, and strips embedded HTML so the imported text reads cleanly in the findings record.
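A minimal sketch of that walk, assuming Burp's standard XML export element names (`issue`, `severity`, `issueBackground`, `issueDetail`, `remediationBackground`, `remediationDetail`); the stripping helper and the output shape are illustrative, not the platform's actual code.

```python
# Sketch of the Burp issue walk: map textual severity, join the
# background/detail pairs, strip embedded HTML from the joined text.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

BURP_SEVERITY = {"high": "high", "medium": "medium",
                 "low": "low", "information": "info"}

class _TagStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def strip_html(fragment: str) -> str:
    stripper = _TagStripper()
    stripper.feed(fragment)
    # collapse whitespace left behind by removed tags
    return " ".join(" ".join(stripper.chunks).split())

def parse_burp(xml_text: str) -> list[dict]:
    findings = []
    for issue in ET.fromstring(xml_text).iter("issue"):
        def text(tag: str) -> str:
            return issue.findtext(tag) or ""
        findings.append({
            "title": text("name"),
            "severity": BURP_SEVERITY.get(text("severity").lower(), "info"),
            "asset": text("host") + text("path"),
            "description": strip_html(text("issueBackground") + " " + text("issueDetail")),
            "remediation": strip_html(text("remediationBackground") + " " + text("remediationDetail")),
        })
    return findings
```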
CSV (any column layout)
The CSV parser detects suggested column mappings against common header names:
- Title: title, name, vulnerability, finding, issue, plugin name
- Severity: severity, risk, risk_factor, cvss, rating, priority
- Description: description, detail, synopsis, summary, overview
- Affected asset: asset, host, ip, target, affected, url
- Remediation: remediation, fix, solution, recommendation
Numeric 1-3 severity is normalised; critical, crit, and urgent map to critical.
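Header autodetection can be sketched roughly like this. The synonym lists mirror the header names documented above; the `suggest_mapping` helper and its return shape are assumptions for illustration.

```python
# Sketch of column-mapping autodetection: match lowercased headers
# against per-field synonym sets, first match wins per field.
import csv
import io

HEADER_SYNONYMS = {
    "title": {"title", "name", "vulnerability", "finding", "issue", "plugin name"},
    "severity": {"severity", "risk", "risk_factor", "cvss", "rating", "priority"},
    "description": {"description", "detail", "synopsis", "summary", "overview"},
    "asset": {"asset", "host", "ip", "target", "affected", "url"},
    "remediation": {"remediation", "fix", "solution", "recommendation"},
}

def suggest_mapping(csv_text: str) -> dict[str, str]:
    """Return {platform_field: source_column} suggestions for user confirmation."""
    header = next(csv.reader(io.StringIO(csv_text)))
    mapping: dict[str, str] = {}
    for column in header:
        normalised = column.strip().lower()
        for field, synonyms in HEADER_SYNONYMS.items():
            if normalised in synonyms and field not in mapping:
                mapping[field] = column
    return mapping
```

The suggestions are surfaced for confirmation before the import runs, so the user verifies the schema once rather than fixing misfiled rows afterward.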
Operating limits enforced on every import
Bulk import is constrained by the same governance rails that guard the rest of the finding lifecycle. The limits exist to make migration deliberate, auditable, and reversible rather than a free-for-all that floods the workspace with unverified rows.
- 20 MB file size cap per upload, enforced before the file leaves the browser
- 500 findings per import, enforced server-side after parsing
- 5 imports per 15 minutes per user, enforced through the platform rate limiter
- Plan-aware quota enforcement: per-engagement limits and lifetime totals are checked before the insert
- RBAC gate: the bulk_import permission is required on the team role of the importer
- Engagement scoping: every import lands on a specific engagement record, never a free-floating bucket
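Taken together, the rails above amount to a pre-insert gate along these lines. The numeric limits and the bulk_import permission name are the documented ones; the request shape, check ordering, and exception types are assumptions for the sketch.

```python
# Sketch of the pre-insert gate: RBAC, file size, finding count,
# then the per-user rate limit. Shapes and exceptions are assumed.
from dataclasses import dataclass

MAX_FILE_BYTES = 20 * 1024 * 1024      # 20 MB cap (also checked client-side)
MAX_FINDINGS_PER_IMPORT = 500          # enforced server-side after parsing
RATE_LIMIT_COUNT, RATE_LIMIT_WINDOW = 5, 15 * 60  # 5 per 15 minutes per user

@dataclass
class ImportRequest:
    user_permissions: set
    file_size: int
    parsed_count: int
    recent_import_times: list  # unix timestamps of this user's imports
    now: float

def check_import(req: ImportRequest) -> None:
    if "bulk_import" not in req.user_permissions:
        raise PermissionError("bulk_import permission required on team role")
    if req.file_size > MAX_FILE_BYTES:
        raise ValueError("file exceeds 20 MB cap")
    if req.parsed_count > MAX_FINDINGS_PER_IMPORT:
        raise ValueError("more than 500 findings in one import")
    recent = [t for t in req.recent_import_times if req.now - t < RATE_LIMIT_WINDOW]
    if len(recent) >= RATE_LIMIT_COUNT:
        raise ValueError("rate limit: 5 imports per 15 minutes")
    # plan-aware per-engagement and lifetime quotas would be checked here
```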
Audit evidence is preserved on every imported finding
The audit chain that auditors and security leaders review starts at the source file, not at the finding. Bulk import preserves that trail, so for compliance evidence purposes the migrated records are indistinguishable from natively created ones.
Source file name and source tool
The file name and the source tool (Nessus, Burp Suite, or CSV mapping template) are recorded so the audit trail starts at the file, not at the finding.
Importer and timestamp
The activity log entry records the actor user, the engagement, and the imported count under the action key finding.bulk_imported. The trail survives even if the original file is later removed from the upload area.
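The entry can be pictured as a record of roughly this shape. The finding.bulk_imported action key is the documented one; the remaining field names are assumptions.

```python
# Sketch of the activity log record for a bulk import.
# Only the action key is documented; other field names are assumed.
from datetime import datetime, timezone

def bulk_import_log_entry(actor_id: str, engagement_id: str, count: int) -> dict:
    return {
        "action": "finding.bulk_imported",   # documented action key
        "actor": actor_id,                   # who imported
        "engagement": engagement_id,         # onto which engagement
        "imported_count": count,             # how many rows landed
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
    }
```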
CVSS 3.1 vector preservation
When the source carries a CVSS 3.1 vector string, the parsed finding stores the full vector and the auto-calculated base score. Auditors can reproduce the score from the vector at any point in the future.
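As a sketch of why storing the vector is sufficient, the base score can be recomputed from the vector alone. The metric weights, equations, and Roundup behaviour below follow the published CVSS v3.1 specification; the function names are illustrative.

```python
# Recompute a CVSS 3.1 base score from its vector string, per the
# CVSS v3.1 specification equations. Function names are illustrative.
import math

WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},    # scope unchanged
    "PRC": {"N": 0.85, "L": 0.68, "H": 0.5},    # scope changed
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest one-decimal value >= x (to 1e-5)."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    changed = metrics["S"] == "C"
    pr = WEIGHTS["PRC" if changed else "PR"][metrics["PR"]]
    exploitability = (8.22 * WEIGHTS["AV"][metrics["AV"]]
                      * WEIGHTS["AC"][metrics["AC"]]
                      * pr * WEIGHTS["UI"][metrics["UI"]])
    iss = 1 - ((1 - WEIGHTS["CIA"][metrics["C"]])
               * (1 - WEIGHTS["CIA"][metrics["I"]])
               * (1 - WEIGHTS["CIA"][metrics["A"]]))
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if changed else 6.42 * iss)
    if impact <= 0:
        return 0.0
    raw = min((1.08 if changed else 1.0) * (impact + exploitability), 10.0)
    return roundup(raw)
```

For example, the well-known vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H reproduces a base score of 9.8.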
Default status from engagement config
Each engagement type has a default status for newly created findings (typically draft or open). Imports respect that default, so a bulk migration does not silently mark imported findings as confirmed.
Lifetime counter increment
The workspace lifetime item counter increments by the inserted row count after the bulk insert completes. Plan-tier accounting stays accurate across regular and bulk paths.
Five migration scenarios for internal security teams
Bringing a year of Nessus history into SecPortal
Internal vulnerability management teams adopting SecPortal often have a backlog of Nessus exports across quarterly external scans. Land each export onto a baseline engagement, dedupe against the workspace catalogue, and the new platform starts with a known posture history rather than a blank slate. Aging, severity, and prior remediation context carry forward.
Migrating a Burp Suite testing programme
AppSec teams running Burp Scanner against authenticated applications can land each Burp .xml export onto the engagement that represents the application. The request and response payloads land on the finding so reproduction evidence is part of the record from the moment of import. The next authenticated scan from inside SecPortal feeds the same engagement.
Replacing a spreadsheet vulnerability tracker
GRC and vulnerability management teams running findings in Excel for years can export the master sheet to CSV, map the columns to the SecPortal schema once, save the mapping as a template by source, and migrate the catalogue onto engagement records without rekeying a single finding. The next file from the same source imports without a manual mapping pass.
Migrating from a Jira-based findings tracker
Security teams running findings as Jira issues can export the relevant projects to CSV through the Jira filter export, map the Jira fields (summary, description, priority, custom CVSS field, status, labels) to the SecPortal schema, and the historical findings land on engagement records with the Jira key preserved as a reference. Working from one record replaces working across two.
Third-party pentest report intake
Internal security teams receiving a vendor pentest deliverable as a PDF and a CSV appendix import the CSV onto an intake engagement, deduplicate against the existing catalogue, calibrate severity for the environment, and route the unique entries into the live tracker. The vendor work integrates with the in-house programme rather than living in a separate folder.
Five failure modes the platform structure prevents
Free-text re-keying that breaks the audit chain
A team copying scanner findings from a PDF into a tracker by hand loses the CVSS vector, the source identifier, and the timestamp. The audit later cannot prove which scan ran on which date or who recorded which severity. Bulk import preserves the source data on the finding so the chain survives.
Imports landing as Confirmed by default
A platform that confirms imported findings on insert robs the team of the triage step. Imports in SecPortal respect the engagement default status (typically draft or open), so the migration produces a triage queue, not a fait accompli.
CSV columns guessed silently
A CSV with columns labelled Risk, Description, and Asset imported with no explicit mapping can land in the wrong fields. The CSV parser surfaces suggested mappings for confirmation before the import runs so the team verifies the schema once rather than rebuilding the catalogue afterward.
No rate limit producing audit-trail floods
A scripted client hitting the bulk endpoint hundreds of times per minute floods the activity log with noise the auditor has to wade through. SecPortal caps bulk imports at five per fifteen minutes per user, keeping the trail readable.
No RBAC gate producing unsanctioned writes
A viewer-role user able to push 500 findings into a finding catalogue is a governance defect. The bulk_import permission is gated through the team role so import authority is an explicit, assignable capability and not an implicit side effect of an account.
How bulk import fits the rest of the platform
Findings catalogue alignment
Imported records use the same findings management structure as natively created findings. Severity, CVSS vector, evidence, remediation guidance, and status transitions all behave identically once the row lands.
Activity log audit trail
Every bulk import writes an activity log entry under the action key finding.bulk_imported with the actor, the engagement, and the imported count. The CSV export of the activity log surfaces the migration trail in any audit window.
Engagement scoping
Imports always land on an engagement record, never a free-floating bucket. The migration is a deliberate attachment to a scope, not a dump into the workspace at large.
Document evidence storage
Original source files (the Nessus export, the Burp XML, the CSV) can be attached to the engagement through document management so the source artefact survives alongside the parsed findings for the audit trail.
Where to read next
For the operational workflow that wraps the import (staging, mapping, dedup, triage, baseline promotion), see the bulk finding import use case.
For the per-tool format trade-offs and the parser-by-parser technical reference, see importing third-party scanner results and scanner output formats.
For the cross-tool deduplication policy that runs after import, scanner output deduplication covers the dedup theory and the tie-break rules.
For internal security teams operationalising third-party pentest deliverables, the third-party penetration test report intake use case wraps bulk import in the named-owner routing and retest binding the intake demands.
For end-to-end scanner provenance from scan execution through closure, the scanner evidence chain explains how imported findings keep the source link intact through triage, remediation, and retest.
Move your existing findings into SecPortal without rekeying them
Drop the file. Map the columns once. Land the catalogue on an engagement record. Triage the duplicates. Start working from a clean baseline.
No credit card required. Free plan available forever.