Scanner Information
How to identify and manage SecPortal scanner traffic
What is SecPortal's Scanner?
SecPortal provides active security testing features to its users, including vulnerability scanning, SSL/TLS checks, HTTP header analysis, and attack surface discovery. These scans are non-intrusive and designed to identify security issues without causing disruption to target systems.
Scans are only performed against domains that have been verified by a SecPortal user through our domain ownership verification process. Users must prove they own or are authorised to test a domain before any scanning can take place.
How to Identify Our Scanner
All SecPortal scanner traffic can be identified by:
User-Agent
SecPortal-Scanner/1.0 (+https://secportal.io/scanner-info)
Verification Requests
SecPortal-Verifier/1.0 (+https://secportal.io/scanner-info)
What Our Scanner Checks
SecPortal's scanner performs non-intrusive checks including:
- SSL/TLS certificate validity and configuration
- HTTP security headers (HSTS, CSP, X-Frame-Options, etc.)
- Open port detection on common service ports
- DNS configuration analysis
- Subdomain enumeration (paid plans only)
- Known vulnerability detection based on service banners
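As an illustration of the header-analysis class of check above, this kind of test reduces to a presence lookup over response headers. The header list below is illustrative, not SecPortal's actual rule set:

```python
# Sketch of an HTTP security header check: report which recommended
# headers are absent from a response. The list is illustrative, not
# SecPortal's actual rule set.
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",   # HSTS
    "Content-Security-Policy",     # CSP
    "X-Frame-Options",
    "X-Content-Type-Options",
]

def missing_security_headers(headers):
    """Case-insensitive presence check, as HTTP header names are."""
    present = {name.lower() for name in headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]
```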
Our scanner does not perform:
- Exploitation of vulnerabilities
- Brute-force password attacks
- Denial-of-service testing
- Data exfiltration or extraction
- Any form of destructive testing
Interpreting Scanner Output
Vulnerability scanners produce false positives by design: pattern-based detection cannot see whether a flagged condition is reachable or exploitable in the deployed asset. Our guide to vulnerability scanner false positives covers how to triage scanner output, how to suppress confirmed false positives durably, and how to drive the false positive rate down across scan cycles.
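Durable suppression usually hinges on a stable fingerprint, so a confirmed false positive stays suppressed when the same detection recurs next cycle. A minimal sketch, with hypothetical field names rather than SecPortal's actual schema:

```python
import hashlib

def fingerprint(rule_id, asset, location):
    """Stable key for a finding. The three fields are hypothetical;
    a real schema would pin down normalisation rules for each."""
    raw = f"{rule_id}|{asset}|{location}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]

suppressions = set()  # confirmed false positives, keyed by fingerprint

def suppress(finding):
    suppressions.add(fingerprint(finding["rule_id"], finding["asset"], finding["location"]))

def triage(finding):
    """Known false positives do not resurface on the next scan cycle."""
    key = fingerprint(finding["rule_id"], finding["asset"], finding["location"])
    return "suppressed" if key in suppressions else "needs-review"
```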
Coverage is the other side of the same conversation. Our guide to vulnerability scanner coverage and limits walks through what external, authenticated, SAST, and SCA scans actually find, where each class stops working, and how to sequence scanner output with manual testing so the report represents what was verified.
Both questions inherit from the upstream decision about scope. Our guide to vulnerability scan scoping and target selection covers how to choose targets, document inclusions and exclusions, and produce a scope record that survives audit.
Scoping decides which targets belong inside the assurance question; validation proves each one can lawfully and safely be tested before any scan traffic dispatches. Our guide to scan target validation and authorisation covers the three control points (verified ownership, legal attestation, platform blocklist), how authenticated scans add credential authorisation on top, when validation has to be re-run, and how internal security and AppSec teams operate the chain so the audit trail from scan execution to authorising human is single-record traceable.
Once scope, coverage, and false positive triage are in place, the next discipline is making sure the same finding does not appear five ways across tools and scan cycles. Our guide to scanner output deduplication covers how to merge findings across Nessus, Burp Suite, SAST, and SCA tools without losing evidence, severity, or the audit trail behind each detection.
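The core of that merge step can be sketched as keying findings on a normalised identity while accumulating each tool's detection as evidence; field names here are illustrative, not any real tool's schema:

```python
def dedupe(findings):
    """Merge findings that share a normalised (rule, asset, location)
    key, keeping every source detection as evidence."""
    merged = {}
    for f in findings:
        key = (f["rule"].lower(), f["asset"], f["location"])
        entry = merged.setdefault(key, {
            "rule": f["rule"], "asset": f["asset"],
            "location": f["location"], "sources": [],
        })
        entry["sources"].append(f["source"])
    return list(merged.values())
```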
Even a perfectly scoped scan produces unreliable output if it is silently denied at the WAF or CDN. Our guide to scanner blocking and WAF allowlisting covers where blocks land in a typical stack, how to write a narrow allowlist rule that survives audit, and how to detect partial blocks before the report ships.
The format that scanners emit decides how much evidence survives the trip into your findings record. Our guide to scanner output formats compares SARIF, Nessus XML, Burp XML, JSON, and CSV exports, covers what each one preserves, and explains how to plan format choice across a multi-tool engagement so the import keeps the evidence the consolidated finding actually needs.
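To make the format comparison concrete, here is a sketch of pulling results out of a hand-written SARIF 2.1.0 fragment; the tool name and finding are invented:

```python
import json

# Minimal SARIF 2.1.0 fragment, hand-written for illustration.
SARIF_DOC = json.loads("""
{
  "version": "2.1.0",
  "runs": [{
    "tool": {"driver": {"name": "ExampleSAST"}},
    "results": [{
      "ruleId": "CWE-79",
      "level": "error",
      "message": {"text": "Reflected XSS in search parameter"}
    }]
  }]
}
""")

def extract_results(doc):
    """Pull (tool, rule, level, message) records out of each run."""
    out = []
    for run in doc["runs"]:
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            out.append({
                "tool": tool,
                "rule": result.get("ruleId"),
                "level": result.get("level"),
                "text": result["message"]["text"],
            })
    return out
```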
Authenticated scans fail in different ways from unauthenticated ones, and most disappointing authenticated scans never actually authenticated. Our guide to authenticated scanner failure modes covers the six failure classes (wrong role, login script drift, session expiry, CSRF token rotation, MFA enforcement, SSO redirect failure), how to read the diagnostic signals in scanner logs, and how to record the authentication state so the report can defend the depth.
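Reading those diagnostic signals often comes down to matching known patterns in scanner logs. The patterns below are assumptions for three of the failure classes, not a tested rule set (real scanner logs vary by tool):

```python
# Illustrative log signals for three of the failure classes;
# real scanner log wording varies by tool.
SIGNALS = {
    "session expiry": ["session expired", "redirected to /login"],
    "csrf token rotation": ["invalid csrf token", "403 on post"],
    "sso redirect failure": ["redirect loop", "too many redirects"],
}

def classify_auth_failure(log_line):
    """Return every failure class whose signal appears in the line."""
    line = log_line.lower()
    return [cls for cls, patterns in SIGNALS.items()
            if any(p in line for p in patterns)]
```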
Cadence is the discipline that ties every scheduled scan to the next one. Our guide to vulnerability scan scheduling and baseline cadence covers how often to schedule external, authenticated, and code scans per asset class, when to override the schedule with an on-change scan, and how to read the diff between two scan baselines so the cadence drives remediation rather than producing a stack of static reports.
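The scheduling rule itself is small: an on-change event overrides the calendar, otherwise the next scan lands one cadence interval after the last. The cadences below are example values, a policy decision per programme rather than fixed numbers:

```python
from datetime import date, timedelta

# Example cadences per asset class, in days; real schedules are a
# policy decision per programme, not fixed values.
CADENCE_DAYS = {"external": 7, "authenticated": 30, "code": 1}

def next_scan(asset_class, last_scan, change_date=None):
    """An on-change event overrides the schedule; otherwise the next
    scan lands one cadence interval after the last."""
    if change_date is not None:
        return change_date
    return last_scan + timedelta(days=CADENCE_DAYS[asset_class])
```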
Reading the trend across many cycles is a different discipline than reading a two-scan diff. Our guide to scan baseline and trend comparison covers how to define a baseline that holds, how to separate real change from coverage drift, the five trend metrics that carry signal at the leadership view, and how compliance frameworks read the trend across the audit observation window.
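The two-scan diff underneath any trend reading can be sketched as three set operations over finding fingerprints:

```python
def baseline_diff(previous, current):
    """Split finding fingerprints into new, resolved, and persisting
    between two scan baselines. A shrinking current set alone may be
    coverage drift; the resolved set is the remediation signal."""
    previous, current = set(previous), set(current)
    return {
        "new": current - previous,
        "resolved": previous - current,
        "persisting": previous & current,
    }
```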
Once cadence is set, the next operational decision is rate. Our guide to scanner rate limiting and throttling covers how to choose a starting scan rate per target class, ramp into the operating rate, handle HTTP 429 and Retry-After feedback, manage WAF and CDN burst rules, and pair findings with scan-coverage records so rate-limited truncation does not get misread as remediation.
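The 429 handling mentioned above can be sketched as a small backoff rule; the default delay here is an arbitrary example value:

```python
def backoff_delay(status, retry_after=None, default=2.0):
    """Seconds to wait before retrying, or None when no retry is
    needed. Honours the numeric (delta-seconds) form of Retry-After
    on HTTP 429; the HTTP-date form falls back to the default."""
    if status != 429:
        return None
    if retry_after is not None:
        try:
            return max(float(retry_after), 0.0)
        except ValueError:
            pass  # HTTP-date form; a fuller client would parse it
    return default
```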
The credential behind every authenticated scan is itself a perishable artefact. Our guide to scanner credential rotation and lifecycle covers rotation cadence by credential type, the six event classes that trigger off-cycle rotation, how to rotate without breaking scheduled scans, the difference between rotation and revocation, service account discipline, mid-scan invalidation handling, and how compliance frameworks read the credential lifecycle as evidence of operating credential management.
Once scans are running on cadence and credentials are rotating cleanly, the next governance question is how long the resulting evidence is retained and how it is disposed. Our guide to scan evidence retention and governance covers retention per artefact class (scan execution, finding, activity log, raw module output), how compliance frameworks read retention evidence (PCI DSS 10.5.1, ISO 27001 Annex A 5.33 and 8.10, SOC 2 CC7.1, NIST AU-11 and SI-12), how privacy regulations layer on top, when to dispose, when to hold, and how to operate retention as a controlled activity rather than as an annual policy refresh.
Many enterprise programmes already run scanners under existing licences (a Nessus instance owned by operations, a Burp Pro licence held by an AppSec engineer, a SAST or SCA tool already integrated into the source repository). Our guide to importing third-party scanner results covers the import workflow that turns Nessus, Burp Suite, and CSV exports into structured findings on the engagement record, including severity normalisation across scanner scales, CSV column mapping, the post-import triage that promotes drafts into canonical findings, and the audit trail the import preserves so the chain from scanner output to closed finding is reproducible.
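Severity normalisation across scanner scales reduces to per-source lookup tables. Nessus reports severity as integers 0-4 and Burp Suite as named levels; the canonical labels below are an assumption, and a real mapping should be reviewed per programme:

```python
# Illustrative mappings from tool-native severity scales onto one
# canonical scale; review the mapping before relying on it.
NESSUS_SEVERITY = {0: "info", 1: "low", 2: "medium", 3: "high", 4: "critical"}
BURP_SEVERITY = {"information": "info", "low": "low",
                 "medium": "medium", "high": "high"}

def normalise_severity(source, value):
    if source == "nessus":
        return NESSUS_SEVERITY[int(value)]
    if source == "burp":
        return BURP_SEVERITY[str(value).lower()]
    raise ValueError(f"no mapping for source: {source}")
```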
All of the disciplines above (scoping, coverage, validation, output formats, authentication, cadence, retention, import) feed into a single operating record that an auditor or security leader reads end to end. Our guide to the scanner evidence chain from scan execution to closed finding covers the seven evidence layers (scan execution, module or rule, asset binding, source-emitted and platform-canonical values, triage transitions, remediation evidence, retest and closure binding), the six failure modes that break the chain in real programmes, and how compliance frameworks (PCI DSS 6.3.3, 11.3, 11.4, 10.5.1; ISO 27001 Annex A 8.8 and 8.15; SOC 2 CC4.1, CC7.1, CC7.2; NIST 800-53 RA-5, SI-2, AU-2, AU-12; NIST 800-218 RV.1 and RV.2; CIS Controls v8.1 7 and 8; OWASP SAMM Verification) read the chain as the technical vulnerability management control narrative.
Our Safeguards
We take the following measures to prevent misuse:
- Domain verification required: Users must prove domain ownership via DNS TXT record, file upload, or HTML meta tag before any scanning. See our domain verification feature for the full workflow.
- Legal attestation required: Users must sign a legally binding attestation confirming their authorisation to test, recorded immutably with IP and timestamp.
- Rate limiting: Scan frequency is limited by plan tier and per-domain rate limits.
- Blocklist: Government, military, critical infrastructure, and cloud provider management domains are blocked.
- Audit trail: All scan activity is logged with full attribution.
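The TXT-record check in the first safeguard can be sketched as a token match over a domain's published records. The record format below is invented, and DNS resolution is stubbed so the example stays offline:

```python
# Sketch of the TXT-record verification step: the platform issues a
# token, the user publishes it in DNS, and verification looks for it
# among the domain's TXT records. Token format is invented; records
# are supplied directly rather than resolved.
def txt_record_verified(expected_token, txt_records):
    return any(record.strip('"') == expected_token for record in txt_records)

published = ['"v=spf1 -all"', '"secportal-verify=3f9a1c"']
```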
Report Unauthorised Scanning
If you believe your systems are being scanned by SecPortal without your authorisation, please contact us immediately:
Email: legal@secportal.io
Please include: the domain being scanned, approximate time of activity, and any relevant logs or evidence.
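If you are gathering logs to attach, lines carrying the User-Agents published on this page can be extracted with a short filter. A minimal sketch, with invented common-log-format lines and documentation IP addresses:

```python
import re

# Matches the two User-Agent strings published on this page.
SECPORTAL_UA = re.compile(
    r"SecPortal-(?:Scanner|Verifier)/1\.0 "
    r"\(\+https://secportal\.io/scanner-info\)"
)

def secportal_lines(log_lines):
    return [line for line in log_lines if SECPORTAL_UA.search(line)]

log = [
    '203.0.113.7 - - [01/Jan/2025:12:00:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "SecPortal-Scanner/1.0 (+https://secportal.io/scanner-info)"',
    '198.51.100.2 - - [01/Jan/2025:12:00:01 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0"',
]
```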
We investigate all reports within 24 hours and will immediately suspend scanning against the reported domain pending review. For more details, see our Acceptable Use Policy.