
Vulnerability Scanner Coverage and Limits: What Scanners Find

Coverage is the most overstated metric in vulnerability scanning, because the headline number flatters the surface and hides the classes the scanner cannot reach. A scanner that walks 95% of routes, runs 100% of its rule pack, and produces a clean report is not a 95% coverage scanner; it is a scanner whose coverage envelope is shaped by the rules it has, the surface it can reach, and the context it cannot model. The difference matters when the report ships to a client, when an auditor asks what was excluded, or when a finding shows up in production that the scanner technically could not have caught.

This guide walks through the four scanner classes that do the work in commercial pentesting, what each one covers and what it cannot, how to read coverage as three separate signals rather than one flat percentage, where authenticated and external scanning stop being adequate, and how to sequence scanner output with manual testing so the report represents what was actually verified.

The four scanner classes and their coverage envelopes

Most commercial scanning splits across four classes. Each one detects by a different mechanism, reaches a different surface, and fails in a different place. Treating them as interchangeable produces reports where the headline coverage looks healthy and the actual gaps are invisible.

| Class | What it covers | Common blind zone |
| --- | --- | --- |
| External (unauthenticated DAST) | TLS, security headers, exposed ports, DNS posture, subdomain enumeration, banner-based CVE correlation, public path discovery, email authentication records | Anything behind login, business logic, application-layer access control, internal services, multi-step exploits |
| Authenticated DAST | Input validation, session handling, common injection classes, OWASP Top 10 categories within crawl reach, authenticated path discovery, parameter fuzzing | Business logic, multi-step authorisation, IDOR variants requiring authorisation modelling, single-page application client state, multi-tenant role transitions |
| SAST (static analysis) | Dangerous code patterns, taint flows, hardcoded secrets, common injection sinks, deserialisation patterns, language-specific anti-patterns | Runtime configuration, deployment context, framework-specific behaviour the rules do not model, dynamic dispatch, reflection-heavy code |
| SCA (composition analysis) | Known-CVE dependencies in supported manifests, transitive dependency tracking, license posture, vulnerability database matching | Reachability of vulnerable code at runtime, vendored dependencies outside the manifest, custom forks, vulnerabilities not yet catalogued |

The coverage envelope of a scanning programme is the union of these four classes against the inventory they were pointed at. The classes the programme does not run define the surface that depends entirely on manual testing. A scanning programme that runs only external DAST and ships the report as a security assessment is implicitly claiming the authenticated, code, and dependency surface either does not exist or has been covered some other way. That claim has to be defensible.
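
As a concrete illustration, a minimal sketch of the envelope calculation, assuming an inventory keyed by asset and a record of which classes ran against each (the asset names, class labels, and data layout are all hypothetical, not a SecPortal structure):

```python
# Sketch: the coverage envelope as the union of scanner classes over the inventory.
# Asset names and the run mapping below are illustrative assumptions.
inventory = {
    "www.example.com", "app.example.com", "api.example.com",
    "billing-service repo", "vpn.example.com",
}

runs = {
    "external_dast": {"www.example.com", "app.example.com", "api.example.com"},
    "authenticated_dast": {"app.example.com"},
    "sast": {"billing-service repo"},
    "sca": {"billing-service repo"},
}

envelope = set().union(*runs.values())   # everything at least one class reached
outside = inventory - envelope           # surface that depends entirely on manual testing

print("inside the envelope:", sorted(envelope))
print("outside the envelope:", sorted(outside))   # e.g. vpn.example.com
```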

Coverage as three signals, not one number

A single coverage percentage is the metric most often quoted in scanner output and the metric that hides the most. The defensible reporting pattern decomposes coverage into three distinct signals, because they answer different questions and have different failure modes.

Surface coverage

The share of the in-scope inventory the scanner actually reached. For DAST, the ratio of crawled routes to known routes (or to a route inventory derived from logs, an OpenAPI spec, or a sitemap). For SAST, the ratio of analysed lines to repository lines, broken down by language. For SCA, the ratio of resolved manifests to known manifest paths. Surface coverage below 80% is usually evidence the scanner is not seeing the asset; surface coverage near 100% in a complex application is usually evidence the inventory was incomplete, not that the scan was thorough.
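
A minimal sketch of that ratio for a DAST run, assuming the route inventory comes from an OpenAPI document and the crawl produced a plain list of paths (the file names and the normalisation rule are assumptions, not scanner output formats):

```python
import json

def normalise(path: str) -> str:
    # Collapse trailing slashes and case so inventory and crawl compare cleanly.
    return path.rstrip("/").lower() or "/"

# Hypothetical inputs: an OpenAPI spec as the route inventory, a crawl log as the reached set.
with open("openapi.json") as fh:
    inventory = {normalise(p) for p in json.load(fh).get("paths", {})}

with open("crawled_routes.txt") as fh:
    crawled = {normalise(line.strip()) for line in fh if line.strip()}

reached = inventory & crawled
surface_coverage = len(reached) / len(inventory) if inventory else 0.0

print(f"surface coverage: {surface_coverage:.0%} ({len(reached)} of {len(inventory)} inventory routes)")
print("unreached routes:", sorted(inventory - crawled))
```

In practice templated paths such as /users/{id} need pattern matching against the concrete URLs in the crawl; the point the sketch makes is that the denominator comes from the inventory, not from the crawl.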

Detection coverage

The share of vulnerability classes the scanner has rules or signatures for, measured against an external benchmark such as OWASP WSTG, OWASP ASVS verification levels, or the OWASP API Security Top 10. A scanner that runs every rule it has but is missing rules for whole categories has perfect rule-pack coverage and incomplete detection coverage. The two get conflated routinely; separating them forces a conversation about what the rule pack does not include.
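
A sketch of the same idea, assuming the rule pack has been mapped to WSTG category identifiers by hand (the mapping below is illustrative and far from complete; a real one has to be built per scanner and per benchmark version):

```python
# Detection coverage measured against an external benchmark rather than the rule pack itself.
wstg_categories = {
    "WSTG-ATHN",  # authentication testing
    "WSTG-ATHZ",  # authorisation testing
    "WSTG-SESS",  # session management
    "WSTG-INPV",  # input validation
    "WSTG-BUSL",  # business logic
    "WSTG-CRYP",  # cryptography
}

# Hypothetical: benchmark categories the scanner's rule pack actually maps to.
rule_pack_maps_to = {"WSTG-ATHN", "WSTG-SESS", "WSTG-INPV", "WSTG-CRYP"}

detection_coverage = len(rule_pack_maps_to & wstg_categories) / len(wstg_categories)
missing = wstg_categories - rule_pack_maps_to

print(f"detection coverage vs benchmark: {detection_coverage:.0%}")
print("categories with no rules:", sorted(missing))  # e.g. WSTG-ATHZ, WSTG-BUSL
```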

Verified coverage

The share of scanner-emitted findings that survived triage, were validated by a tester, and made it onto the deliverable. A high raw-finding count with low verified coverage is a scanner producing noise; a low raw-finding count with high verified coverage is a scanner that found a small number of real issues, which is also the shape a mature engagement usually has. Reporting verified coverage forces the conversation onto the work the team actually did rather than the work the scanner claimed.
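
Assuming triage outcomes are recorded per scanner-emitted finding, the verified-coverage number is a short calculation over those states (the state names are illustrative):

```python
from collections import Counter

# Illustrative triage outcomes for one scan's emitted findings.
triage_outcomes = [
    "verified", "false_positive", "informational", "verified",
    "false_positive", "duplicate", "false_positive", "verified",
]

counts = Counter(triage_outcomes)
verified_coverage = counts["verified"] / len(triage_outcomes)

print(f"verified coverage: {verified_coverage:.0%} "
      f"({counts['verified']} of {len(triage_outcomes)} emitted findings survived triage)")
```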

Where each class stops working

Knowing where a scanner class loses coverage is more useful than knowing what it covers at its best, because the gaps are where exposures actually live. The patterns below show up across most engagements and are worth recording on the scan record before the report is written.

External scans on assets that need authentication

The classic gap. The scanner walks the public marketing site, scores TLS and headers, lists open ports, and ships a report that names a few low-impact informational items. The application behind login is untouched, and a buyer reading the report assumes coverage extended to the authenticated surface. The fix is naming external-only scope explicitly in the report and pairing it with an authenticated scan or a manual engagement on the authenticated surface.

Authenticated DAST on heavy single-page applications

The crawler reaches the login, the framework state hides the rest, and the route tree the scanner builds is a small fraction of what users actually navigate. Coverage looks fine on the scanner's own metric (high crawl coverage of routes it discovered) and poor against the route inventory the application actually serves. The fix is supplying a route list (sitemap, OpenAPI, navigation map) and verifying the scanner reached the routes that matter.
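
One way to supply that route list, assuming the application publishes an OpenAPI document and the scanner accepts a seed-URL file (the base URL, file names, and the import format are assumptions, not a specific scanner's interface):

```python
import json

BASE_URL = "https://app.example.com"   # hypothetical authenticated application

# Turn the OpenAPI path inventory into seed URLs the crawler can be pointed at,
# and keep a short list of routes that must be confirmed as reached after the scan.
with open("openapi.json") as fh:
    paths = sorted(json.load(fh).get("paths", {}))

with open("seed_urls.txt", "w") as fh:
    for path in paths:
        fh.write(f"{BASE_URL}{path}\n")

must_reach = ["/admin/users", "/billing/invoices", "/settings/roles"]  # illustrative
print(f"wrote {len(paths)} seed URLs; verify the scan log shows these were reached:")
print("\n".join(must_reach))
```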

SAST on dynamic and reflection-heavy code

Static analysis loses traction on code that resolves types, methods, or templates at runtime, on heavy use of metaprogramming, and on framework patterns the rule pack does not model. The scanner emits low counts and the engineering team reads that as a clean codebase. The fix is supplementing SAST with a manual code review on the sensitive paths and recording on the scan record which language and framework patterns were not analysed.
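
A rough way to locate the code static analysis is likely to skate over, assuming a Python codebase under src/ and a handful of dynamic-dispatch markers (the marker list is an assumption and will both over- and under-match; tune it to the languages and frameworks in the repository):

```python
from pathlib import Path

# Dynamic-dispatch and metaprogramming markers that static rules often fail to follow.
MARKERS = ("getattr(", "setattr(", "eval(", "exec(", "importlib.import_module", "__getattr__")

flagged: dict[str, list[str]] = {}
for path in Path("src").rglob("*.py"):
    text = path.read_text(errors="ignore")
    hits = [m for m in MARKERS if m in text]
    if hits:
        flagged[str(path)] = hits

print(f"{len(flagged)} files with patterns worth a manual review:")
for path, hits in sorted(flagged.items()):
    print(f"  {path}: {', '.join(hits)}")
```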

SCA on transitive and vendored dependencies

SCA tooling reads manifests well; it reads vendored copies of libraries, monkey patches, and forks badly. Transitive dependencies pulled by build tools that the scanner does not understand are invisible. The fix is verifying which manifest paths were resolved, listing the package managers and lock files in scope, and flagging vendored directories for manual inspection.
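
A sketch that walks a repository for the manifests and lock files the SCA tool should have resolved and flags likely vendored directories for manual inspection (the file and directory names are common conventions, not an exhaustive list):

```python
from pathlib import Path

# Common manifest / lock file names and vendored-directory conventions; extend per ecosystem.
MANIFESTS = {"package.json", "package-lock.json", "requirements.txt", "poetry.lock",
             "Pipfile.lock", "go.mod", "go.sum", "pom.xml", "Gemfile.lock", "Cargo.lock"}
VENDOR_DIRS = {"vendor", "vendored", "third_party", "thirdparty", "libs"}

repo = Path(".")
found_manifests = [p for p in repo.rglob("*") if p.name in MANIFESTS]
vendored = [p for p in repo.rglob("*") if p.is_dir() and p.name.lower() in VENDOR_DIRS]

print("manifest paths the SCA run should list as resolved:")
for p in found_manifests:
    print(f"  {p}")
print("vendored directories to flag for manual inspection:")
for p in vendored:
    print(f"  {p}")
```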

Every class on business logic

No scanner class meaningfully covers business logic flaws. Price manipulation, workflow bypass, multi-step authorisation issues, and chained exploits across otherwise low-severity findings all sit outside the detection envelope of all four classes. The fix is a manual testing phase scoped against the application's business model rather than its rule-matchable surface.

Sequencing scanner output with manual testing

The strongest engagements sequence scanner output and manual testing rather than choosing between them. The sequence below is the shape that produces a deliverable where the coverage claim is defensible and the manual testing time goes to the surface the scanner cannot reach.

  1. Run the scanners early. External and authenticated DAST, SAST, and SCA all run in parallel against the in-scope inventory. Output lands on the engagement record as draft findings rather than going directly to the report.
  2. Triage scanner output. Each finding gets reproduced, validated, suppressed with a reason, or kept as informational. The scanner's contribution is the shortlist, not the deliverable.
  3. Identify the blind zone. The classes the scanners cannot cover (business logic, multi-step authorisation, chained exploits) get scoped against the application's functional surface. The blind zone is where the tester's time goes.
  4. Run manual testing in the blind zone. The manual phase produces findings the scanners structurally cannot. Those findings are tagged separately on the engagement record so the report can show the reader where each finding came from.
  5. Report by source. The deliverable decomposes findings by source (scanner-derived, manually discovered) so the buyer can read the testing depth honestly. Mixing sources without distinction obscures the work.

Sequencing this way is also the structure that survives audit. The scan record names the scope, the depth, the authentication state, and the exclusions. The manual testing record names the blind zone the manual phase was scoped against. Together they form a coverage envelope that can be defended on the next renewal cycle.
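
A minimal shape for that record, with scope, depth, authentication state, and exclusions held alongside findings tagged by source (the field names are illustrative, not SecPortal's schema):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    severity: str
    source: str              # "scanner" or "manual"
    triage_state: str        # e.g. "verified", "false_positive", "informational"

@dataclass
class ScanRecord:
    scope: list[str]
    scanner_classes: list[str]
    authentication_state: str
    exclusions: list[str] = field(default_factory=list)
    findings: list[Finding] = field(default_factory=list)

    def by_source(self) -> dict[str, int]:
        counts: dict[str, int] = {}
        for f in self.findings:
            counts[f.source] = counts.get(f.source, 0) + 1
        return counts

record = ScanRecord(
    scope=["app.example.com"],
    scanner_classes=["authenticated_dast"],
    authentication_state="authenticated as standard user",
    exclusions=["/admin (out of scope by agreement)"],
    findings=[
        Finding("Reflected XSS in search", "medium", "scanner", "verified"),
        Finding("Workflow bypass in checkout", "high", "manual", "verified"),
    ],
)
print(record.by_source())   # e.g. {'scanner': 1, 'manual': 1}
```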

How compliance frameworks read coverage

Auditors do not read scanner coverage as a single number. They read it as a paired statement of scope and execution: what was in scope, what depth of scan was run, what authentication state was used, what was excluded, and how the findings flow into remediation and verification. Several frameworks anchor that expectation.

  • PCI DSS v4.0 expects internal and external vulnerability scanning at defined intervals, with critical and high vulnerabilities addressed and re-scanned. The scope of the scan and the authentication state are part of the evidence; a scan record without those is a finding in the audit itself. [5]
  • ISO 27001:2022 Annex A 8.8 (management of technical vulnerabilities) expects vulnerabilities to be identified and addressed. The audit reads scanner output without scope and exclusion records as incomplete evidence; coverage is a programme statement, not a scanner statistic. [6]
  • SOC 2 trust services criteria (CC4 in particular) expect monitoring activities to identify deficiencies. A scanning programme without recorded coverage gaps is read as either incomplete monitoring or missing documentation; both are reportable. [7]
  • NIST SP 800-115 frames technical security testing as one component of a wider assessment, with scanner output explicitly named as input rather than conclusion. Programmes that treat scanner output as the deliverable are not aligned with the underlying methodology guidance. [3]

None of those frameworks mandate a specific scanner class. They all expect the programme to define the coverage envelope, justify it, and produce evidence of execution. The structural fix is recording scope, depth, authentication state, and exclusions on the scan record, on the engagement record, and on the deliverable.

An operational checklist for coverage you can defend

At scope definition

  • The asset inventory is current and explicit, not derived from a stale export.
  • External, authenticated, code, and dependency scope are named separately.
  • Authentication mechanisms (login, tokens, SSO) are listed where applicable.
  • Out-of-scope assets are named explicitly so the report can defend the boundary.

At scan execution

  • Scan depth, rate, and authentication state are recorded on the scan record.
  • Surface coverage is measured against the inventory, not against the crawl.
  • Detection coverage is checked against an external benchmark such as OWASP WSTG or ASVS.
  • Failed or skipped modules are recorded with a reason.

At triage and manual testing

  • Scanner findings are triaged before the manual phase scope is finalised.
  • The manual phase is scoped against the blind zone the scanners cannot reach.
  • Scanner-derived and manually discovered findings are tagged separately.
  • Verified coverage is the metric the report leads with, not raw finding count.

At delivery

  • The report names the scanner classes used and the surface they covered.
  • The exclusions and unreached classes are stated explicitly.
  • Findings are grouped by source so the testing depth is readable.
  • The renewal conversation can reference the coverage envelope, not a single number.

How SecPortal records scanner coverage

SecPortal runs scanner classes against a single engagement record so the coverage envelope is explicit on every deliverable. External scanning covers TLS, headers, ports, DNS, subdomains, cloud exposure indicators, and email authentication across sixteen modules. Authenticated scanning covers authentication, session handling, input validation, and API checks across seventeen modules. Code scanning runs SAST and SCA against repositories connected through GitHub, GitLab, or Bitbucket.

The output from every scanner class lands as draft findings on the engagement record rather than going directly to the report. Triage happens through the scanner result triage workflow, so the verified findings, the suppressed false positives, and the informational observations are separable on the deliverable. The scanner false positives guide covers how to drive verified coverage up by tightening triage discipline, and the scan scoping and target selection guide covers the upstream scope decisions that shape what the scanner sees in the first place.

Coverage holds up over time only when the cadence between scans is tuned to the asset and the change axis. The scan scheduling and baseline cadence guide covers per-asset cadence, change-triggered scans, and how to read the diff between two scan baselines so a coverage drop is not misread as remediation.

For the broader feature surface, the external scanning feature covers the unauthenticated modules, the authenticated scanning feature covers the application-layer modules, and the code scanning feature covers SAST and SCA. The findings management feature holds the audit trail so scope, depth, authentication state, and exclusions stay on the record alongside the findings. When an authenticated scan reports completion but the route inventory looks unauthenticated, the authenticated scanner failure modes guide covers the six failure classes that account for most of the gap and how to record the authentication state on the scan.

For the engagement-level view of how scanner output feeds into report delivery and retests, the penetration testing use case covers the kickoff-to-delivery cycle, and the continuous penetration testing workflow covers ongoing programmes where coverage is measured across cycles rather than on a single engagement.

Related vulnerability classes that depend on scanner depth

Some vulnerability classes are reliably found by scanners; others sit at the edge of scanner coverage and routinely need manual verification. Dedicated pages describe the classes where the difference between scanner-derived and manually discovered findings is largest.

For the wider picture of how scanner-derived findings get triaged into a deliverable, the authenticated versus unauthenticated scanning blog covers the practical depth difference, and the severity calibration research covers how to score scanner-derived findings consistently against CVSS and SSVC.

Scope and limitations of this guide

Scanner coverage is a programme discipline, not a tool feature. No scanner produces complete coverage on its own; no platform turns incomplete scanner output into a complete engagement. The work that closes the gap is human triage, manual testing against the blind zone, and explicit recording of scope and exclusions on the deliverable. SecPortal holds the record so the coverage envelope is durable and auditable; the testing itself is the work the team does.

Coverage claims that depend on a single percentage almost always overstate the engagement. Coverage claims that decompose into surface, detection, and verified signals, that name the scanner classes used and the classes excluded, and that pair scanner output with a manual phase against the blind zone are the claims that survive a renewal review and an audit.

Sources

  1. OWASP, Web Security Testing Guide (WSTG)
  2. OWASP, Application Security Verification Standard (ASVS)
  3. NIST, SP 800-115: Technical Guide to Information Security Testing and Assessment
  4. NIST, SP 800-53 Rev. 5: Security and Privacy Controls
  5. PCI Security Standards Council, PCI DSS v4.0
  6. ISO/IEC, ISO 27001:2022 Information Security Management
  7. AICPA, SOC 2 Trust Services Criteria
  8. PTES, Penetration Testing Execution Standard
  9. OWASP, API Security Top 10
  10. CISA, Known Exploited Vulnerabilities Catalog
  11. SecPortal, External Scanning Feature
  12. SecPortal, Authenticated Scanning Feature
  13. SecPortal, Code Scanning Feature
  14. SecPortal, Scanner Result Triage Use Case
  15. SecPortal, Scanner False Positives Guide

Run scanners on a record that names what they covered

SecPortal runs external, authenticated, and code scanning on a single engagement record, holds scope and exclusions on the scan, and keeps scanner-derived and manually discovered findings separable on the deliverable.