Vulnerability Scan Scoping and Target Selection: A Practical Guide

Most arguments about a vulnerability scan are arguments about the scope, not the scan. A scan that hits the wrong asset is a legal problem; a scan that misses an expected asset is a delivery problem; a scan that runs at the wrong depth is a noise problem. All three start with a scope decision that was never properly recorded. This guide covers how to choose targets, document inclusions and exclusions, and produce a scope record that the tester, the customer, and the auditor can all read the same way.

The audience is anyone who runs scans for a living: penetration testing firms scoping customer engagements, internal security teams scoping their own assets, and consultancies building repeatable scan programmes across portfolios. The principles are the same; the friction is in the discipline of recording the decisions before any traffic leaves the platform.

What scan scope actually answers

Scope is not a list of domains in a kickoff email. Scope answers six questions about every asset that will see scanner traffic, and any answer that is missing from the record produces ambiguity that gets paid for later.

  • What asset. Canonical identifier: hostname, IP range with CIDR, repository URL with branch, container image with tag.
  • Which scan class. External, authenticated, SAST, SCA, configuration review, or a named combination.
  • What authentication. None, role-scoped, full admin, or specific token, with the credential reference rather than the secret.
  • What depth and rate. Passive enumeration, polite request rate, aggressive depth, with concrete caps where the scanner supports them.
  • In or out of scope. Whether the asset is in scope, expressly out of scope, or out of scope with compensating coverage elsewhere.
  • Why and who approved. A one-sentence rationale and the named person who authorised the inclusion or exclusion.

Pages, paragraphs, and Slack threads do not carry these answers reliably. The fix is a structured scope record that travels with the engagement, gets versioned when scope changes, and is referenced from every scan job that runs against it.
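A minimal sketch of such a record as a Python dataclass. The field names and the completeness check are illustrative, not any platform's schema; the point is that every one of the six answers is a required field, not a footnote:

```python
from dataclasses import dataclass

@dataclass
class ScopeEntry:
    """One asset's scope record: all six questions, answered explicitly."""
    asset: str           # canonical identifier, e.g. "www.example.com" or "192.0.2.0/24"
    scan_class: str      # "external", "authenticated", "sast", "sca", ...
    authentication: str  # credential *reference*, never the secret itself
    depth_and_rate: str  # e.g. "passive enumeration" or "max 5 req/s"
    in_scope: bool       # True = in scope; False = expressly out of scope
    rationale: str       # one-sentence reason that survives the engagement
    approved_by: str     # named person who authorised the decision

    def is_complete(self) -> bool:
        # An empty answer is the ambiguity that gets paid for later.
        return all([self.asset, self.scan_class, self.authentication,
                    self.depth_and_rate, self.rationale, self.approved_by])

entry = ScopeEntry(
    asset="www.example.com",
    scan_class="external",
    authentication="none",
    depth_and_rate="max 5 req/s, polite depth",
    in_scope=True,
    rationale="Primary public web property named in the statement of work.",
    approved_by="J. Smith, Head of Security",
)
print(entry.is_complete())  # True
```

Versioning then falls out for free: serialise the list of entries, commit it alongside the engagement, and diff it when scope changes.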

Building the asset inventory

Scope cannot be defined against an inventory that is wrong. The first job is producing a single canonical asset list by reconciling three sources, not trusting any one of them.

Customer-provided list

The customer hands over a spreadsheet or a wiki page of assets they think are in scope. This is almost always the smallest of the three lists. Use it as the starting point, not the truth. Annotate it with what the customer expects each asset to be covered for, because that intent matters for the depth conversation later.

Technical inventory

Pulled from DNS records, certificate transparency logs, public cloud account metadata, code repositories, package manifests, and any continuous attack-surface discovery tooling already in place. This is usually larger than the customer list and includes assets the customer forgot they owned: legacy systems, staging environments, and acquired company infrastructure.

Business inventory

Pulled from procurement, finance, IT records, and any merger or acquisition history. This finds assets the technical inventory cannot see: paid SaaS, third-party hosted services on customer-owned domains, partner-operated infrastructure, and acquired company assets that have not yet been brought into the technical inventory. The gap between business and technical inventory is where shadow IT lives.

The reconciliation step asks: which assets are in all three lists, which are in two, which are in one, and why. Every difference produces a question for the customer and a decision for the scope. Skipping reconciliation is the most common source of out-of-scope traffic and unexpected coverage gaps that surface during the report review.
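The reconciliation step is plain set arithmetic over the three lists once the identifiers are canonical. A sketch with placeholder hostnames; anything missing from one or more lists becomes a question for the customer:

```python
# Three inventories as sets of canonical asset identifiers (illustrative data).
customer = {"www.example.com", "api.example.com"}
technical = {"www.example.com", "api.example.com",
             "staging.example.com", "legacy.example.com"}
business = {"www.example.com", "crm.saasvendor.example", "legacy.example.com"}

decisions_needed = {}
for asset in sorted(customer | technical | business):
    present_in = [name for name, inventory in
                  (("customer", customer), ("technical", technical),
                   ("business", business))
                  if asset in inventory]
    if len(present_in) < 3:
        # Every difference produces a question and a scope decision.
        decisions_needed[asset] = present_in
        print(f"{asset}: only in {', '.join(present_in)} -> needs a scope decision")
```

With this data, four of the five assets get flagged; only www.example.com appears in all three lists and passes reconciliation silently.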

Scoping by scan class

Each scan class has its own scoping vocabulary. Treating them all the same produces a scope that does not actually constrain any of them.

External (unauthenticated) scope

A list of public-facing assets reachable without credentials: domains, subdomains, public IP ranges, ports the scan is allowed to touch, and any edge-served paths that need explicit handling. Common pitfalls: forgetting to scope shared infrastructure (cloud provider control plane, third-party CDNs, DNS resolvers, payment processors) explicitly out, and assuming a wildcard subdomain rule covers assets that resolve to third-party CNAMEs.

Authenticated DAST scope

Adds the application layer behind login: roles, tenants, API keys, OAuth scopes, and the specific paths or operations the scanner may exercise. Authenticated scope nearly always needs explicit exclusions for destructive operations (delete, transfer, payment, account creation at scale) and high-rate endpoints. Test accounts must be named, with the credential reference rather than the secret on the scope record itself.

SAST scope

A list of repositories, branches, and language paths the analyser is allowed to read. Scope decisions: monorepo subprojects in or out, vendored dependencies in or out, generated code in or out, and what to do with branches that are about to be deprecated. Without those decisions, SAST results are dominated by paths nobody ships.

SCA scope

A list of dependency manifests (package.json, requirements.txt, go.mod, pom.xml, Gemfile, Cargo.toml) the scanner can read, with explicit handling for transitive dependencies and whether private registries are reachable. Scope decisions: production manifests only, or also dev and test; first-party only, or also vendored third-party. The blast radius of getting this wrong is alert volume rather than alert quality, but the result is the same: triage time spent on findings nobody will fix.
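The manifest-selection decision can be sketched as a walk over a checkout that collects the named manifest files and prunes the trees the scope excludes. The excluded directory names here are illustrative assumptions; the real list comes from the scope record:

```python
import os

# Manifest filenames named in the SCA scope (mirrors the list in the prose).
MANIFESTS = {"package.json", "requirements.txt", "go.mod",
             "pom.xml", "Gemfile", "Cargo.toml"}
# Example exclusions: vendored third-party code and dev/test trees.
EXCLUDED_DIRS = {"node_modules", "vendor", "test", "tests"}

def find_manifests(root: str) -> list[str]:
    """Walk a checkout and return the manifest paths the SCA scope allows."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED_DIRS]
        found.extend(os.path.join(dirpath, f)
                     for f in filenames if f in MANIFESTS)
    return found
```

Whether a pruned tree is "out of scope" or "out of scope with compensating coverage" is still a recorded decision; the walk only enforces it.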

Inclusions and exclusions, recorded properly

Exclusions are the part of the scope that audits read first. An exclusion without a recorded reason and a compensating coverage statement is functionally a blind spot. Three conditions need to be on the record for every exclusion.

  • What is excluded: the asset by canonical identifier, not by general description. "Marketing infrastructure" is not specific enough; "www.example.com and the IP range 192.0.2.0/24" is.
  • Why it is excluded: a one-sentence reason that survives the engagement. Reasons that age well include third-party ownership, published deprecation date, separate compliance regime, regulatory restriction. The reason "the team did not want it tested" ages badly.
  • What compensates for the exclusion: the alternative coverage that applies to the excluded surface. Vendor security assessment, separate risk acceptance, manual testing on a different cycle, or a decision to defer until the asset is decommissioned. Without this field, exclusions accumulate into a coverage gap that nobody owns.
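The three-condition rule is mechanical enough to enforce in tooling. A sketch, where the field names are assumptions rather than any standard schema:

```python
def validate_exclusion(exclusion: dict) -> list[str]:
    """Return the missing fields; empty list means the exclusion is defensible on paper."""
    required = ("what", "why", "compensating_coverage")
    return [f for f in required if not exclusion.get(f)]

exclusion = {
    "what": "192.0.2.0/24 (partner-operated payment gateway)",
    "why": "Third-party ownership; tested under the partner's own assessment cycle.",
    "compensating_coverage": "",  # the blind spot: nobody owns this gap yet
}
print(validate_exclusion(exclusion))  # ['compensating_coverage']
```

A scope record that refuses to sign off while any exclusion fails this check turns the regulator's question into a pre-kickoff question.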

The pattern that fails is the running list of "out of scope" assets at the end of a kickoff document, with no rationale and no alternative coverage. The pattern that works is a scope record where every exclusion is annotated as if a regulator will ask about it, because eventually one will.

Common scoping failure modes

Five failure modes account for most of the scope-related disputes that show up in engagements. Each one is preventable in scoping rather than fixable in delivery.

Wildcard subdomains without exclusions

A scope of "*.example.com" covers test environments, partner-hosted subdomains, marketing campaigns on third-party platforms, and acquired company infrastructure that nobody intended to test. Wildcards belong on the scope only with an explicit exclusion list and a procedure for handling subdomains that appear during the engagement.
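A wildcard entry paired with its exclusion list can be sketched with Python's fnmatch; the hostnames are placeholders. Any subdomain that appears mid-engagement gets the same check before traffic is sent:

```python
import fnmatch

WILDCARD = "*.example.com"
# Explicit exclusions that must accompany any wildcard scope entry:
# partner-hosted and third-party-CNAME'd subdomains nobody intended to test.
EXCLUSIONS = {"partner.example.com", "campaign.example.com"}

def in_scope(host: str) -> bool:
    """In scope only if the host matches the wildcard AND is not excluded."""
    return fnmatch.fnmatch(host, WILDCARD) and host not in EXCLUSIONS

for host in ("api.example.com", "partner.example.com", "evil.example.org"):
    print(host, in_scope(host))
```

A production check would also resolve each host and flag third-party CNAME targets, since the wildcard match alone says nothing about who actually operates the asset.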

Cloud account boundaries unclear

A scope that lists a cloud account by name without specifying which assets in the account are in scope produces traffic against shared infrastructure (control plane, billing endpoints, identity provider) that the cloud provider considers out of bounds. Scope each asset by tag, region, or project, not by account.

Authentication state undefined

"Test the application" without a named role or credential reference is not a scope. Different roles see different surface, and the scan with admin credentials covers different code than the scan with a basic user role. Each role that gets tested needs its own scope entry with the credential reference, the allowed operations, and the exclusions specific to that role.

Rate limits left at default

A scanner running at default rate against a production asset can degrade service or trip rate-limiting controls that mask findings. The scope record needs concrete request-per-second caps where the scanner supports them, and a named exception process for endpoints that need lower rates than the default.
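The cap from the scope record translates into an enforced minimum interval between requests. Real scanners expose this as configuration; the sketch below only illustrates the mechanism:

```python
import time

class RateCap:
    """Enforce a concrete requests-per-second cap before each scanner request."""
    def __init__(self, max_rps: float):
        self.min_interval = 1.0 / max_rps
        self.last_sent = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep the observed rate under the cap.
        now = time.monotonic()
        delay = self.min_interval - (now - self.last_sent)
        if delay > 0:
            time.sleep(delay)
        self.last_sent = time.monotonic()

cap = RateCap(max_rps=5)  # the concrete number from the scope record
for _ in range(3):
    cap.wait()            # call before each request the scanner sends
```

The named exception process then amounts to a per-endpoint map of lower caps that overrides this default, recorded on the same scope entry.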

Scope frozen at kickoff

A scope that does not get revisited during the engagement misses new assets the customer has stood up, retired assets that should have been removed, and threats that have shifted since the scope was signed. Treat the scope as a versioned record, not a one-time decision, and run a mid-engagement scope review on any engagement longer than two weeks.

An operational scoping checklist

Before kickoff

  • Customer-provided asset list received and annotated with intent.
  • Technical inventory pulled from DNS, certificates, cloud, and repos.
  • Business inventory pulled from procurement, finance, and acquisition records.
  • Reconciliation document produced with named differences and questions.

At scope sign-off

  • Each in-scope asset has all six scope fields recorded.
  • Each exclusion has what, why, and compensating coverage on the record.
  • Authentication state names roles and credential references, not secrets.
  • Rate and depth caps are concrete numbers where the scanner supports them.
  • The named authoriser has signed the scope and the legal attestation.

During the engagement

  • New assets discovered during testing are paused until scope is updated.
  • Scope addenda are versioned, dated, and re-authorised by the named approver.
  • Out-of-scope traffic is investigated, logged, and reported the same day.
  • Mid-engagement scope review runs on engagements longer than two weeks.

At delivery

  • The scope record ships alongside the report so the reader can map findings to scope.
  • Exclusions are listed in the report with the rationale and compensating coverage.
  • Scope deviations during the engagement are noted with the resolution.
  • The next engagement opens with a scope diff against the previous record.
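The opening scope diff is mechanical when both records are keyed by canonical identifier. A sketch with illustrative entries, where each value stands in for the full six-field record:

```python
def scope_diff(previous: dict, current: dict) -> dict:
    """Diff two scope records keyed by canonical asset identifier."""
    prev_assets, curr_assets = set(previous), set(current)
    return {
        "added":   sorted(curr_assets - prev_assets),
        "removed": sorted(prev_assets - curr_assets),
        "changed": sorted(a for a in prev_assets & curr_assets
                          if previous[a] != current[a]),
    }

previous = {"www.example.com": "external, 5 req/s",
            "old.example.com": "external, passive"}
current  = {"www.example.com": "external, 10 req/s",
            "api.example.com": "authenticated, basic role"}
print(scope_diff(previous, current))
```

Every entry in "added", "removed", or "changed" is a question for the kickoff meeting rather than a surprise in the report review.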

How SecPortal records scope and enforces it

SecPortal treats scope as platform state, not paperwork. The scope record sits on the engagement, the scan jobs reference it, and three enforcement points block out-of-scope traffic before it leaves the platform.

Domain ownership verification

Every external scan target must be a verified domain in the workspace. The workspace owner proves control through DNS TXT, file upload, or HTML meta tag before any scan job can target it. The verification record is durable and travels with the engagement. Domain verification covers the full mechanism.

Legal attestation

Before scanning begins, the user signs a legally binding attestation confirming authorisation for the specific assets in scope. The attestation records the IP address and timestamp immutably, so the audit trail can be reconstructed from the platform record rather than from external paperwork.

Blocklist and rate limiting

Government, military, critical infrastructure, and cloud provider management domains are blocked at the platform level regardless of any verification claim. Scan frequency, concurrency, and per-domain rate limits are bounded by plan tier so a misconfigured scope cannot trigger a destructive rate.

For the scan classes themselves, the external scanning, authenticated scanning, and code scanning features each carry their own scope settings (target lists, credential references, repository selections) that get bound to the engagement record. The scan output, the scope it ran against, and the verification evidence travel together through delivery and audit.

Where scoping connects to the rest of the engagement

Scope is the upstream decision; everything downstream inherits it, and every later stage of the engagement fails quietly when the scope is wrong.

Scope and limitations

Scoping is a discipline, not a tool. No platform produces a defensible scope without the up-front work of building the inventory, naming the assets, defining the depth, and recording the rationale. SecPortal holds the scope as platform state, enforces verification and attestation before scanning, and keeps the scope record travelling with the engagement; the scoping decisions themselves are human work, and the quality of the scan output depends on the quality of those decisions.

Programmes looking for an automated scoping tool usually find one of two things. The tool covers the inventory side and produces an asset list, but cannot make the inclusion or exclusion decisions that depend on business context. Or the tool wraps a template document that asks the right questions but does not bind to the actual scan jobs that follow. Both failure modes are recoverable when the scope record carries the six fields and the engagement references it; neither is recoverable when the scope is silent.

Sources

  1. OWASP, Web Security Testing Guide (WSTG)
  2. NIST, SP 800-115: Technical Guide to Information Security Testing and Assessment
  3. PTES, Penetration Testing Execution Standard: Pre-engagement Interactions
  4. CREST, Penetration Testing Guide
  5. PCI Security Standards Council, PCI DSS v4.0
  6. CISA, Vulnerability Disclosure Policy Template
  7. AWS, Penetration Testing Policy
  8. Microsoft, Cloud Penetration Testing Rules of Engagement
  9. SecPortal, Domain Verification Feature
  10. SecPortal, External Scanning Feature
  11. SecPortal, Authenticated Scanning Feature
  12. SecPortal, Pentest Scope Calculator

Run scans against a scope that survives audit

SecPortal binds the scope to the engagement, enforces verification and attestation before scanning, and keeps the scope record travelling with the scan output through delivery and audit.