Security Tool Coverage Overlap: Mapping the Scanner Stack
Coverage overlap is the catalogue-level question of which weakness classes each tool actually inspects, where two or more tools inspect the same class, and where the stack leaves a class uninspected. Most enterprise programmes assemble SAST, SCA, DAST, container, IaC, secrets, ASM, pentest, and bug bounty by category and then report coverage as the union of installed tools. The union view hides both the duplicate signal (two or three tools inspecting the same class without a canonical-record rule) and the silent gaps (entire weakness classes that no tool inspects on a given asset type). The disciplined view is a matrix: weakness class along the rows, tool category along the columns, one of four states per cell. The matrix view changes the question from "do we have SAST" to "which classes are silent in our stack today and which tool category closes them".1,2,3,4
This research lays out how coverage overlap behaves across the standard nine-category enterprise stack. It covers what each tool category structurally covers and structurally cannot reach, the six gap patterns that recur across enterprise programmes, the matrix construction discipline that survives an audit and a tool rotation, the procurement read of the matrix, and the operating discipline that keeps the matrix accurate between procurement cycles. The argument is not that more tools cover more classes; the argument is that the right tools, with the canonical-record-per-class rule applied, cover more classes than a longer list of overlapping tools without the rule.5,8,16,17,18
Coverage overlap is a class-level question, not a tool-list question
Coverage overlap is read at the catalogue level rather than at the run-time level. The catalogue level asks which CWE classes a tool category structurally inspects and which it structurally cannot reach. The run-time level asks whether two scanner outputs describe the same instance: same parameter, same endpoint, same scan window. Both questions matter, but they are different questions and they have different answers. Programmes that conflate them tend to solve the wrong problem first.
The catalogue-level coverage matrix is built per asset type (web application, API, mobile application, container workload, infrastructure as code, identity and access surface) because the same scanner provides different coverage on different surfaces. A SAST run against a Node.js web application covers a different set of CWE classes than the same SAST run against a Go service, and both are different from the same SAST run against a Python data pipeline. The matrix that survives is asset-type-specific rather than aspirational.1,2,3
The run-time overlap question (which tool is the canonical record for a given finding instance) is downstream of the catalogue-level question. Programmes that solve the run-time question first end up with a clean queue and a stack with silent gaps; programmes that solve the catalogue question first end up with a defensible map and a queue that needs cleanup. The disciplined sequence is matrix first, then deduplication, then canonical-record routing. The security finding deduplication economics research covers the carrying-cost-versus-discipline-cost ledger that anchors the deduplication investment argument once the catalogue matrix is in place.
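The distinction is concrete enough to express in code. A minimal sketch, with assumed field names, of the two different keys the two questions use: the catalogue question keys on asset type, tool category, and weakness class, while the run-time question keys on the concrete instance.

```python
from dataclasses import dataclass

# Catalogue-level key: does this tool category structurally inspect
# this weakness class on this asset type? One key per matrix cell.
@dataclass(frozen=True)
class CatalogueKey:
    asset_type: str     # e.g. "web-app", "api", "container-workload"
    tool_category: str  # e.g. "sast", "dast", "sca"
    cwe_family: str     # e.g. "injection"

# Run-time key: do two scanner outputs describe the same finding
# instance? Used for deduplication, downstream of the catalogue read.
@dataclass(frozen=True)
class InstanceKey:
    asset: str     # concrete asset, e.g. "payments-api"
    location: str  # endpoint plus parameter, or file plus line
    cwe: str       # specific CWE, e.g. "CWE-89"
```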
The nine-category enterprise stack
The standard enterprise security testing stack is built from nine tool categories, each covering a structurally different surface. The categories overlap by design: SAST and DAST cross-check at the injection-class boundary; SCA and container scanning cross-check at the CVE-keyed dependency boundary; pentest and bug bounty cross-check at the human-judgement boundary. The cross-checks are deliberate and useful when the canonical-record rule is applied; the cross-checks become noise when no canonical record exists.3,4,7,17,18
| Category | What it inspects | Structural limit |
|---|---|---|
| SAST | First-party source code or compiled bytecode against rule sets that map to CWE classes (injection, output handling, crypto API misuse, input validation, code-resource handling). | Cannot reliably see runtime behaviour, configuration-derived behaviour, or production-data-dependent classes. |
| SCA | Dependency manifest against CVE, GHSA, OSV, vendor advisory databases. Produces CVE-keyed findings and licensing notes. | Cannot inspect first-party code, runtime-only dependencies, or transitive dependencies the manifest does not list. |
| DAST | Running application via HTTP request and response inspection. Covers runtime injection, authn/authz flows, configuration weaknesses, header and TLS configuration, boundary rate-limiting. | Cannot see source code, worker processes, scheduled jobs, or stored injection chains that need follow-up requests to surface. |
| Container scanning | CVE-keyed findings on operating-system packages, language runtimes, base images, plus image-manifest privilege configuration. | Overlaps with SCA on application dependencies inside the image; does not see runtime image behaviour. |
| IaC scanning | Configuration weaknesses in Terraform, CloudFormation, ARM, Bicep, Kubernetes manifests, Helm charts, Docker Compose. | Inspects the declared configuration, not the deployed configuration after drift; does not see runtime behaviour. |
| Secrets scanning | Hard-coded credentials and cryptographic keys in commits, branches, build artefacts, and (sometimes) running processes. | Cannot validate whether the secret is currently active without secondary verification. |
| ASM / EASM | External attack surface discovery: certificates, subdomains, exposed administrative interfaces, forgotten test environments, third-party services on owned domains. | Discovery only; reads the asset axis, not the weakness-class axis. |
| Penetration testing | Scoped, time-boxed, methodology-anchored coverage of business-logic weaknesses, chained vulnerabilities, insecure-design classes, context-specific exploitation. | Coverage is bounded by scope, time, and tester focus rather than by technology. |
| Bug bounty / disclosure | Continuous, broader, less methodologically constrained discovery; long-tail residual coverage that scoped testing and automated scanning miss. | Coverage is opportunistic rather than systematic; canonical-record assignment requires triage discipline. |
The matrix-level question is which categories are present in the stack, what each category inspects on each asset type, and which weakness classes are not covered by any category in the stack. The categories are not interchangeable, and adding a second tool inside the same category usually does less for the silent-gap question than adding one tool in a category the stack does not yet have.5,8,16
SAST coverage in detail
SAST inspects source code or compiled bytecode against rule sets. The coverage is strongest where the weakness pattern is visible in source and weakest where the weakness depends on runtime, configuration, or production data.17
Strong coverage classes
Injection patterns (CWE-89, CWE-78, CWE-77, CWE-94), output-handling classes (CWE-79, CWE-91, CWE-643), cryptographic API misuse (CWE-327, CWE-328, CWE-329, CWE-326), input-validation classes (CWE-20, CWE-22), and code-resource handling (CWE-400, CWE-401, CWE-415 in compiled languages). Authentication and session-management classes are visible in source when the implementation is in-tree (CWE-287, CWE-384, CWE-352 in some configurations).
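As a concrete illustration of what "visible in source" means, here is the CWE-89 shape a SAST rule keys on, in a hypothetical Python snippet: the unsafe variant interpolates input into the query string, the safe variant parameterises it.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # CWE-89: user input interpolated into the SQL string is a
    # source-visible pattern a SAST rule matches directly.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'")

def find_user_safe(conn: sqlite3.Connection, name: str):
    # The parameterised form the same rule recognises as safe.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,))
```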
Weak coverage classes
Authentication-state-dependent flows, configuration-derived behaviour, deployment-context-derived behaviour, and findings that depend on production data (CWE-200, CWE-201, CWE-209) are not visible to SAST without runtime context. Business-logic weaknesses (CWE-840, CWE-841) and insecure-design classes (CWE-1008, CWE-1018, CWE-1019) are also weakly covered.
Where SAST overlaps with other tools
SAST overlaps with DAST on injection classes that are visible in both source and runtime. SAST overlaps with secrets scanning on hard-coded credential classes (CWE-798, CWE-321). SAST overlaps with container scanning on first-party code that ships into the image. The canonical-record rule is usually that SAST owns the source-pattern read and DAST owns the runtime-confirmation read.
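One way to make that rule operational rather than verbal is a lookup keyed by class family and read type. The entries below are illustrative assumptions mirroring the rules named in the text; the real table is a programme decision.

```python
# Illustrative ownership table, not a standard: which category is
# the canonical record for a given (class family, read type) pair.
CANONICAL_OWNER = {
    ("injection", "source-pattern"): "sast",
    ("injection", "runtime-confirmation"): "dast",
    ("hardcoded-credential", "source-pattern"): "secrets",
    ("known-vulnerable-dependency", "manifest"): "sca",
    ("known-vulnerable-dependency", "image"): "container",
}

def canonical_owner(cwe_family: str, read_type: str) -> str | None:
    """Return the category whose finding is the canonical record,
    or None when no entry exists (a cell the matrix must name)."""
    return CANONICAL_OWNER.get((cwe_family, read_type))
```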
Operational considerations
Coverage on a given codebase depends on the language coverage of the rule set, the rule customisation discipline (default rule sets miss organisation-specific patterns; custom rule sets that drift from upstream miss new classes), and the integration model (commit-time, pre-merge, scheduled). The SAST versus SCA explainer covers the operational shape in more detail.
SCA coverage in detail
SCA inspects the dependency manifest against vulnerability databases. SCA findings are CVE-keyed rather than CWE-keyed, which makes the matrix mapping less direct than for SAST and DAST.19
Strong coverage classes
CWE-1104 use of unmaintained third-party components, CWE-1352 (the OWASP 2021 vulnerable-and-outdated-components category), CWE-937 (the OWASP 2013 known-vulnerable-components category), and CVE-keyed findings for declared dependencies. Licence flagging where SCA also runs as a software composition compliance tool.
Weak coverage classes
First-party code (handed to SAST), runtime-only dependencies that are not declared in the manifest, transitive dependencies that the build resolves at install time, and the reachability question (whether the vulnerable function is actually called by the application). Reachability accuracy varies significantly between tools and configurations.
Where SCA overlaps with other tools
SCA overlaps with container scanning on application dependencies that ship inside the image; the two tools usually disagree on which is the canonical record (SCA reads the manifest, container scanner reads the image). SCA overlaps with secrets scanning on hard-coded credentials in dependency configuration. The canonical-record rule is usually that SCA owns the manifest-keyed read and container scanning owns the image-keyed read for the same CVE.
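A sketch of the merge that keeps one canonical record per (CVE, component) pair while preserving both evidence loci; the finding shape is an assumption for illustration.

```python
from collections import defaultdict

def merge_cve_findings(findings: list[dict]) -> dict[tuple[str, str], dict]:
    """Group SCA and container findings by (CVE, component): one
    record per pair, with the manifest-keyed read (SCA) and the
    image-keyed read (container) kept as separate evidence entries.
    Assumed finding shape: {"cve", "component", "tool", "evidence"}."""
    merged: dict[tuple[str, str], dict] = defaultdict(lambda: {"evidence": {}})
    for f in findings:
        record = merged[(f["cve"], f["component"])]
        record["cve"], record["component"] = f["cve"], f["component"]
        record["evidence"][f["tool"]] = f["evidence"]  # "sca" or "container"
    return dict(merged)
```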
Operational considerations
Coverage depends on database freshness (NVD lag, GHSA latency, vendor-advisory currency), on manifest accuracy (lockfiles such as Pipfile.lock, package-lock.json, and yarn.lock versus unpinned upper bounds), on the tool's reach into transitive dependencies, and on whether the SBOM (CycloneDX, SPDX) is a build artefact or a static document. The SBOM guide covers the lineage discipline that makes SCA defensible.
DAST coverage in detail
DAST inspects a running application by sending requests and observing responses. Coverage is strongest where the weakness manifests in HTTP behaviour and weakest where the weakness lives below the boundary or depends on chained flows.18
Strong coverage classes
Runtime injection (CWE-89 visible in response, CWE-79 reflected XSS, CWE-78 visible in response or timing), authentication and authorisation flows (CWE-285, CWE-639, CWE-862, CWE-863, CWE-352, CWE-384), configuration weaknesses (CWE-16, CWE-756, CWE-200 visible in responses), TLS and HTTP header configuration, and rate-limiting weaknesses visible at the boundary (CWE-799, CWE-307).
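The structural idea behind the runtime-injection read fits in a few lines. A minimal reflected-injection probe (the CWE-79 shape) under obvious assumptions: real DAST adds payload encodings, response contexts, and authentication state.

```python
import secrets
import urllib.parse
import urllib.request

def reflected_probe(base_url: str, param: str) -> bool:
    """Send a unique marker in a query parameter and report whether
    the response body reflects it unencoded (reflected CWE-79 shape).
    Illustrative only: a real scanner varies encodings and contexts."""
    marker = f"<zz{secrets.token_hex(4)}>"
    url = f"{base_url}?{urllib.parse.urlencode({param: marker})}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode(errors="replace")
    return marker in body  # unencoded reflection => candidate finding
```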
Weak coverage classes
Source-code-pattern weaknesses (handed to SAST), worker-process behaviour, scheduled-job behaviour, stored injection chains that need follow-up requests, configuration not exposed through the request surface, and business-logic chains across multiple endpoints that exceed the test harness scope.
Where DAST overlaps with other tools
DAST overlaps with SAST on injection classes that are visible in both source and runtime. DAST overlaps with pentest on authn/authz flows where the test harness models role transitions but the pentest digs deeper into multi-step chains. DAST overlaps with ASM on exposed-administrative-interface classes where ASM identifies the asset and DAST inspects it. The canonical-record rule is usually that DAST owns the runtime-context read and SAST owns the source-pattern read for the same class.
Operational considerations
Authenticated DAST covers a structurally different class set than unauthenticated DAST; the difference is large enough that the matrix has to register the two as separate columns. Authenticated versus unauthenticated scanning covers the operational distinction. The DAST guide covers the coverage and limit picture in more detail.
Container, IaC, and secrets coverage in detail
Container, IaC, and secrets scanning each cover a deployment-time slice that the source-code and runtime tools do not see. The three tools are sometimes packaged together as cloud security posture management (CSPM) or cloud-native application protection platform (CNAPP), but the coverage matrix reads each as a separate column.
Container scanning
Strong on operating-system package CVEs, language runtime CVEs, base-image CVEs, and image-manifest privilege configuration (CWE-1104, CWE-250, CWE-272). Weak on runtime image behaviour and on first-party code inside the image (handed to SAST). Overlaps with SCA on application dependencies.
IaC scanning
Strong on configuration classes in declarative IaC (CWE-732, CWE-913, CWE-547, CWE-1104 against module references). Weak on the deployed configuration after drift, on imperative changes, and on runtime behaviour. Overlaps with container scanning on container-deployment manifests.
Secrets scanning
Strong on CWE-798 hard-coded credentials and CWE-321 hard-coded cryptographic keys across commits, branches, build artefacts, and (sometimes) running processes. Weak on whether the secret is currently active without secondary verification. Overlaps with SAST on credential-handling source patterns.
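Secondary verification is credential-type-specific. A minimal sketch for one common type, a GitHub token, where a 200 from the /user endpoint means the leaked credential is still live; the endpoint choice is an assumption to adapt per secret type.

```python
import urllib.error
import urllib.request

def github_token_active(token: str) -> bool:
    """Return True if the leaked token still authenticates: a 200
    from /user means live, a 401 means revoked or expired."""
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}",
                 "User-Agent": "secret-verification-check"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as exc:
        if exc.code == 401:
            return False
        raise
```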
Pentest, bug bounty, and disclosure coverage in detail
Pentest, bug bounty, and coordinated disclosure cover the human-judgement classes that automated scanning structurally cannot reach. They are not optional add-ons to the automated stack; they are the only categories that cover that slice of the weakness universe at all.10,11
Penetration testing
Scoped, time-boxed, methodology-anchored coverage of business-logic weaknesses (CWE-840, CWE-841), chained-vulnerability flows (combinations no single scanner sees), insecure-design classes (CWE-1008, CWE-1018, CWE-1019), and context-specific exploitation reasoning. Coverage is bounded by scope, time, and tester focus.
Bug bounty
Continuous, broader, less methodologically constrained discovery surface. Tends to surface novel chains and edge-case business-logic weaknesses faster than scoped pentests. Triage discipline matters for canonical-record assignment because the same class may surface in multiple disclosures from multiple researchers.
Coordinated disclosure
Covers the long-tail residual that neither automated scanning nor scoped human testing reaches. The vulnerability disclosure programme guide covers the operating shape that turns disclosure from incident handling into a coverage-axis contributor.
ASM and EASM cover the asset axis, not the weakness axis
Attack surface management is the only category in the stack whose coverage gap is a missing asset rather than a missing weakness class. ASM expands the asset surface that the other tools then inspect: it finds certificates, subdomains, exposed administrative interfaces, forgotten test environments, and third-party services pointing at organisational domains.
The coverage matrix has to register ASM along the asset axis (which assets are in scope) rather than along the weakness-class axis (what is wrong with each asset). Programmes that report ASM coverage as though it were weakness-class coverage end up with a longer asset list and the same silent-gap pattern as before, because no other tool was added to inspect the new assets.
The disciplined operating model is that ASM-discovered assets feed the rest of the stack: SAST and SCA for source repositories ASM surfaces, container scanning for images ASM surfaces, DAST for endpoints ASM surfaces, pentest for the residual. Programmes that close the discovery-to-inspection loop usually have higher reported finding counts and lower aged-queue debt than programmes that report ASM findings without cross-routing them into the inspection categories.
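A sketch of the discovery-to-inspection routing the loop implies; the asset-type labels and category assignments are assumptions. The point is that an unmapped asset type surfaces as a coverage decision rather than defaulting to silence.

```python
# Illustrative routing table, not a standard: which inspection
# categories a newly discovered asset type feeds.
ROUTES = {
    "source-repository": ["sast", "sca", "secrets"],
    "container-image":   ["container", "sca"],
    "http-endpoint":     ["dast"],
    "admin-interface":   ["dast", "pentest"],
}

def route_discovered_asset(asset_type: str) -> list[str]:
    # An unmapped asset type is surfaced as an explicit coverage
    # decision rather than silently dropped: the new asset forces
    # the matrix question instead of a coverage assumption.
    return ROUTES.get(asset_type, ["coverage-decision-required"])
```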
Six gap patterns that recur across enterprise stacks
Six gap patterns recur in coverage matrices built by enterprise programmes. Each pattern is a class family that no automated tool covers in standard configurations. Programmes that name these six gap patterns in the matrix surface the silent gaps that programme-level coverage reporting otherwise hides.
1. Insecure-design classes
CWE-1008 architectural concepts and the related insecure-design family sit between SAST (which does not see design) and DAST (which sees only the implemented surface). Usually only covered when pentest is in scope and the methodology includes design review.
2. Authorisation chains across endpoints
CWE-285, CWE-862, and CWE-863 are partially covered by DAST when the test harness models role transitions across the endpoint set, and almost never covered when DAST runs unauthenticated or against a single role. Pentest is the usual safety net.
3. Authenticated-state injection
Stored XSS, second-order SQL injection (CWE-89 with CWE-471), and cross-request injection chains sit between unauthenticated DAST and SAST that cannot trace cross-request flows. Authenticated DAST plus pentest covers the gap when both are in scope.
4. Race conditions
CWE-362 and CWE-364 are visible to neither SAST nor DAST in standard configurations. Specialised tooling, pentest, or design review covers the gap. Programmes that ship financial or stateful workflows without race-condition coverage are usually carrying silent risk.
5. Cryptographic protocol weaknesses
CWE-310 family weaknesses below the API surface are visible to SAST when the API is misused but invisible when the misuse is at the protocol level. Cryptographic review or pentest with crypto focus covers the gap.
6. Supply-chain weaknesses below the manifest
Build-time injection (CWE-1357 and related supply-chain classes) is visible to SCA only when the manifest accurately reflects the build, which transitive resolution and runtime-installed dependencies often break. Reproducible builds, signed artefacts, and SBOM lineage discipline cover the gap.
Programmes that report stack coverage as the union of installed tools without naming these six gap patterns are reporting coverage they do not actually have. The matrix-level discipline names the six and assigns coverage responsibility (or accepts the gap) per asset type rather than treating them as edge-case footnotes.2,3,4,7
Building a defensible coverage matrix
A defensible coverage matrix has two axes: weakness class along the rows (CWE classes grouped by family rather than per-CWE because per-CWE rows do not survive maintenance) and tool category along the columns. Each cell records one of four states.
| State | Definition |
|---|---|
| Primary coverage | This tool category is the canonical record for this class on this asset type. Findings from other tools defer to this canonical record. |
| Secondary coverage | This tool category sometimes catches this class but is not the canonical record. Findings here are deduplicated against the primary record. |
| Out of scope | This tool category does not inspect this class on this asset type. The cell is empty by design rather than by accident. |
| Silent gap | No tool category in the stack covers this class on this asset type. The gap is named, not hidden. |
- Build the matrix per asset type (web app, API, mobile, container, IaC, identity surface) rather than as one global matrix.
- Group rows by CWE family rather than per-CWE so the matrix survives maintenance cycles.
- Mark every cell with one of the four states; an unmarked cell is a hidden silent gap.
- Anchor the column set to the categories the stack actually has, not the categories on the procurement wishlist.
- Review the matrix when a tool is added, removed, or repositioned, not only at procurement cycles.
- Pair each silent-gap cell with the category that would close it (pentest, bug bounty, specialised tool) and record the procurement decision against it.
The matrix is most useful when it is concrete enough to argue from in a procurement conversation. Programmes that publish a list of CWE families rather than a per-asset-type matrix end up arguing for tool categories from anecdote rather than from a structured read. Programmes that publish the per-asset matrix get the budget conversation onto the same evidence as the audit conversation.5,8,16
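A minimal sketch of the matrix as a data structure, with the four states as an enumeration and the unmarked-cell check the checklist above requires; the category labels and family names are illustrative, and note the authenticated and unauthenticated DAST columns registered separately as the DAST section argues.

```python
from enum import Enum

class Cell(Enum):
    PRIMARY = "primary"            # canonical record for the class on this asset type
    SECONDARY = "secondary"        # sometimes caught here; deduplicated against the primary
    OUT_OF_SCOPE = "out-of-scope"  # empty by design
    SILENT_GAP = "silent-gap"      # named, not hidden

CATEGORIES = ["sast", "sca", "dast-authn", "dast-unauthn", "container",
              "iac", "secrets", "asm", "pentest", "bounty"]

# One matrix per asset type: rows are CWE families, columns categories.
Matrix = dict[str, dict[str, Cell]]

def unmarked_cells(matrix: Matrix) -> list[tuple[str, str]]:
    """Every (family, category) pair must carry one of the four
    states; an unmarked cell is a hidden silent gap."""
    return [(family, category)
            for family, row in matrix.items()
            for category in CATEGORIES
            if category not in row]
```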
How the coverage matrix reads against compliance frameworks
Compliance frameworks read coverage as a programme-discipline question rather than as a list of tools. The matrix is the artefact that makes the framework reads defensible because it surfaces both the coverage claim and the silent-gap acceptance on the same page.
| Framework | Where it reads coverage |
|---|---|
| PCI DSS v4.0 | Requirement 6 expects secure software development with vulnerability identification (6.3); Requirement 11 expects vulnerability scans and penetration tests (11.3, 11.4). The matrix shows which scanner-and-pentest combination claims primary coverage on each class against scoped assets.13 |
| ISO 27001:2022 | Annex A 8.8 expects documented technical vulnerability management; A 8.25 to A 8.30 expect secure development discipline. The matrix shows the asset-and-class coverage that A 8.8 audits read.14 |
| SOC 2 (TSC 2017) | CC7.1 expects ongoing detection across the audit observation period. The matrix is the artefact that shows whether detection is a stack-wide capability or a single-tool claim.15 |
| NIST SP 800-53 Rev. 5 | RA-5 (Vulnerability Monitoring and Scanning) and SA-11 (Developer Security Testing) define the control surface. The matrix maps the controls to the tool categories that satisfy them on each asset type.9 |
| NIST SSDF (SP 800-218) | PW.7 (Review and analyse human-readable code), PW.8 (Test executable code), and RV.1 (Identify and confirm vulnerabilities on an ongoing basis) read the matrix as the operating evidence of the practices.8 |
| NIST CSF 2.0 | DE.CM (Continuous Monitoring) and ID.RA (Risk Assessment) read the matrix as the asset-and-class evidence behind the programme claim of ongoing detection.20 |
The framework reads usually focus on whether detection happens at the cadence the framework expects, not on which tool runs. The matrix-level question (which classes are covered by which tool category on which asset type) feeds the framework-level question (whether the programme demonstrates ongoing detection of relevant classes). Programmes that maintain the matrix have the framework answer ready; programmes that do not maintain the matrix reconstruct the picture during audit week.
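The framework read is then a query over the same structure. A sketch with an assumed, not official, control-to-class mapping: each control asks whether any category claims primary coverage for the classes it reads.

```python
# Assumed control-to-class mapping for illustration only; the real
# crosswalk is built from the framework text and the programme scope.
CONTROL_CLASSES = {
    "PCI-DSS-6.3":   ["injection", "broken-authn"],
    "ISO27001-A8.8": ["known-vulnerable-dependency"],
    "NIST-RA-5":     ["injection", "known-vulnerable-dependency",
                      "security-misconfiguration"],
}

def control_covered(matrix: dict[str, dict[str, str]],
                    control: str) -> dict[str, bool]:
    """For each class a control reads, report whether any tool
    category claims primary coverage. matrix shape:
    {cwe_family: {tool_category: state}} with state one of
    "primary" / "secondary" / "out-of-scope" / "silent-gap"."""
    return {
        family: "primary" in matrix.get(family, {}).values()
        for family in CONTROL_CLASSES[control]
    }
```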
The procurement read of the matrix
The single largest procurement effect of the coverage matrix is that it changes the procurement question from "which tool in this category is best" to "which classes are silent in our stack today and which tool category closes them". Programmes that procure tools without the matrix tend to add one of each category and end up with significant secondary-coverage overlap and silent gaps that no tool addresses. Programmes that procure tools from the matrix start with the silent-gap classes.
- Procurement starts at the silent gaps in the matrix, not at the analyst-quadrant rankings of tools by category.
- Two tools in the same category usually do less for the silent-gap question than one tool in a category the stack does not yet have.
- Authenticated-DAST procurement is sometimes more useful than a second SAST because the gap pattern (authorisation chains across endpoints) is in the unauthenticated-DAST blind spot.
- Pentest and bug bounty are not optional add-ons; they are the only categories covering the human-judgement classes the matrix names as silent gaps in the automated stack.
- Tool consolidation should retire the secondary-coverage tool, not the primary-coverage tool. Programmes that consolidate by buying the cheapest tool in the category lose primary-coverage clarity.
- Procurement decisions get recorded against the matrix cell rather than against the tool catalogue, so a tool change is a matrix update rather than a documentation rewrite.
The procurement framing the matrix supports is covered in the vulnerability management platform RFP template and the security tool consolidation workflow. The matrix gives the consolidation conversation an artefact rather than a debate.
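The procurement read in the bullet list above reduces to two queries over a per-asset-type matrix: the silent-gap classes (procure here first) and the multiply-covered classes (consolidate here, retiring the secondary). A sketch with plain-string states:

```python
def procurement_read(matrix: dict[str, dict[str, str]]):
    """matrix shape: {cwe_family: {tool_category: state}} with state
    one of "primary" / "secondary" / "out-of-scope" / "silent-gap".
    Returns the classes to procure against and to consolidate against."""
    gaps, overlaps = [], []
    for family, row in matrix.items():
        covering = [s for s in row.values() if s in ("primary", "secondary")]
        if not covering:
            gaps.append(family)      # no tool inspects the class: procure here
        elif len(covering) >= 2:
            overlaps.append(family)  # duplicate signal: retire a secondary, never the primary
    return gaps, overlaps
```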
How the coverage matrix reads against the maturity model
Coverage-matrix discipline reads against the discovery dimension of the vulnerability management maturity model. Programmes at Level 2 (Repeatable) have a list of installed tools without a matrix. Programmes at Level 3 (Defined) have a matrix but only update it at procurement cycles. Programmes at Level 4 (Managed) update the matrix when the stack changes, surface the six silent-gap patterns explicitly, and route findings to canonical records per cell. Programmes at Level 5 (Optimising) tie the matrix to the asset inventory so a new asset type triggers a coverage decision rather than a coverage assumption.
The full grid is laid out in the vulnerability management maturity model research. The matrix is one of the load-bearing artefacts at the discovery-and-triage dimensions; the security debt research covers what happens when the silent gaps go uncovered for long enough that the inflow they were not blocking accumulates as residual risk on the four-class debt ledger.
How the engagement record carries the matrix
Coverage-matrix accuracy improves when the matrix lives next to the engagement record where the operational work happens, rather than in a static document that drifts from operational reality between procurement cycles. The platform does not maintain the coverage matrix on the programme's behalf, but it does make the canonical-record-per-class rule operational rather than aspirational.
SecPortal records every finding against a versioned engagement record through findings management. The finding record captures which tool produced the finding, the CVSS 3.1 vector, the severity band, the affected asset, and the evidence trail. Findings from the platform-managed scanners and findings imported from external tools (pentest reports, bug bounty disclosures, third-party scanner output) live on the same record so the canonical-record rule per weakness class is applied at finding creation rather than reconstructed during audit.23
The platform supports code scanning via repository connections (GitHub, GitLab, Bitbucket, Azure DevOps OAuth) running Semgrep SAST and SCA, authenticated scanning with AES-256-GCM encrypted credential storage, and external scanning of the externally-reachable surface.24,25,26 The matrix columns for SAST, SCA, authenticated DAST, and external DAST are populated by the platform-managed scanners; the matrix columns for pentest, bug bounty, and disclosure are populated by external findings imported as findings on the same engagement record.
The activity log captures the timestamped chain of state changes by user, including the canonical-record assignment per finding, so the matrix-level decision is auditable rather than a verbal convention.27 Compliance tracking maps findings to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export, so the matrix-reads-against-frameworks question is one query against the same record.28
The wider workflow context lives on the scanner result triage workflow, the scanner to ticket handoff governance workflow, and the SDLC vulnerability handoff workflow. Each of those workflows assumes a canonical-record rule per finding, which is what the coverage matrix establishes at the catalogue level before findings start arriving at the queue.
For internal AppSec, vulnerability management, and security engineering teams
Internal teams carry the coverage-matrix discipline between procurement cycles. The pattern that survives tool rotation, vendor rebrand, and stack consolidation is to operate the matrix per asset type, mark the silent gaps explicitly, and tie procurement decisions to matrix cells rather than to category catalogues.
- Build the matrix per asset type (web app, API, mobile, container, IaC, identity surface) and review it when the stack changes.
- Group rows by CWE family rather than per-CWE so the matrix survives maintenance cycles.
- Name the six silent-gap patterns explicitly so the matrix surfaces what no automated tool reaches.
- Mark every cell with primary, secondary, out-of-scope, or silent-gap so unmarked cells do not become hidden gaps.
- Tie procurement to silent-gap closure rather than to category-catalogue completeness.
- Treat pentest, bug bounty, and disclosure as the categories that cover the human-judgement classes; they are primary coverage for the classes they cover, not supplementary scanning.
For AppSec teams, product security teams, vulnerability management teams, security engineering teams, and cloud security teams, the operating commitment is to keep the matrix accurate from the live record at any moment in the procurement cycle, not only at vendor renewal week. The matrix is the artefact that makes the scanner-stack discussion concrete; without it, every category conversation rebuilds the picture from scratch.
For security leadership, GRC, and audit committees
Security leaders, GRC owners, and audit committees read coverage overlap through a different lens than operational teams. The leadership read is whether the stack genuinely covers the weakness universe the programme claims to cover, not only whether the stack has tools in every category. A programme that reports nine tools and a list of categories without a matrix is reporting capability rather than coverage; a programme that reports the matrix with primary, secondary, out-of-scope, and silent-gap cells is reporting coverage.
- Track silent-gap cells as separate residual-risk lines rather than as footnotes in the tool list.
- Read tool consolidation decisions through the matrix; consolidation that retires a primary-coverage tool is a coverage regression, not a cost saving.
- Surface the six silent-gap patterns alongside the framework-mapped read so the audit committee sees what the stack does not cover next to what it does.
- Tie procurement budget to silent-gap closure with the matrix as the artefact that justifies the request.
- Read pentest and bug bounty as the categories that cover human-judgement classes; budgets that frame them as supplementary to scanning are usually under-resourcing the human-judgement-class coverage.
The leadership-side platform discipline that supports this is covered on SecPortal for CISOs and security leaders, SecPortal for security operations leaders, and SecPortal for GRC and compliance teams. The coverage matrix and the four-class debt ledger together give the audit committee the read of programme posture that tool lists alone do not.
The wider scaffolding the matrix sits inside is laid out in the vulnerability management maturity model research. Maintained-coverage-matrix discipline is the load-bearing distinction between Level 3 and Level 4 on the discovery-and-triage dimensions. Programmes that operate the matrix at Level 4 or Level 5 report coverage as a structured artefact; programmes at Level 2 or Level 3 report a tool list and rebuild the coverage picture every audit cycle.29
Conclusion
Security tool coverage overlap is a class-level question read against a per-asset-type matrix, not a tool-list question read against a category catalogue. The nine-category enterprise stack (SAST, SCA, DAST, container, IaC, secrets, ASM, pentest, bug bounty/disclosure) covers a structurally different slice of the weakness universe per category, and each category has known structural limits as well as known structural strengths. The six gap patterns (insecure design, authorisation chains, authenticated injection, race conditions, cryptographic protocol weaknesses, supply-chain weaknesses below the manifest) are the recurring silent-gap families that the matrix surfaces and that programmes without the matrix tend to leave uncovered.1,2,3,4,17,18
Treating coverage overlap as a property of the live engagement record rather than a static document is the highest-leverage discipline in the procurement-and-operations conversation. It changes the procurement question from "which tool in this category is best" to "which classes are silent in our stack today and which tool category closes them". It keeps the leadership read and the operational read on the same record. The platform a programme uses does not have to maintain the matrix; it does have to make the canonical-record-per-class rule operational rather than aspirational.
Sources
- MITRE, Common Weakness Enumeration (CWE)
- MITRE, CWE Top 25 Most Dangerous Software Weaknesses
- OWASP, Top 10 Web Application Security Risks
- OWASP, Application Security Verification Standard (ASVS)
- OWASP, Software Assurance Maturity Model (SAMM)
- OWASP, Mobile Application Security Verification Standard (MASVS)
- OWASP, API Security Top 10
- NIST, SP 800-218 Secure Software Development Framework (SSDF) v1.1
- NIST, SP 800-53 Revision 5 RA-5 Vulnerability Monitoring and Scanning
- NIST, SP 800-115 Technical Guide to Information Security Testing and Assessment
- CISA, Secure by Design Pledge
- CISA, Known Exploited Vulnerabilities Catalog
- PCI Security Standards Council, PCI DSS v4.0 Requirement 6 and Requirement 11
- ISO/IEC, ISO 27001:2022 Annex A 8.8 Management of Technical Vulnerabilities
- AICPA, SOC 2 Trust Services Criteria CC7.1 Detection of Vulnerabilities
- BSIMM, Building Security In Maturity Model
- OWASP, Source Code Analysis Tools (SAST guidance)
- OWASP, Vulnerability Scanning Tools (DAST guidance)
- CycloneDX, Software Bill of Materials specification
- NIST, Cybersecurity Framework (CSF) 2.0
- OWASP, Vulnerability Management Guide
- NCSC, Vulnerability Management Guidance
- SecPortal, Findings & Vulnerability Management
- SecPortal, Code Scanning
- SecPortal, Authenticated Scanning
- SecPortal, External Scanning
- SecPortal, Activity Log & Workspace Audit Trail
- SecPortal, Compliance Tracking
- SecPortal Research, Vulnerability Management Maturity Model
- SecPortal Research, Security Debt Economics
Run the canonical-record rule on the live engagement record
SecPortal keeps findings, tool-of-origin, CVSS vector, severity, owner, evidence, retest history, and compliance mappings on one versioned engagement record so the coverage matrix is operational rather than aspirational and the canonical-record-per-class rule is applied at finding creation.