
Common Weakness Enumeration (CWE) Explained: A Practical Guide

The Common Weakness Enumeration is a community-driven catalog of software and hardware weaknesses maintained by MITRE. CWE answers a different question from CVE, CVSS, EPSS, or KEV: not which specific vulnerability exists, not how severe a single instance is, and not how likely it is to be exploited, but what class of mistake produced the weakness in the first place. For internal AppSec teams, product security teams, and vulnerability management teams, CWE is the classification layer that lets a programme reason about patterns rather than instances. This guide explains what CWE is, how to read an entry, the CWE Top 25, where CWE sits next to CVE and CVSS in the operating queue, how AppSec uses CWE in code review, how compliance frameworks reference it, and the common failure modes when teams treat CWE as a severity score.

What CWE Actually Is

CWE is a hierarchical, community-curated dictionary of software and hardware weaknesses maintained by MITRE under sponsorship from CISA. The catalog has grown past nine hundred entries and is published under a permissive licence at the public CWE site, with stable identifiers of the form CWE-N, where N is a one- to four-digit number (CWE-79, CWE-1321). Each entry describes a class of mistake (a weakness) that, if present in code or configuration, can be exploited by an attacker. The catalog is updated on a versioned cadence; CWE 4.x is the current branch at the time of writing, and the version number is part of any defensible mapping in evidence.

The catalog is structured as a graph rather than a flat list. Entries are organised into views (such as the Research view, the Software Development view, and the Hardware Design view) and into categories that group related entries thematically, and each weakness sits at one of four abstraction levels, Pillar, Class, Base, or Variant, from most abstract to most specific. Relationships between entries are typed (ChildOf, ParentOf, PeerOf, CanPrecede), so a tool or analyst can move from an abstract pillar like CWE-693 (Protection Mechanism Failure) down through the graph to a concrete weakness like CWE-352 (Cross-Site Request Forgery). The graph is the reason CWE works as a classification layer: a finding can carry the most specific CWE the analyst can justify, and any aggregation tool can roll the data up to the parent abstraction without losing fidelity.
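The rollup mechanics are simple to sketch. The parent table below is a small hand-copied fragment of the catalog's ChildOf links (verify any mapping against the published CWE data before relying on it):

```python
# Minimal sketch: roll a specific CWE up to an abstract ancestor by
# walking ChildOf links. PARENT is a hand-copied fragment of the
# catalog graph, not an authoritative export.
PARENT = {
    "CWE-352": "CWE-345",  # CSRF ChildOf Insufficient Verification of Data Authenticity
    "CWE-345": "CWE-693",  # ... ChildOf Protection Mechanism Failure (Pillar)
    "CWE-89":  "CWE-943",  # SQL Injection ChildOf Improper Neutralization in Data Query Logic
    "CWE-943": "CWE-74",   # ... ChildOf Injection
    "CWE-74":  "CWE-707",  # ... ChildOf Improper Neutralization (Pillar)
}

def rollup(cwe_id: str) -> list[str]:
    """Walk ChildOf links from a specific entry up to its topmost ancestor."""
    chain = [cwe_id]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

print(rollup("CWE-352"))  # ['CWE-352', 'CWE-345', 'CWE-693']
```

This is the operation a dashboard performs when it aggregates Base-level findings into a Pillar-level executive view: record the most specific identifier, derive the abstract one.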

CWE is not a vulnerability list. It does not record specific incidents in specific products. It does not record severity. It does not record exploitation likelihood. It records the classes of mistakes that could produce a vulnerability if present. That distinction is the entire reason CWE exists as a separate dictionary from CVE, and treating it as another flavour of vulnerability identifier is the most common way programmes misuse it.

CWE vs CVE vs CVSS vs EPSS

The four identifiers that show up on every modern finding answer four different questions. CWE answers what kind of weakness this is. CVE answers which specific public vulnerability this is. CVSS answers how severe a successful exploit would be. EPSS answers how likely exploitation is to occur in the next thirty days. They are not interchangeable, and the audit conversation goes off the rails when a programme tries to read any one of them as a substitute for the others.

CWE: The Class of Mistake

CWE is a stable identifier for a weakness class. CWE-79 means Cross-Site Scripting (Improper Neutralization of Input During Web Page Generation), regardless of which application contains the instance. CWE-22 means Path Traversal regardless of which file system the traversal lands on. CWE identifiers travel with the finding, the test case, the secure-coding guidance, and the policy. They are also the bridge between scanner output and engineering training: SAST tools report CWE identifiers, secure-coding curricula are organised by CWE, and so the engineer reading a SAST finding can trace the path to the exact training module that covers the class of mistake.

CVE: The Specific Public Vulnerability

CVE is a stable identifier for one specific vulnerability that has been disclosed publicly under the MITRE CVE programme, assigned by a CNA (CVE Numbering Authority) to a specific product version or versions. CVE-2024-XXXXX is a single record. A CVE record almost always carries a CWE on it (as the class of weakness this CVE is an instance of) plus a description, affected products, references, and often a CVSS score. Custom application findings produced during a scan or pentest typically do not have a CVE because they are not public-product vulnerabilities. They still have a CWE, because the weakness class is real even when no public identifier has been assigned.

CVSS: The Severity Vector

CVSS measures the technical severity of a single instance. CVSS 3.1 decomposes attack vector, attack complexity, privileges required, user interaction, scope, and the confidentiality, integrity, and availability impact of a successful exploit, plus the optional Temporal and Environmental groups that re-weight the Base score for the operating context. Two findings with the same CWE can carry very different CVSS vectors because severity depends on context (network exposure, asset criticality, authentication required) that the CWE class itself does not capture. Our CVSS scoring explained guide covers the severity decomposition in full.
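As a worked example of the severity arithmetic, here is a sketch of the CVSS 3.1 Base score equations for one common vector, implementing only the scope-unchanged branch of the specification:

```python
# CVSS 3.1 Base score for AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H.
# Metric weights come from the specification; only the scope-unchanged
# branch is implemented in this sketch.
W = {"AV:N": 0.85, "AC:L": 0.77, "PR:N": 0.85, "UI:N": 0.85, "H": 0.56}

def roundup(x: float) -> float:
    """Spec-defined round-up to one decimal place."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

iss = 1 - (1 - W["H"]) * (1 - W["H"]) * (1 - W["H"])      # C:H / I:H / A:H
impact = 6.42 * iss                                         # scope unchanged
exploitability = 8.22 * W["AV:N"] * W["AC:L"] * W["PR:N"] * W["UI:N"]
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 9.8 — Critical
```

The same CWE-89 finding behind VPN-only access with high privileges required would carry a very different vector and a much lower Base score, which is exactly the context the CWE class alone cannot express.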

EPSS: The Likelihood Estimate

EPSS is a public, FIRST-maintained model that estimates the probability of a CVE being exploited in the wild over the next thirty days. EPSS is computed per-CVE rather than per-CWE, because the model trains on exploitation telemetry tied to specific public vulnerabilities. A custom finding that has no CVE has no EPSS value, but it still has a CWE class. Our EPSS score explained guide covers the likelihood axis and how it interacts with the CISA KEV catalog.

Read together, the four identifiers describe the finding fully. CWE classifies the mistake, CVE pins the specific public vulnerability if one exists, CVSS measures the severity, and EPSS or KEV captures the likelihood. Programmes that record only CVE and CVSS lose the cross-finding pattern view that CWE provides; programmes that record only CWE lose the severity and likelihood lenses that drive prioritisation. The defensible record carries all four.
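A finding record carrying all four layers might look like the following sketch; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    """One finding carrying all four identifier layers (illustrative schema)."""
    title: str
    cwe: str                      # weakness class — always present
    cvss_vector: str              # severity — always present
    cvss_score: float
    cve: Optional[str] = None     # only public-product vulnerabilities have one
    epss: Optional[float] = None  # per-CVE only, so None for custom findings

# A custom application finding: no CVE, therefore no EPSS,
# but the CWE class and the CVSS severity still apply.
sqli = Finding(
    title="SQL injection in /search",
    cwe="CWE-89",
    cvss_vector="CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    cvss_score=9.8,
)
assert sqli.cve is None and sqli.epss is None
```

Making `cwe` and the CVSS fields mandatory while `cve` and `epss` stay optional encodes the distinction in the data model: every finding has a class and a severity, but only public-product vulnerabilities have the public identifiers.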

The CWE Top 25 and How to Use It

MITRE publishes the CWE Top 25 Most Dangerous Software Weaknesses, an annual ranking derived from the public CVE corpus over a trailing data window: each CVE is mapped to its CWE, and the resulting frequency and severity values are combined by a scoring formula MITRE publishes alongside the list. The 2023 edition kept Out-of-Bounds Write (CWE-787) at the top, with Cross-Site Scripting (CWE-79), SQL Injection (CWE-89), Use After Free (CWE-416), and OS Command Injection (CWE-78) rounding out the top five. The list shifts year over year as the CVE corpus shifts, but the top quartile is stable enough across recent editions that an enterprise programme can use it as a planning anchor.

The Top 25 is not a policy. It is a population-level signal about which weakness classes drove the most damaging public CVEs in the trailing window. Programmes use it for three planning conversations: the secure-coding training curriculum (which classes deserve the most engineering attention), the scanner coverage check (which classes the SAST/SCA/DAST stack should detect reliably), and the AppSec maturity argument (which classes the secure-by-default platform patterns should make impossible by construction). None of those conversations work if the Top 25 is read as a per-finding severity signal. A specific CWE-79 instance with no observed exploitation, behind authentication, on an internal admin tool may sit on a far longer SLA than a CWE-22 path traversal on an internet-facing service even though XSS sits higher on the Top 25.

For internal teams, the practical use is: anchor the secure-coding training curriculum and the scanner coverage matrix to the Top 25, then run per-finding prioritisation on CVSS plus EPSS plus KEV plus asset criticality. The Top 25 shapes the programme; CVSS and EPSS shape the queue. Mixing the two layers is the most common Top 25 misuse, and it is the one auditors notice fastest because the queue stops matching the policy.
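The two-layer split can be made concrete in code: the queue ordering below reads only per-finding signals, and the Top 25 never appears in it. The weighting is illustrative, not a standard:

```python
# Illustrative per-finding queue ordering: KEV membership first, then an
# EPSS-weighted CVSS value scaled by asset criticality. The Top 25 never
# appears here — it shapes training and scanner coverage, not this sort.
findings = [
    {"id": "F-1", "cwe": "CWE-79", "cvss": 6.1, "epss": 0.02, "kev": False, "crit": 0.4},
    {"id": "F-2", "cwe": "CWE-22", "cvss": 7.5, "epss": 0.64, "kev": True,  "crit": 1.0},
    {"id": "F-3", "cwe": "CWE-89", "cvss": 9.8, "epss": 0.10, "kev": False, "crit": 0.7},
]

def queue_key(f: dict) -> tuple:
    # KEV membership trumps everything; then a blended risk value.
    return (f["kev"], f["cvss"] * f["epss"] * f["crit"])

queue = sorted(findings, key=queue_key, reverse=True)
print([f["id"] for f in queue])  # ['F-2', 'F-3', 'F-1']
```

The CWE-22 finding on the critical, actively exploited asset lands first even though CWE-79 sits higher on the Top 25, which is the separation of layers the policy should state explicitly.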

How to Read a CWE Entry

A CWE entry is structured, and reading a few entries cold turns the catalog from a wall of identifiers into a working reference. The fields below are the ones an operator or developer reads first.

Description and Extended Description

The Description is the one-paragraph definition of the weakness in clean prose. The Extended Description fills in scope, common contexts, and the boundary cases. Together they answer the question: what mistake does this entry actually describe, and where is the line between this entry and its peers? When two CWEs look like they could both apply, the Extended Description is usually where the disambiguation lives.

Relationships

The Relationships section lists the typed connections to other entries (ChildOf, ParentOf, PeerOf, CanPrecede). These are the navigation surface for the catalog graph. When a finding feels like it belongs to multiple weakness classes, the Relationships section is where you find the abstract parent that covers both. Programmes that need to roll up findings for a leadership read often map to the parent (Pillar or Class) rather than the most specific Base weakness, because the parent aggregates without losing meaning.

Common Consequences

The Common Consequences section names the categories of impact (Confidentiality, Integrity, Availability, Authentication, Authorization, Accountability, Access Control, Other) and what each category typically looks like for this weakness. This is the field that makes the case for the CVSS Impact metrics in a defensible way, because the consequences are documented at the class level rather than improvised at the finding level.

Demonstrative Examples

The Demonstrative Examples section shows code fragments (vulnerable plus fixed) in one or more languages. These are the engineering-grade artefacts the CWE catalog gives back to developers, and they are the reason CWE links inside SAST findings work as a training surface. If a developer reads only one part of a CWE entry, this is usually it.

Potential Mitigations

The Potential Mitigations section names the engineering practices that prevent the weakness, often grouped by phase (Architecture and Design, Implementation, Operation). This is the source the AppSec function pulls from when adding remediation guidance to a finding template, and the source the secure-coding curriculum pulls from when designing a module on the weakness.

Observed Examples

The Observed Examples section links to specific CVEs that have been mapped to this CWE. This is the evidence that the weakness class shows up in real software, and the link path back to the per-CVE view (severity, exploitation evidence, affected products) for any specific instance.

CWE in AppSec and Code Review

CWE is the lingua franca of AppSec tooling. Every modern SAST engine emits CWE identifiers on findings, most SCA tools carry the CWE on the underlying CVE, and modern DAST and IAST output is increasingly CWE-tagged. The reason matters: a SAST finding tagged with CWE-89 (SQL Injection) gives the engineer a direct path to the secure-coding training module, the OWASP guidance, the policy on how to handle the class, and any historical findings of the same class in the same codebase. Without the CWE tag, every SAST finding lands as a wall of detector-specific text.

For an AppSec team running code review across multiple engineering teams, the CWE distribution across the finding inventory is the most useful posture metric the SAST stack produces. A spike in CWE-89 instances in one service signals a missing parameterised-query pattern at the architecture layer. A spike in CWE-78 (OS Command Injection) signals an unsafe shell-out pattern that needs a platform-level wrapper. A spike in CWE-22 (Path Traversal) signals a missing canonicalisation helper. The pattern only shows up when CWE is recorded on every finding and the dashboard rolls up by class.
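The rollup behind that dashboard is a few lines of counting over the finding inventory. A hypothetical inventory shape, scoped by service:

```python
from collections import Counter

# Hypothetical per-service finding inventory; each finding carries its CWE.
inventory = [
    {"service": "billing", "cwe": "CWE-89"},
    {"service": "billing", "cwe": "CWE-89"},
    {"service": "billing", "cwe": "CWE-89"},
    {"service": "billing", "cwe": "CWE-79"},
    {"service": "portal",  "cwe": "CWE-22"},
]

def cwe_distribution(findings, service=None):
    """Count findings by CWE class, optionally scoped to one service."""
    return Counter(
        f["cwe"] for f in findings if service is None or f["service"] == service
    )

dist = cwe_distribution(inventory, service="billing")
print(dist.most_common(1))  # [('CWE-89', 3)] — a parameterised-query gap, not three one-offs
```

The count only means something because every finding carries a CWE; one untagged source and the distribution silently under-reports the class it failed to tag.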

For internal engineering, the secure-by-default platform pattern is the durable fix. The right outcome of seeing the same CWE class repeat across services is to ship a platform helper that makes the unsafe pattern impossible (a parameterised-query wrapper, a canonicalising file path API, a templated rendering API) rather than chasing instances. SecPortal supports this pattern with findings management that records CVSS 3.1 vectors and CWE-mapped finding templates, and code scanning via repository connections that ingest SAST/SCA output (Semgrep) so the CWE distribution is visible against the same operating record AppSec runs the rest of the programme on. Pair the workflow with the secure code review checklist and the security champions program guide for the engineering-side discipline that closes CWE patterns at the source.

CWE in Vulnerability Management

For the vulnerability management function, CWE is the cross-finding lens. The day-to-day queue runs on CVSS, EPSS, KEV, asset criticality, and exposure. The weekly and monthly reads run on CWE distribution, because that view is what surfaces the systemic patterns the per-finding queue cannot. A queue that closes thirty findings a week is healthy or unhealthy depending on whether those thirty are spread across many CWE classes (broad maintenance) or clustered into one CWE class (a single root cause that needs an upstream fix).
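One simple way to quantify "spread versus clustered" is the share of the period's closures held by the single largest CWE class. The threshold below is an illustrative choice, not a standard:

```python
from collections import Counter

def top_class_share(closed_cwes: list[str]) -> float:
    """Fraction of closures belonging to the most common CWE class."""
    counts = Counter(closed_cwes)
    return max(counts.values()) / len(closed_cwes)

# Thirty closures spread across classes versus clustered into one class.
spread    = ["CWE-79", "CWE-89", "CWE-22", "CWE-352", "CWE-78"] * 6
clustered = ["CWE-89"] * 24 + ["CWE-79"] * 6

for week in (spread, clustered):
    share = top_class_share(week)
    # Illustrative threshold: over half the closures in one class suggests
    # a single root cause that needs an upstream platform fix.
    print(round(share, 2), "clustered" if share > 0.5 else "broad maintenance")
```

A weekly read of this single number is often enough to trigger the platform-pattern conversation before the queue fills with the same class again.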

The operating record matters. Findings imported from external scanners, from authenticated internal scans, from code scans, and from pentest engagements all need to carry a CWE field, all need to normalise to the same CWE catalog version, and all need to roll up to the same dashboard. If different sources report against different CWE versions or different mapping conventions, the cross-finding view stops being trustworthy. A platform that records CWE alongside CVE and CVSS on every finding (whether the finding originated from external scanning, authenticated scanning, or code scanning) is the prerequisite for the CWE-distribution view to mean anything. Pair the discipline with the vulnerability prioritisation workflow and the vulnerability backlog management workflow so the CWE patterns feed the queue-shaping decisions and the SLA calibration.
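At the import boundary that discipline reduces to a validation step. A sketch, with a hypothetical record shape and an example pin value:

```python
PINNED_CWE_VERSION = "4.13"  # example pin; record the real value in policy

def validate_import(finding: dict) -> dict:
    """Reject findings arriving without a CWE or against the wrong catalog version."""
    if not finding.get("cwe"):
        raise ValueError(f"{finding.get('id', '?')}: missing CWE — refuse the import")
    if finding.get("cwe_version") != PINNED_CWE_VERSION:
        raise ValueError(
            f"{finding['id']}: mapped against CWE {finding.get('cwe_version')}, "
            f"programme is pinned to {PINNED_CWE_VERSION}"
        )
    return finding

ok = validate_import({"id": "F-9", "cwe": "CWE-22", "cwe_version": "4.13"})
try:
    validate_import({"id": "F-10", "cwe_version": "4.13"})  # detector rule ID only, no CWE
except ValueError as e:
    print(e)  # F-10: missing CWE — refuse the import
```

Running the check at ingest rather than at reporting time is the difference between a trustworthy cross-source view and a quarterly reconciliation project.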

For scanner triage specifically, CWE plus CVSS plus evidence quality is the standard validation triple. A scanner finding without a CWE on the record cannot be triaged consistently across analysts because two analysts reading the same finding may classify it differently. A scanner finding with a clear CWE, a clear CVSS vector, and clear evidence is reproducible. Our scanner result triage workflow covers the validation discipline this depends on, and the false positives reference covers the inverse case where a CWE-tagged scanner finding turns out not to be exploitable in the specific runtime context.

Where CWE Maps to Compliance Frameworks

None of the major compliance frameworks make CWE itself a control, but several of them either expect CWE-aware tooling or reference the CWE catalog directly. Make the mapping explicit in the policy and CWE earns its place in the evidence pack.

OWASP ASVS and SAMM

OWASP ASVS organises requirements that map directly to CWE classes (input validation, output encoding, authentication, session management). SAMM uses CWE distribution as one of the inputs into the Verification and Implementation maturity practices. Our ASVS framework page and SAMM framework page cover the wider context, and the OWASP framework page covers the broader Top 10 alignment.

NIST SP 800-53 RA-5 and SI-2

RA-5 (Vulnerability Monitoring and Scanning) expects CWE-aware tools and CWE-tagged scanner output as part of the documented vulnerability identification process. SI-2 (Flaw Remediation) ties the remediation timeline to the severity and class of the weakness. Our NIST SP 800-53 framework page covers the catalog of relevant controls.

NIST SSDF (SP 800-218)

SSDF practices PW.7 (Review and Analyze Human-Readable Code) and PW.8 (Test Executable Code) expect the weaknesses found to be classified, which is exactly what CWE supplies. The CISA Secure Software Development Attestation references SSDF directly, so a CWE-tagged finding inventory is part of the evidence pack. Our NIST SSDF implementation guide walks the practice mapping in full.

CISA Secure by Design

The CISA Secure by Design pledge includes a goal on CVE completeness, and CISA specifies that complete CVE records must carry a CWE field. Manufacturer programmes operating under the SbD pledge use CWE distribution as one of the public-facing maturity signals. Our CISA Secure by Design framework page covers the principles and the pledge mechanics.

PCI DSS v4.0 Requirement 6.2

Requirement 6.2.3 expects bespoke and custom software to be reviewed prior to release, and Requirement 6.2.4 expects engineering techniques that prevent or mitigate common software attacks, with PCI explicitly referencing the OWASP Top 10 and the CWE/SANS Top 25 as candidate references. Our PCI DSS framework page covers the assessment context.

ISO 27001 Annex A.8.28 and A.8.29

A.8.28 (Secure coding) and A.8.29 (Security testing in development and acceptance) expect a documented secure-coding practice and a documented testing practice. CWE-mapped findings are the operating evidence both controls produce. Our ISO 27001 framework page covers the wider control set.

Common CWE Failure Modes

Treating CWE as a Severity Score

Programmes that read CWE-79 as inherently more severe than CWE-22 because XSS sits higher on the Top 25 lose every per-finding context that drives real severity. The fix is to keep CWE as the class layer and CVSS as the severity layer, and to write the policy so the two layers do not get conflated.

Mapping at the Wrong Abstraction Level

Programmes that map every finding to a Pillar (CWE-693, CWE-707) lose the per-class fidelity that makes the cross-finding view useful. Programmes that try to map every finding to the most specific Variant lose the rollup the leadership read needs. The fix is to map at the Base level by default, with Pillar mapping reserved for the executive-summary view.

Mixed CWE Catalog Versions Across Sources

Programmes that ingest one scanner against CWE 4.10 and another against CWE 4.13 produce a cross-source view that drifts at every entry that was modified between versions. The fix is to pin the catalog version on the operating record and to upgrade in coordinated steps with a delta review, rather than letting each source pull independently.

Accepting Detector-Specific Identifiers Instead of CWE

Programmes that record a SAST finding under the detector-specific rule identifier (without the CWE) lose the cross-tool view at the import step. The fix is to require the CWE field on every imported finding, and to refuse imports that arrive without it. Detector-specific identifiers can stay as metadata; CWE is the canonical class.

Using the Top 25 as a Per-Finding SLA

Programmes that attach an SLA to a CWE class purely because it sits in the Top 25 produce queues that ignore exposure, asset criticality, and exploitation evidence. The fix is to keep the Top 25 as the population-level planning anchor and to run the per-finding SLA on CVSS plus EPSS plus KEV plus context. Pair the SLA discipline with our vulnerability SLA management workflow so the per-finding window is auditable.

Custom Findings Without Any CWE

Pentest and manual-review findings that land in the inventory without a CWE break every aggregation that reads CWE for rollup. The fix is to require a CWE on every finding at creation time, even when the analyst is unsure of the most specific Base; mapping to a Class entry with a comment is better than leaving the field empty.

CWE in Pentest and Vulnerability Documentation

Pentest reports, vulnerability writeups, and remediation playbooks reference CWE alongside CVSS for the same reason scanner output does: a stable class identifier travels across teams and across engagements. Our vulnerability encyclopedia carries the CWE, the CVSS score and vector, and remediation guidance for each entry, organised so the engineer reading the writeup can move from class to instance to fix in a single page.

Examples worth scanning to see the pattern in operation: SQL injection (CWE-89), the canonical example of CWE-tagged remediation guidance; cross-site scripting (CWE-79), the most common AppSec finding by volume; and server-side request forgery (CWE-918), the cloud-native escalation class. Each entry sits inside the broader vulnerability encyclopedia so the CWE class is browsable next to the OWASP Top 10 framing covered in our OWASP Top 10 explained guide.

The same discipline applies to manual pentest findings. A pentester writing up an authenticated finding that does not match a public CVE still tags the writeup with a CWE so the AppSec function can roll up the engagement findings against the rest of the inventory. Our pentest report writing guide covers the field-level discipline this rollup depends on.

Where CWE Sits in the Wider Internal Programme

CWE is one classification layer inside a wider internal security organisation. It sits next to the engineering-side AppSec function, the daily operational discipline of the VM team, the GRC owner's evidence cadence, and the leadership reporting cadence the CISO produces. Different audiences read the same CWE-tagged data differently.

For the AppSec function that owns the secure-coding curriculum and the SAST stack, SecPortal for AppSec teams covers how CWE distribution feeds the architecture conversation. For the product security function that owns the per-release security posture, SecPortal for product security teams covers how CWE patterns feed the platform-pattern decisions. For the vulnerability management function that runs the find-track-fix-verify queue, SecPortal for vulnerability management teams covers the per-finding lifecycle. For the CISO sponsoring the programme, SecPortal for CISOs covers how CWE-distribution outcomes roll into leadership reporting. For the GRC owner translating CWE state into evidence, SecPortal for GRC and compliance teams covers the audit-side discipline.

Pair the CWE programme with adjacent enterprise reading. The vulnerability prioritisation framework guide covers the multi-signal prioritisation theory CWE plugs into. The SAST vs SCA code scanning guide covers the CWE-emitting tooling. The threat modelling guide covers the upstream design step where CWE classes are anticipated rather than discovered. The SBOM guide and the VEX guide cover the components-and-exploitability layer that CWE-classified CVEs flow through.

Capturing Defensible CWE Audit Evidence

The audit conversation about CWE reduces to a small evidence set. Build the set as a side effect of doing the work, and the audit collapses into a query rather than a multi-team scramble.

The minimum evidence set has six artefacts. The first is the CWE field on every finding in the operating inventory, mapped at the Base level by default. The second is the dated record of the CWE catalog version pinned to the programme, so a delta review accompanies any version upgrade. The third is the timestamped lifecycle of each finding (detected, prioritised, assigned, remediated, retested, closed) with the named user who performed each transition, so the per-finding work history is reconstructible. The fourth is the CWE distribution rollup over the audit window, which lets an auditor read the programme posture in a single chart rather than a per-finding scroll. The fifth is the framework mapping (OWASP ASVS or SAMM, NIST SSDF, NIST SP 800-53, ISO 27001, PCI DSS) so the evidence pack is portable across audits. The sixth is the secure-coding training record for the CWE classes that drive the most volume, so the engineering-side response is visible.
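The six artefacts can be expressed as a completeness check the evidence query runs before any audit. The artefact names below are illustrative labels, not a standard schema:

```python
# Illustrative completeness check over the six-artefact evidence set.
REQUIRED_ARTEFACTS = [
    "per_finding_cwe",         # CWE on every inventory finding, Base level by default
    "catalog_version_record",  # dated pin of the CWE catalog version
    "finding_lifecycle",       # timestamped, user-attributed state transitions
    "cwe_distribution_rollup", # class distribution over the audit window
    "framework_mapping",       # ASVS/SAMM, SSDF, 800-53, ISO 27001, PCI DSS links
    "training_record",         # secure-coding modules for the highest-volume classes
]

def evidence_gaps(pack: dict) -> list[str]:
    """Return the artefacts missing or empty in the assembled evidence pack."""
    return [a for a in REQUIRED_ARTEFACTS if not pack.get(a)]

pack = {"per_finding_cwe": True, "catalog_version_record": "CWE 4.13, pinned 2024-03-01"}
print(evidence_gaps(pack))  # the four artefacts still to assemble
```

Run the check on a schedule rather than at audit time and the gap list is always short.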

SecPortal's findings management feature tracks each finding with a CVSS 3.1 vector, owner, evidence, and remediation status, and supports structured fields and tags so the per-finding CWE identifier can be carried alongside the severity vector. The activity log keeps the timestamped chain of state changes by user across findings, engagements, scans, documents, comments, and team changes, with plan retention of 30, 90, or 365 days. The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST and exports the evidence pack as CSV. None of those features assign CWE identifiers automatically; the CWE mapping decision is yours to make at finding creation time. What the platform provides is one record on which the CWE class, the severity vector, the owner, the lifecycle, and the framework mapping all live so the audit query reads from the same source the operator runs from.

Run a CWE-Aware Programme on a Single Record

CWE is mostly a recordkeeping problem in disguise. The catalog is public, the mapping discipline is simple, and the framework integration is well-documented. What stops most programmes from getting clean CWE evidence is that the per-finding CWE values, the lifecycle audit trail, the cross-finding distribution view, the framework mapping, and the leadership read all sit on different records, so producing the evidence pack means reconciling four or five sources at audit time. SecPortal is built around a single engagement record: findings management with CVSS 3.1 calibration and structured fields for the CWE identifier, the activity log for the timestamped chain of state changes across findings, engagements, scans, and team changes, compliance tracking with ISO 27001 / SOC 2 / Cyber Essentials / PCI DSS / NIST mappings and CSV export, code scanning via repository connections that ingest CWE-tagged SAST/SCA output, and AI-powered report generation when leadership wants the executive summary.

The CWE mapping itself stays yours to make at finding creation time; what the platform keeps on one record is the CWE value, the lifecycle, the evidence, the framework mapping, and the cross-finding distribution view, so the audit query and the operating queue read from the same source.

Run CWE-tagged vulnerability management on SecPortal

Stand up the engagement record in under two minutes. Free plan available, no credit card required.