
EPSS Score Explained: How to Use It for Vulnerability Prioritisation

The Exploit Prediction Scoring System (EPSS) is the third leg of modern vulnerability prioritisation, alongside CVSS for technical severity and the CISA KEV catalog for observed exploitation. EPSS answers a different question from either of those: how likely is a given vulnerability to be exploited in the wild over the next thirty days? For internal vulnerability management teams, AppSec functions, and GRC owners, EPSS is the signal that turns a flat queue of high-severity findings into a calibrated work list. This guide explains what EPSS is, how to read the probability and percentile values correctly, where it differs from CVSS and KEV, what threshold to set, how to ingest the daily feed, and how to wire EPSS into a defensible audit-evidence trail.

What EPSS Actually Is

EPSS is a public, machine-learning-driven model maintained by the EPSS Special Interest Group at FIRST (the Forum of Incident Response and Security Teams). The model produces, for almost every CVE in the public ecosystem, an estimate of the probability that the vulnerability will be exploited in the wild during the next thirty days. Estimates are refreshed daily. The output is published as a CSV feed under the public EPSS data URL with a permissive licence and no authentication, plus a JSON API for live lookups.

The model itself is a gradient-boosted machine-learning model (XGBoost, adopted in EPSS v2; the current v3 model was released in March 2023) over more than a thousand features that include vulnerability metadata (CVSS components, CWE category, vendor, product), publication timing, public exploit availability (Exploit-DB, Metasploit, GitHub PoC presence), discussion volume, and observed exploitation telemetry from sensor partners. The features are weighted, the model produces a probability between zero and one, and the system additionally publishes the percentile rank of each vulnerability inside the daily-refreshed distribution. EPSS does not look at your environment; it looks at the population of vulnerabilities and the population of exploitation evidence the SIG can observe.
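For a one-off lookup, the public JSON API returns the probability and percentile per CVE. The sketch below builds the documented lookup URL and parses a response body of the published shape; the sample payload and its score values are illustrative, not live data.

```python
import json
from urllib.parse import urlencode

EPSS_API = "https://api.first.org/data/v1/epss"  # public JSON API, no authentication

def epss_lookup_url(cve_id: str) -> str:
    """Build the per-CVE lookup URL for the FIRST EPSS API."""
    return f"{EPSS_API}?{urlencode({'cve': cve_id})}"

def parse_epss_response(payload: str) -> dict:
    """Extract {cve: (probability, percentile)} from an API response body.

    The API publishes scores as strings; convert to float for comparisons.
    """
    body = json.loads(payload)
    return {
        row["cve"]: (float(row["epss"]), float(row["percentile"]))
        for row in body.get("data", [])
    }

# Illustrative response in the documented shape (score values made up):
sample = '{"data": [{"cve": "CVE-2021-44228", "epss": "0.97500", "percentile": "0.99910"}]}'
scores = parse_epss_response(sample)
```

The bulk CSV feed, covered later in this guide, is the right tool for nightly reconciliation; the API is for spot checks.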

EPSS is not a severity score. It is not a measurement of how bad the vulnerability is if exploited. It is an estimate of how likely exploitation is to occur. That distinction is the entire reason EPSS exists as a separate signal from CVSS, and treating it as a different flavour of severity is the most common way programmes misuse it.

Probability vs Percentile: The Two Numbers

Every EPSS row carries two values per CVE. The probability is a number between zero and one (often displayed as a percentage). It is the raw model output and is calibrated against observed exploitation data: a probability of 0.05 means the model expects roughly a five percent chance of exploitation in the next thirty days for that CVE. The percentile is the rank of that probability inside the daily-refreshed distribution of all CVEs. A percentile of 0.95 means the CVE sits in the top five percent of the distribution by predicted exploitation likelihood; 0.10 means it sits in the bottom ten percent.

Both numbers serve a purpose, but they answer different questions. The probability is comparable across time: a CVE with a probability of 0.20 today represents the same model-estimated likelihood a CVE at 0.20 represented six months ago. The percentile is comparable across the population on a given day: a CVE in the 95th percentile today is in the top tail of today's distribution. Programmes that want a stable internal threshold should anchor to the probability, because the percentile drifts as the CVE population grows and as model recalibrations land. Programmes that want a queue-shaping signal that always surfaces the top tail of the population can anchor to the percentile, because by definition the top one or five percent always returns approximately one or five percent of the catalog.
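The relationship between the two numbers can be made concrete. EPSS defines the percentile as the proportion of all scored CVEs whose probability is less than or equal to the one in question, which the following sketch computes over a toy five-CVE population with made-up probabilities.

```python
def percentile_of(probability: float, all_probabilities: list[float]) -> float:
    """Rank one EPSS probability inside a population of probabilities.

    The percentile is the fraction of CVEs scored at or below this probability.
    """
    n = len(all_probabilities)
    return sum(1 for p in all_probabilities if p <= probability) / n

# Toy population of five CVEs (probabilities are made up for illustration).
population = [0.001, 0.004, 0.02, 0.20, 0.90]
print(percentile_of(0.20, population))  # 0.8 -- four of five CVEs score at or below 0.20
```

This is why the percentile drifts as the catalog grows: the same 0.20 probability lands at a different rank once the population around it changes.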

Most enterprise policies anchor the EPSS uplift to the percentile (typically 90 or 95) because the queue-management semantics are easier to explain to leadership and easier to audit. But the probability value matters when you are reading individual decisions, especially when the CVE sits just below or just above the threshold and the policy needs an unambiguous answer.

EPSS vs CVSS: Severity vs Likelihood

CVSS measures intrinsic technical severity. The Base score decomposes attack vector, attack complexity, privileges required, user interaction, scope, and the confidentiality, integrity, and availability impact of a successful exploit. CVSS 3.1 is what most enterprise scanners emit by default and what most compliance frameworks reference. CVSS does not estimate whether the vulnerability is being exploited or is likely to be. A CVSS Critical can sit untouched in a research codebase for years and never see real attack activity. A CVSS Medium can be the actual entry point in an active campaign.

EPSS sits on the orthogonal axis: likelihood of exploitation. The two values multiply, in operating terms. A high CVSS plus a high EPSS produces an urgent action. A high CVSS plus a very low EPSS often sits at the standard severity SLA without uplift. A low CVSS plus a high EPSS deserves a closer look, because the model is signalling that exploitation is likely even when the impact is bounded. Pure-CVSS programmes work the wrong findings first because every Critical and High looks equally urgent. Pure-EPSS programmes underweight the catastrophic findings that have not crossed the model's evidence threshold yet. The defensible approach reads both signals at the same time.
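The read-both-signals rule can be sketched as a small decision function. The thresholds here (CVSS 7.0, EPSS probability 0.20 and 0.05) are illustrative policy choices, not values mandated by either standard.

```python
def priority(cvss_base: float, epss_probability: float) -> str:
    """Read severity (CVSS) and likelihood (EPSS) together, never one for the other.

    Thresholds are illustrative; calibrate them against your own throughput.
    """
    high_severity = cvss_base >= 7.0
    high_likelihood = epss_probability >= 0.20
    low_likelihood = epss_probability < 0.05
    if high_severity and high_likelihood:
        return "urgent"                     # high impact, likely exploitation
    if high_severity and low_likelihood:
        return "standard severity SLA"      # high impact, no uplift
    if not high_severity and high_likelihood:
        return "closer look"                # bounded impact, likely exploitation
    return "standard severity SLA"
```

A pure-CVSS queue would rank the 9.8/0.01 finding above the 5.0/0.40 finding; this function treats them as different kinds of work rather than a single ordering.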

Our CVSS scoring explained guide covers the severity decomposition, including the Base, Temporal, and Environmental groups in CVSS 3.1 and how the Environmental metrics let you re-weight a finding for asset criticality without touching the underlying severity. EPSS is the natural complement on the likelihood axis. The third axis is classification: our CWE explained guide covers the weakness-class layer that travels with the finding alongside CVE, CVSS, and EPSS.

EPSS vs KEV: Prediction vs Observation

The CISA Known Exploited Vulnerabilities catalog is a curated list of vulnerabilities for which CISA has reliable evidence of in-the-wild exploitation. KEV is binary (a CVE is either on the list or not) and is conservative by design. Each KEV entry has been observed in real attack activity and has a documented remediation action. The catalog is small relative to the full CVE space, typically counted in the low thousands rather than hundreds of thousands.

EPSS is forward-looking. It estimates the probability of exploitation in the next thirty days based on features that include exploitation telemetry, but it is making a prediction rather than recording a fact. The two signals can disagree in both directions. A KEV-listed CVE may have a moderate EPSS probability if exploitation has slowed since the catalog entry was created. A non-KEV CVE may carry an EPSS percentile in the high nineties because the model sees strong predictive signal that exploitation is imminent even though CISA's evidence threshold has not been crossed. Treat KEV as a hard tier-up and EPSS as the next layer of the prioritisation, not as a substitute for KEV.

For the operating policy, the two-line rule that holds for most enterprise programmes is: KEV-listed findings always tier up to the elevated SLA, and non-KEV findings with EPSS percentile above 90 also tier up. Findings with EPSS percentile below 10 stay on the standard severity SLA unless asset criticality, exposure, or compensating-control state overrides. Our CISA KEV catalog guide covers the KEV-side discipline; this page covers the EPSS-side mechanics.
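The two-line rule above reduces to a short function. The 90th-percentile and 10th-percentile cut points are the policy values named in the text; the returned tier labels are illustrative.

```python
def sla_tier(kev_listed: bool, epss_percentile: float) -> str:
    """KEV always tiers up; EPSS is the second layer, never a KEV substitute."""
    if kev_listed:
        return "elevated"   # observed exploitation: hard tier-up
    if epss_percentile > 0.90:
        return "elevated"   # top decile of predicted exploitation likelihood
    if epss_percentile < 0.10:
        return "standard"   # bottom decile: severity SLA only
    return "standard (EPSS recorded for trend review)"
```

Note the ordering: the KEV check runs first, so a KEV-listed CVE with a low EPSS percentile still tiers up.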

Choosing a Defensible EPSS Threshold

EPSS does not come with a built-in policy threshold. The policy choice is yours, and a calibration that makes sense for one programme will be wrong for another depending on remediation throughput, asset criticality distribution, and the cost of an over-tiered queue. The point of the threshold is to partition the EPSS axis into bands the policy can act on, not to identify a single magic number that always wins.

Default Two-Band Policy

EPSS percentile above 90: tier up to the elevated SLA matching Critical/High severity. EPSS percentile below 10: standard severity SLA only, with no EPSS-driven uplift. EPSS percentile between 10 and 90: standard severity SLA, but the EPSS value is recorded on the finding for trend awareness and for the calibration review every quarter. This is the simplest defensible policy and the one most programmes start with.

Three-Band Policy for High-Volume Estates

EPSS percentile above 95 plus internet exposure: same-week SLA. EPSS percentile 90 to 95: elevated SLA matching High severity. EPSS percentile 50 to 90: track but do not uplift. EPSS percentile below 50: severity-only SLA. Three bands give a programme with a large estate room to differentiate the top 5 percent from the next 5 percent without flattening the queue.

Probability-Anchored Policy for Audit-Heavy Programmes

EPSS probability above 0.20 (20 percent estimated likelihood): tier up. EPSS probability below 0.05 (5 percent estimated likelihood): no uplift. Anchoring to probability rather than percentile makes the threshold readable across time and across model recalibrations. The trade-off is that the queue size shifts with the population: the count of probability-above-0.20 findings can grow or shrink materially when EPSS releases a model update.
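The probability-anchored bands can be expressed directly, which is part of their audit appeal: the function reads the same regardless of how the population shifts. The 0.20 and 0.05 cut points are the policy values from the text.

```python
def probability_uplift(epss_probability: float) -> str:
    """Probability-anchored bands: stable in meaning across model recalibrations."""
    if epss_probability > 0.20:
        return "tier up"            # >20% estimated 30-day exploitation likelihood
    if epss_probability < 0.05:
        return "no uplift"          # <5% estimated likelihood
    return "record and review"      # middle band: track on the finding
```

The queue-size trade-off sits outside this function: the count of findings landing in "tier up" moves with each model release, so pair it with the back-test described below.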

Calibrate Against Observed Throughput

Do not pick the threshold from a slide deck. Pull a quarter of historic findings, apply each candidate threshold, and read the queue size and the SLA-breach rate that would have resulted. The right threshold is the one your remediation throughput can clear inside the policy window without producing chronic exception backlog. Our remediation throughput research covers the closure-rate discipline this calibration depends on.
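The back-test is mechanical once the closed-finding history is in hand. A minimal sketch, assuming each historic finding reduces to an (EPSS percentile, days-to-close) pair and an illustrative 14-day elevated SLA window:

```python
def backtest(findings, candidate_thresholds, elevated_sla_days=14):
    """Replay historic findings against candidate EPSS percentile thresholds.

    `findings` is a list of (epss_percentile, days_to_close) tuples from a
    quarter of closed history. Returns {threshold: (uplifted_count, breach_rate)}
    so each candidate threshold can be matched to observed throughput.
    """
    results = {}
    for t in candidate_thresholds:
        uplifted = [days for pct, days in findings if pct >= t]
        if not uplifted:
            results[t] = (0, 0.0)
            continue
        breaches = sum(1 for days in uplifted if days > elevated_sla_days)
        results[t] = (len(uplifted), breaches / len(uplifted))
    return results

# Made-up historic sample: (percentile at detection, actual days to close).
history = [(0.99, 10), (0.97, 30), (0.92, 12), (0.60, 45), (0.30, 20)]
print(backtest(history, [0.90, 0.95]))
```

The threshold to adopt is the one whose would-have-been queue clears inside the window at your real closure rate, not the one with the tidiest number.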

Ingesting the EPSS Feed Into Your Programme

FIRST publishes the daily EPSS dataset as a gzipped CSV at a stable URL plus a JSON API for per-CVE lookups. Most enterprise programmes pull the CSV nightly, decompress it, and reconcile against the finding inventory in a scheduled job. The full feed is small enough (a few tens of megabytes) that a daily full-refresh is faster than incremental sync logic for almost every programme. The API is useful when you need a fresh value for one specific CVE, not as a substitute for the bulk feed.
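The nightly pull-and-parse step is a few lines. The feed URL below is the widely used published location but should be verified against current FIRST documentation; the demonstration runs offline against a two-row sample in the feed's shape (a leading `#` comment line with model version and score date, then a `cve,epss,percentile` header), with made-up score values.

```python
import csv
import gzip

# Widely used daily-feed location; confirm against current FIRST EPSS docs.
EPSS_FEED_URL = "https://epss.cyentia.com/epss_scores-current.csv.gz"

def parse_epss_csv(gz_bytes: bytes) -> dict:
    """Decompress and parse the daily EPSS CSV into {cve: (epss, percentile)}."""
    text = gzip.decompress(gz_bytes).decode("utf-8")
    rows = [line for line in text.splitlines() if not line.startswith("#")]
    reader = csv.DictReader(rows)
    return {r["cve"]: (float(r["epss"]), float(r["percentile"])) for r in reader}

# Offline demonstration with an illustrative two-row sample:
sample = (
    "#model_version:v2023.03.01,score_date:2024-01-01\n"
    "cve,epss,percentile\n"
    "CVE-2021-44228,0.975,0.999\n"
    "CVE-2020-0001,0.001,0.050\n"
)
scores = parse_epss_csv(gzip.compress(sample.encode()))
```

In the scheduled job, the bytes come from an HTTP GET of the feed URL instead of the in-memory sample, and the resulting dictionary drives the reconciliation against the finding inventory.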

The reconciliation step matters more than the pull. Every finding with a CVE identifier should carry the latest EPSS probability and percentile on the operating record, and the values should be refreshed on the same cadence as the feed. A finding tagged with last week's EPSS percentile leads the queue astray inside a week, because EPSS values can move materially when public PoCs land or when sensor partners observe a new exploitation campaign. The freshness of the value is part of the prioritisation quality, not a side detail. The threat-intelligence-driven prioritisation workflow covers the broader pipeline that ingests EPSS shifts alongside KEV additions, CERT advisories, vendor PSIRT bulletins, and ISAC alerts as structured signals on a single engagement record with provenance, fitness assessment, and decision register. An EPSS spike (a percentile that crosses the policy threshold mid-week because a public PoC dropped or a campaign was observed) can trigger an out-of-cycle response on the affected estate. When it does, run the work as a structured engagement using the zero-day and emergency vulnerability response workflow, so the exposure assessment, the named-owner assignment, the verified closure, and the leadership briefing read off one record rather than across email threads and a war-room channel.

Watch the same edge cases the KEV ingestion has to handle. Findings without a CVE identifier (custom application findings, configuration findings, weak header findings) cannot be matched against EPSS directly. Findings with multiple CVEs should carry the highest EPSS value among the listed CVEs, with both the value and the source CVE captured. Findings on retired CVEs (rare but possible after CVE rejection or merge events) need a translation layer rather than a silent zero. Address these in the ingestion pipeline rather than at the prioritisation step, so the per-finding EPSS field is reliable when the queue manager and the audit evidence both read it.
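The multi-CVE rule and the retired-CVE rule from the paragraph above can be handled in one matching helper. The EPSS values in the demonstration table are made up; the key behaviours are taking the maximum across listed CVEs while capturing the source CVE, and returning a sentinel rather than a silent zero when nothing matches.

```python
def epss_for_finding(cve_ids, epss_table):
    """Tag a multi-CVE finding with the highest EPSS among its listed CVEs.

    Returns (source_cve, probability, percentile) so both the value and the
    CVE it came from are recorded. CVEs absent from the feed (retired,
    rejected, or merged) are skipped; an all-miss returns None so the
    translation layer can handle it instead of a silent zero.
    """
    matched = [(cve, *epss_table[cve]) for cve in cve_ids if cve in epss_table]
    if not matched:
        return None
    return max(matched, key=lambda row: row[1])  # rank by probability

# Illustrative feed slice with made-up values:
table = {"CVE-2023-0001": (0.02, 0.40), "CVE-2023-0002": (0.50, 0.97)}
best = epss_for_finding(["CVE-2023-0001", "CVE-2023-0002"], table)
```

Running this in the ingestion pipeline, as the text recommends, means the queue manager and the audit evidence both read a field that has already absorbed the edge cases.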

EPSS-Aligned Remediation SLAs

EPSS uplift modifies the SLA; it does not replace it. The base policy still runs on severity, asset tier, and exposure. EPSS layers on top. The mechanics that work for most enterprise programmes look like this.

Standard SLA (No EPSS Uplift)

Severity-driven SLA, asset-tier-aware, exposure-aware. Critical findings on internet-facing assets run on a tighter window than internal-facing Mediums. The standard SLA is the policy floor every finding inherits before any signal-driven adjustment.

EPSS Uplift Tier

Findings with EPSS percentile at or above the policy threshold (typically 90 or 95) move up one severity tier for SLA purposes. A Medium finding with EPSS percentile 96 runs on the High SLA. A High finding with EPSS percentile 97 runs on the Critical SLA. The intent of the uplift is to surface the elevated likelihood without doubling the queue size.
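The one-tier bump is a small mapping over the severity ladder, with Critical as the ceiling. The 0.90 default threshold matches the policy values named in the text.

```python
SEVERITY_LADDER = ["Low", "Medium", "High", "Critical"]

def effective_severity(severity: str, epss_percentile: float,
                       threshold: float = 0.90) -> str:
    """Move a finding up one severity tier for SLA purposes when the EPSS
    percentile crosses the policy threshold; Critical has no tier above it."""
    if epss_percentile < threshold:
        return severity
    i = SEVERITY_LADDER.index(severity)
    return SEVERITY_LADDER[min(i + 1, len(SEVERITY_LADDER) - 1)]
```

The single-tier cap is what keeps the uplift from doubling the queue: a Medium at percentile 96 runs on the High SLA, not the Critical one.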

KEV Tier (Always Wins)

KEV-listed findings run on a fixed elevated SLA regardless of EPSS percentile. The KEV catalog is a stronger signal than EPSS for one reason: KEV records observed exploitation, while EPSS estimates future exploitation. When KEV and EPSS disagree, KEV wins. Findings that are both KEV-listed and EPSS percentile above 95 do not need a third tier; the SLA is already at the policy floor.

Asset Tier and Exposure as Multipliers

Tier-zero or tier-one assets (production-critical, regulated, customer-facing) tighten the SLA on top of the EPSS uplift. Internet exposure tightens it again. Compensating-control state can relax it inside the exception process. Pair the SLA with the vulnerability SLA management workflow so the breach signal is visible on the same record the operator runs the work on, and use the remediation SLA calculator to model the impact of each policy choice before committing it to writing.

EPSS in the Exception and Risk Acceptance Process

A finding that hits the EPSS uplift threshold but cannot be remediated inside the elevated SLA goes through the exception process the same way any other finding does. The EPSS value itself becomes part of the exception decision, not a substitute for it. Auditors reading the exception register will look for the residual risk position that takes EPSS into account, not just the original CVSS severity.

A working pattern is to require the exception submission to record the EPSS probability and percentile at the time of the decision, the compensating control rationale, and the named risk owner. The re-review trigger then includes a refresh against the current EPSS value: if the percentile has moved materially upward since the original decision (a common case when public PoC code lands or a campaign is observed), the exception comes back to the security approver early rather than at the calendar expiry. This keeps the exception register honest as exploitation likelihood evolves.
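The EPSS-refresh re-review trigger described above can be sketched as a predicate the daily reconciliation evaluates for each open exception. The 0.10 percentile-drift trigger is an illustrative policy value, not a standard.

```python
from datetime import date

def rereview_due(decision_percentile: float, current_percentile: float,
                 expiry: date, today: date, drift_trigger: float = 0.10) -> bool:
    """Bring an exception back to the approver early when the EPSS percentile
    has moved materially upward since the decision, or at calendar expiry."""
    drifted = (current_percentile - decision_percentile) >= drift_trigger
    expired = today >= expiry
    return drifted or expired
```

Because the check runs against the daily feed, a PoC-driven percentile spike surfaces the exception inside a day rather than at the next calendar review.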

For internal security and GRC teams, our vulnerability acceptance and exception management workflow covers the per-decision and org-wide ledger discipline. The risk acceptance form template and the security exception register template are the per-decision and ledger artefacts the workflow produces. EPSS-driven re-reviews tend to be shorter than calendar-driven re-reviews, because the underlying signal can shift inside a single release cycle.

Capturing Defensible EPSS Audit Evidence

The audit conversation about EPSS reduces to a small evidence set. Build the set as a side effect of doing the work, and the audit collapses into a query rather than a multi-team scramble.

The minimum evidence set for an EPSS-aligned programme has six artefacts:

  1. The per-finding EPSS probability and percentile on the live record, refreshed daily, so an auditor can filter findings by EPSS band.
  2. A dated record of the policy threshold (probability or percentile) in force at any given time, so changes to the threshold are traceable rather than silent.
  3. The timestamped lifecycle of every finding that carried an EPSS uplift (detected, prioritised, assigned, remediated, retested, closed) with the named user who performed each transition.
  4. The exception register entry for any EPSS-uplifted finding that missed the elevated SLA, with the EPSS value at decision time, the compensating control, the residual risk, the named approver, and the EPSS-refresh re-review trigger.
  5. The framework mapping (EPSS sits inside the implementation of NIST SP 800-53 RA-5, ISO 27001 Annex A 8.8, PCI DSS Requirement 6.3, and SOC 2 CC7.1) so the evidence pack is portable across audits.
  6. The EPSS reconciliation cadence record showing that the feed has been pulled and the operating record updated on the agreed schedule.

SecPortal's findings management feature tracks each finding with a CVSS 3.1 vector, owner, evidence, and remediation status, and supports structured fields and tags so the per-finding EPSS probability and percentile can be carried alongside the severity vector. The activity log keeps the timestamped chain of state changes by user across findings, engagements, scans, documents, comments, and team changes, with plan retention of 30, 90, or 365 days. The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST and exports the evidence pack as CSV. None of those features pull EPSS from the FIRST feed automatically; the feed ingestion is yours to schedule. What the platform provides is one record on which the EPSS value, the severity vector, the owner, the lifecycle, the exception state, and the framework mapping all live so the audit query reads from the same source the operator runs from.

Where EPSS Maps to Compliance Frameworks

None of the major compliance frameworks name EPSS directly. They all expect a documented prioritisation method that takes likelihood into account, and EPSS is one of the few public, defensible signals that satisfies the expectation. Make the mapping explicit in the policy and the evidence pack stretches across multiple audits.

NIST SP 800-53 RA-5 and SI-2

RA-5 expects vulnerability scanning with response, including the use of additional inputs to supplement raw scanner output. SI-2 expects flaw remediation with documented timing. EPSS is a first-class additional input under RA-5 alongside KEV, and the EPSS-driven SLA uplift maps cleanly to SI-2 timing. Cite both controls in the policy. Our NIST framework page covers the broader context.

ISO 27001 Annex A 8.8

Annex A 8.8 (Management of technical vulnerabilities) expects identification, evaluation, and treatment of technical vulnerabilities, with a documented prioritisation method. EPSS plus CVSS plus KEV is the documented method auditors recognise as defensible. Our ISO 27001 framework page covers the wider control set.

PCI DSS v4.0 Requirement 6.3

Requirement 6.3.1 expects identification of vulnerabilities, 6.3.2 expects timely remediation, and 6.3.3 expects critical and high-rated vulnerabilities to be addressed within one month. EPSS-driven uplift on Medium findings that the model flags as likely to be exploited tightens the policy beyond the requirement floor. Our PCI DSS framework page covers the assessment context.

SOC 2 Trust Services Criteria CC7.1

CC7.1 expects monitoring of system components for vulnerabilities and a defined response. The EPSS reconciliation cadence and the EPSS-aligned remediation timing are the operating evidence a SOC 2 Type 2 report reads across the observation period, so the daily refresh record matters as much as the per-finding closure record. Our SOC 2 framework page covers the wider trust services criteria.

Common EPSS Failure Modes

Reading EPSS as a Severity Score

Programmes that use EPSS as a replacement for CVSS end up underweighting the catastrophic-impact findings that have not crossed the model's evidence threshold. The fix is to read EPSS as likelihood and CVSS as impact, and to multiply rather than substitute.

EPSS Snapshotted at Import and Never Refreshed

Programmes that pull EPSS at finding-import time and then never refresh end up with stale values that no longer match the current model output. EPSS is updated daily; programmes that keep month-old values in the queue ignore most of the model's value. The fix is a daily reconciliation against the latest feed for every CVE-bearing finding still open in the inventory.

Confusing Probability and Percentile

A policy that says "EPSS above 0.10" without specifying whether that is probability or percentile produces inconsistent decisions across teams. The fix is to name the metric in the policy explicitly and to record both values on the finding so the audit conversation is unambiguous.

Threshold Set From Vendor Defaults

A 90-percentile threshold copied from a vendor blog post produces a queue that may not match your remediation throughput. The fix is to back-test the threshold against historic findings before rolling it out, and to recalibrate quarterly against observed closure and breach rates.

EPSS Uplift Without KEV Override

Programmes that let the EPSS percentile alone decide the SLA tier miss the cases where KEV records actual exploitation that the model has not yet caught. The fix is to keep KEV as the dominant tier-up rule and EPSS as the secondary uplift.

Closure Metrics That Hide the Reopen Rate

An EPSS-uplifted finding that closes today and reopens next month after a partial fix produces a clean closure metric and a broken posture. The fix is to track reopen rate alongside closure rate. Our reopen rate research covers the durability axis.

A Four-Week Rollout for an Internal Programme

For internal security teams adding EPSS to an existing CVSS-and-KEV programme, the rollout below has worked across enterprise contexts. It assumes you already have the KEV pipeline running; if not, run the KEV rollout first and bring EPSS in afterwards.

  1. Week one: document the EPSS feed source, the daily pull cadence, and the per-finding tagging fields (probability and percentile). Update the prioritisation policy to add the EPSS uplift band (typically percentile above 90) and to specify whether the policy anchor is probability or percentile. Get sign-off from the security lead and the GRC lead.
  2. Week two: wire the ingestion. Backfill EPSS values against the existing CVE-bearing finding inventory using the most recent daily feed. Surface the EPSS-uplifted subset in the operator queue. Identify the findings that already exceed the new uplifted SLA and triage them as a one-time backlog clearance alongside any open KEV findings.
  3. Week three: tighten the SLA. Deploy the EPSS-aligned uplift for new findings and for any open EPSS-uplifted findings that have not yet breached. Update the exception register template to require the EPSS value at decision time and the EPSS-refresh re-review trigger. Wire the daily reconciliation so EPSS values on existing findings refresh against the latest feed.
  4. Week four: wire the reporting and review. Add EPSS-uplifted closure rate, EPSS-uplifted breach rate, and exception register health to the monthly programme review and the quarterly leadership read. Capture the framework mapping (NIST 800-53 RA-5/SI-2, ISO 27001 Annex A 8.8, PCI DSS 6.3, SOC 2 CC7.1) in the policy. Run a calibration pass on the first month of EPSS-uplifted closures and tune the threshold band against observed throughput.

Where the Programme Sits in the Wider Security Org

EPSS-aligned vulnerability management is one workflow inside a wider internal security organisation. It sits next to the daily operational discipline of the VM team, the engineering-side AppSec function, the GRC owner's evidence cadence, and the leadership reporting cadence the CISO produces.

For the find-track-fix-verify operator function, the workflow is the natural pairing with SecPortal for vulnerability management teams. For the AppSec function that triages scanner output for engineering ownership, SecPortal for AppSec teams covers the upstream. For the CISO or security director sponsoring the programme, the SecPortal for CISOs page covers how EPSS-driven outcomes roll up into leadership reporting. For the GRC owner who has to translate EPSS state into evidence, SecPortal for GRC and compliance teams covers the audit-side discipline.

Pair the programme with adjacent enterprise reading. The vulnerability prioritisation framework guide covers the multi-signal prioritisation theory EPSS plugs into. The CISA KEV catalog guide covers the observed-exploitation signal EPSS sits next to. The vulnerability management programme guide covers the upstream and downstream workflow. The vulnerability management program scorecard scores programme maturity across governance, detection, prioritisation, remediation, and verification. The reachability analysis guide covers the noise-reduction layer that runs before the EPSS-aligned prioritisation function consumes inputs; EPSS speaks to the CVE record, reachability speaks to the deployed code path, and a mature programme sequences them rather than treating them as substitutes. The SSVC stakeholder-specific vulnerability categorization explainer covers the action-call layer that sits above EPSS, KEV, and CVSS: SSVC consumes the EPSS percentile as one input to its Exploitation decision point and emits a Track, Track-star, Attend, or Act recommendation that translates the EPSS-aligned threshold into a defensible action call rather than a numeric band on its own. For buyers comparing dedicated risk-based vulnerability management platforms (which build proprietary exploit-likelihood scoring on top of EPSS-style data) to a single workspace that records EPSS, CVSS, KEV, and engagement context together, the SecPortal vs Kenna Security comparison covers that side-by-side.

Run EPSS-Aligned Vulnerability Management on a Single Record

EPSS alignment is mostly a recordkeeping problem in disguise. The feed is public, the threshold rule is simple, and the SLA mechanics are straightforward. What stops most programmes from getting clean EPSS evidence is that the per-finding EPSS values, the lifecycle audit trail, the exception register, the framework mapping, and the leadership read all sit on different records, so producing the evidence pack means reconciling four or five sources at audit time. SecPortal is built around a single engagement record: findings management with CVSS 3.1 calibration, the activity log for the timestamped chain of state changes across findings, engagements, scans, and team changes, compliance tracking with ISO 27001 / SOC 2 / Cyber Essentials / PCI DSS / NIST mappings and CSV export, continuous monitoring for the cadence, and AI-powered report generation when leadership wants the executive summary.

None of these features pull EPSS automatically: the feed is yours to ingest. What the platform does is keep the EPSS value, the lifecycle, the evidence, the exceptions, and the framework mapping on the same record so the audit conversation collapses into a query rather than a multi-team scramble.

Run EPSS-aligned vulnerability management on SecPortal

Stand up the engagement record in under two minutes. Free plan available, no credit card required.