Vulnerability Disclosure Programme (VDP) Setup Guide
A vulnerability disclosure programme is the cheapest, highest-leverage piece of an external security programme: it converts the inevitable arrival of unsolicited findings from a reputational risk into a managed workflow. This guide covers every component a production VDP needs: the policy itself, scope and safe harbour, the intake channel, triage and response SLAs, security.txt discovery, ISO/IEC 29147 alignment, common pitfalls, and how a VDP fits alongside pentests and bug bounty programmes. Treat the policy and the operations as a single shipped product, not two separate things.
Why Every Organisation Needs a VDP
Researchers are going to find issues whether or not you have invited them. The only question is what happens next. With a VDP, a finding lands in a structured channel, gets an acknowledgement, and routes to engineering through the same workflow as a pentest issue. Without one, the same finding lands in a sales inbox, ricochets through three people who do not own security, and is rediscovered by your team only when it surfaces on social media.
A VDP also signals organisational maturity to enterprise buyers, regulators, and insurers. CISA Binding Operational Directive 20-01 made VDPs mandatory for US federal civilian agencies. The EU NIS2 directive expects coordinated vulnerability disclosure from essential and important entities. The EU Cyber Resilience Act makes a coordinated vulnerability disclosure policy a direct manufacturer obligation for products with digital elements placed on the EU market, alongside SBOM evidence and severe incident reporting. The EU CRA vulnerability handling guide walks through Articles 13 and 14 in operational detail, including how the coordinated disclosure obligation reads against the Article 14 reporting cascade. ISO/IEC 29147 (external disclosure) and ISO/IEC 30111 (internal handling) document the practice. SOC 2 and ISO 27001 auditors look for evidence of a documented disclosure process during the controls walkthrough.
The cost is small. A VDP does not require payouts, a public bounty platform, or a large triage team. It requires a published policy, a contact channel, a triage rotation, and the same findings workflow you already use for pentest deliverables.
VDP vs Bug Bounty vs Coordinated Disclosure
These three terms are related but distinct. Picking the right one changes scope, cost, and operational load.
| Programme | Pays researchers | Scope | Operational load |
|---|---|---|---|
| VDP | No | Public assets, broad | Low to moderate, mostly triage |
| Bug bounty (private) | Yes | Curated assets, defined | Moderate, payout management |
| Bug bounty (public) | Yes | Public, structured | High, sustained triage and payouts |
| Coordinated disclosure | Sometimes | Specific products, vendor-led | Per-case, advisory publishing |
A VDP is the foundation. Bug bounties stack on top of it. Coordinated disclosure is the mode used when you ship products other organisations deploy and you publish CVE-tracked advisories. Most companies start and end at a VDP, and that is the right answer.
The VDP Policy: What It Must Contain
The policy is the public document. ISO/IEC 29147 and the CISA VDP template both converge on a similar structure. A complete VDP policy contains the following sections.
- Introduction and purpose statement
- Authorisation and safe harbour
- In-scope assets
- Out-of-scope assets and vulnerability classes
- Permitted and prohibited testing methods
- Reporting requirements (what a useful report looks like)
- Reporting channel and PGP key
- Coordinated disclosure timeline
- Public credit and recognition policy
- Response commitments and SLAs
- Legal limitations and jurisdictional notes
- Programme contact and escalation path
Keep it short, declarative, and free of marketing language. Researchers read these policies carefully because their legal cover depends on them. The free vulnerability disclosure policy template ships a copy-ready twelve-section policy that already covers the structure above and adds the regulator-coordination, governance, and document-control sections that ISO/IEC 30111, CISA BOD 20-01, and EU CRA Article 13 expect.
Safe Harbour: The Clause That Makes the Policy Work
Safe harbour is the public commitment that good-faith research conducted within the policy will not be met with legal action. Without it, researchers in many jurisdictions face real exposure under computer misuse statutes, and most will choose silence.
Example safe harbour clause
We consider security research and vulnerability disclosure activities conducted in accordance with this policy to be authorised conduct under the Computer Misuse Act 1990, the Computer Fraud and Abuse Act, and any analogous statutes. We will not pursue civil action or initiate a complaint to law enforcement for accidental, good faith violations of this policy. If legal action is initiated by a third party against you for activities conducted in good faith under this policy, we will take reasonable steps to make it known that your actions were authorised.
Have legal review the wording. The intent is to be unambiguous: research that follows the policy is permitted, the organisation will not weaponise the law against researchers, and there is a clear path back to safety if scope is accidentally crossed.
Defining Scope and Out-of-Scope
Scope determines what researchers test and what you act on. Be explicit on both sides.
In scope (typical)
- Production web applications and APIs at named domains
- Mobile applications published under your developer accounts
- External IP ranges you own and operate
- Cloud assets you control (S3 buckets, CloudFront distributions, exposed services)
- Authentication and authorisation flaws, injection, IDOR, SSRF, account takeover
Out of scope (typical)
- Third-party services not under your direct control
- Staff phishing, social engineering, physical security
- Denial of service, brute force without rate-limit context, volumetric attacks
- Findings that require non-default configuration on the victim side
- Self-XSS, clickjacking on pages without sensitive actions
- Missing best-practice headers without demonstrable impact
- Reports generated solely by automated scanners with no manual validation
Out-of-scope categories should reflect what your team will not act on. Listing them saves researchers wasted time and your triage rotation a steady drip of low-value submissions.
security.txt and the Discovery Layer
Researchers check for a security.txt file at /.well-known/security.txt before they email anyone. RFC 9116 defines the format. Hosting one is a 60-second job that delivers a measurable lift in report quality.
```
Contact: mailto:security@example.com
Contact: https://example.com/security/report
Expires: 2027-05-06T00:00:00.000Z
Encryption: https://example.com/.well-known/pgp-key.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security
Acknowledgments: https://example.com/security/hall-of-fame
```
Set an expiry no more than a year out, monitor it, and treat the file as production infrastructure. An expired security.txt is a signal that the programme is unmaintained and pushes researchers back toward unstructured channels. The free security.txt generator builds a valid RFC 9116 file from a guided form and validates each field before you publish it.
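If you treat the file as production infrastructure, monitor it like one. The sketch below is a minimal RFC 9116 sanity check you could run on a schedule against the fetched file body; it checks only the two mandatory fields (Contact, Expires) and the expiry clock, not the full grammar, and the function name is illustrative, not a standard API.

```python
from datetime import datetime, timezone

REQUIRED_FIELDS = {"Contact", "Expires"}  # mandatory per RFC 9116


def check_security_txt(text: str) -> list[str]:
    """Return a list of problems found in a security.txt body (simplified check)."""
    fields: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        name, _, value = line.partition(":")
        fields.setdefault(name.strip(), []).append(value.strip())

    problems = [f"missing required field: {f}" for f in sorted(REQUIRED_FIELDS) if f not in fields]

    if "Expires" in fields:
        if len(fields["Expires"]) > 1:
            problems.append("Expires must appear exactly once")
        # fromisoformat does not accept a literal 'Z' suffix before Python 3.11
        expires = datetime.fromisoformat(fields["Expires"][0].replace("Z", "+00:00"))
        if expires <= datetime.now(timezone.utc):
            problems.append("security.txt has expired")
    return problems
```

Wire the output into whatever alerting you already use for certificate expiry; the failure mode (a silently stale file) is the same.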
The Intake Channel
A structured intake form outperforms a shared inbox. It enforces required fields, cuts ambiguity, and gives triage a consistent record to work from. At minimum, capture:
- Researcher contact and preferred recognition handle
- Affected asset (URL, IP, application, version)
- Vulnerability class
- Step-by-step reproduction
- Proof of concept (request/response, screenshot, video)
- Impact assessment (what an attacker could do, against whom)
- Disclosure timeline preferences
- Whether the researcher used scanners or manual techniques
Offer an email channel as a fallback (with a published PGP key) for researchers who prefer it. Both feed the same triage workflow.
Triage Workflow and SLAs
Triage is where most VDPs succeed or fail. The policy promises a response; the workflow is what delivers it. Define SLAs and instrument them.
| Stage | Typical SLA | Output |
|---|---|---|
| Acknowledgement | 2 business days | Auto-reply plus human confirmation |
| Initial triage | 5 business days | Validated, duplicate, or out of scope |
| Severity and remediation plan | 10 business days | CVSS, owner, target fix date |
| Fix and verification | By severity SLA | Critical 7 to 14 days, High 30 days, Medium 60 to 90, Low 90+ |
| Closure and recognition | 5 business days post-fix | Researcher notified, hall of fame entry if opted in |
For severity, use CVSS 3.1 (or the CVSS calculator) and a documented prioritisation framework. Keep the same severity model as your pentest deliverables so engineering does not switch contexts.
From Inbound Report to Tracked Finding
A report is not a finding until it has been validated, deduplicated against the existing backlog, and recorded against the right asset. This is where most policies under-deliver: the inbox is monitored, but reports are not flowing into the same workflow that drives remediation.
Treat VDP intake as another input to your findings system. SecPortal findings management stores VDP reports next to pentest findings with the same severity, evidence, status, and remediation fields, so triage, retest, and reporting all use one source of truth. For the document handover, the branded client portal and AI report generation cover external-facing summaries when a finding warrants a published advisory.
For the verification step after engineering ships a fix, use the same retest workflow you would use for an engagement. See how to retest vulnerabilities for the operational pattern.
For the full operational view of running the VDP day to day (intake clocks, triage dispositions, severity-tiered SLAs, researcher portal, and audit-ready closure records), see the VDP management workflow.
Recognition, Public Credit, and Hall of Fame
A VDP without recognition struggles to attract repeat research. You do not need to pay; you do need to acknowledge. Make recognition opt-in, default to anonymous, and give researchers an easy way to share or link the credit.
- Public hall of fame on the policy page or a sub-page
- Optional CVE assignment for vendor products you ship
- Letter of acknowledgement on company letterhead, signed
- Swag for accepted, non-trivial reports
- Invitation to a private bug bounty programme if you launch one
Researchers list VDP credits on CVs and conference talks. The cost of a small swag programme is trivial relative to the report quality lift it produces.
Coordinated Disclosure Timelines
Most VDPs commit to a coordinated disclosure window of 90 days from acknowledgement, with extensions for critical issues that require complex changes. The principle is simple: agree timelines up front, communicate progress, and ship the fix before the window closes. Disagreements over disclosure dates are the most common failure mode in mature VDPs and the reason researchers go public.
- Acknowledgement to fix: 90 days default
- Extension policy: written, capped, with researcher acknowledgement
- Public advisory: published when the fix ships, with credit if opted in
- CVE assignment: for vendor products you ship under your namespace
- Disclosure freeze: only for active in-the-wild exploitation, not engineering convenience
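Managing the clock is easier when the disclosure date is computed, not negotiated. A minimal sketch, assuming the 90-day default above and an illustrative 30-day extension cap (your written extension policy sets the real cap):

```python
from datetime import date, timedelta

DEFAULT_WINDOW_DAYS = 90   # coordinated disclosure default from the policy
MAX_EXTENSION_DAYS = 30    # assumed cap; set the real value in your extension policy


def disclosure_date(acknowledged_on: date, extension_days: int = 0) -> date:
    """Planned public disclosure date; extensions are capped, never open-ended."""
    extension = min(max(extension_days, 0), MAX_EXTENSION_DAYS)
    return acknowledged_on + timedelta(days=DEFAULT_WINDOW_DAYS + extension)
```

Communicating this date in the acknowledgement email, and every update after it, removes the ambiguity that disclosure brinkmanship feeds on.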
CVE assignment is the upstream layer this list assumes. A vendor that ships products to external customers will eventually decide whether to apply to the CVE Numbering Authority partner programme as the CNA for its own products, file through the CNA of Last Resort (CNA-LR, operated by MITRE) for the occasional externally discovered issue, or coordinate with an upstream Root CNA when the affected component sits inside another vendor's scope. Our CVE Numbering Authority explainer covers the CNA hierarchy, the application path, the scope statement work, the embargo and coordination discipline a CNA applicant's VDP needs in place, and the CVE record format the CVE Services API publishes against.
Common VDP Pitfalls
- No safe harbour clause. Without it, the policy is theatre. Researchers will not engage.
- Vague scope. Either too broad (drowning the team in third-party noise) or too narrow (so restrictive nobody bothers).
- Shared inbox triage. Reports stall in a generic security@ inbox with no rotation, no SLA, and no backstop.
- Severity drift. VDP findings get a different severity scale than pentest findings, so priorities collide and the highest-risk report sits behind ten medium pentest items.
- Promising payouts informally. Either run a bug bounty or do not pay. Ad hoc payouts breed disputes and set precedents you cannot maintain.
- Missing security.txt. Reports route through sales, support, or LinkedIn instead of triage. Inevitable.
- No retest commitment. Researchers report a fix is incomplete and the team rejects the reopen because the policy did not cover it.
- Disclosure brinkmanship. The team asks for indefinite extensions, the researcher publishes, the company is surprised. Manage the clock from day one.
Launch Checklist
- Policy drafted, legal reviewed, leadership signed off
- Safe harbour clause unambiguous and jurisdictionally correct
- In-scope and out-of-scope assets enumerated
- Permitted and prohibited testing methods listed
- Intake channel live with structured form and PGP fallback
- security.txt published at /.well-known/security.txt with current expiry
- Policy page reachable at /security and linked from the footer
- Triage rotation named, SLAs documented, escalation path tested
- Findings workflow integrated with the same severity model as pentest output
- Recognition policy decided and hall of fame page ready
- Internal runbook for on-call triage written and rehearsed
- Report dry-run with a friendly external researcher before public launch
Run VDP reports through the same workflow as your pentest findings
SecPortal handles intake, triage, severity, retest, and reporting in one place so VDP submissions slot into the same lifecycle as engagement findings. See pricing or start free.