Penetration Testing Test Plan Template: decompose the agreed scope into the work the team will actually do
A free, copy-ready penetration testing test plan template. Twelve structured sections covering engagement references, objectives and success criteria, in-scope assets and asset categorisation, team and tooling baseline, methodology category mapping (PTES, NIST SP 800-115, OWASP WSTG, OWASP MASTG, OWASP ASVS, CREST DPT), schedule and reporting cadence, entry and exit criteria per phase, evidence and reproducibility expectations, risk register with stop-conditions, plan version history, retest plan with acceptance criteria, and sign-off. Inherits scope from the Statement of Work, operational rules from the Rules of Engagement, and authorisation from the engagement letter.
Run the engagement on the plan it was scoped against
SecPortal stores the test plan alongside the SOW, ROE, engagement letter, findings, draft and final reports, and retest evidence. Plan versions, peer reviews, and client acknowledgements all sit on one record. Free plan available.
No credit card required. Free plan available forever.
Full template
Copy the full test plan template
Twelve structured sections. The plan inherits scope from the executed statement of work, operational rules from the executed rules of engagement, and authorisation from the executed engagement letter. Replace every {{PLACEHOLDER}} before peer review.
1. Engagement references and document control
The plan opens with the engagement reference and the documents it inherits from. PTES Section 1.7, NIST SP 800-115 Section 6 (Security Assessment Planning), and the CREST Defensible Penetration Test specification all expect the test plan to point back to the executed SOW, ROE, and engagement letter rather than restate them.
PENETRATION TESTING TEST PLAN
Engagement reference: {{ENGAGEMENT_REFERENCE}}
Plan version: {{PLAN_VERSION}}
Plan date: {{PLAN_DATE}}
Author (engagement lead): {{ENGAGEMENT_LEAD_NAME}}, {{ENGAGEMENT_LEAD_TITLE}}
Peer reviewer: {{PEER_REVIEWER_NAME}}, {{PEER_REVIEWER_TITLE}}
Client acknowledgement: {{CLIENT_ACK_NAME}}, {{CLIENT_ACK_TITLE}}, acknowledged {{CLIENT_ACK_DATE}}
This plan inherits scope, deliverables, and timeline from the executed Statement of Work and the operational rules from the executed Rules of Engagement. It is the operational decomposition of that scope into test cases, schedule, assignments, and acceptance criteria.
Source documents:
- Statement of Work reference: {{SOW_REFERENCE}}, executed {{SOW_DATE}}
- Rules of Engagement reference: {{ROE_REFERENCE}}, executed {{ROE_DATE}}
- Engagement Letter reference: {{ENGAGEMENT_LETTER_REFERENCE}}, executed {{ENGAGEMENT_LETTER_DATE}}
- Vendor proposal of record: {{PROPOSAL_REFERENCE}}, dated {{PROPOSAL_DATE}}
Where this plan conflicts with the SOW or ROE, the executed contract documents govern. The plan is to be amended to align with the contract rather than the contract reinterpreted to align with the plan.
2. Engagement objectives and success criteria
States the outcome the client and the firm have agreed the engagement is for. PTES Section 1.5 (Goals) is the upstream reference. Without explicit objectives the engagement drifts toward whatever the most senior tester finds interesting, which is not the same as what the client paid for.
Primary objectives:
1. {{PRIMARY_OBJECTIVE_1}} (for example: identify exploitable vulnerabilities in the in-scope external web application that would let an unauthenticated attacker reach data covered by the agreed data classification policy)
2. {{PRIMARY_OBJECTIVE_2}} (for example: validate that authenticated users cannot escalate privileges across tenant boundaries in the multi-tenant API)
3. {{PRIMARY_OBJECTIVE_3}} (for example: produce a defensible attestation aligned with the relevant scheme requirement so the client can present the deliverable to their auditor or regulator)
Secondary objectives (lower priority, addressed only after primary objectives are met):
- {{SECONDARY_OBJECTIVE_1}}
- {{SECONDARY_OBJECTIVE_2}}
Engagement success criteria (the conditions under which the engagement is considered complete and successful):
- All in-scope assets have been tested against the methodology categories listed in Section 5 of this plan.
- Findings have been reproduced, evidenced, severity-rated against the agreed rubric, and validated through the firm's internal peer review.
- The deliverables listed in the SOW have been produced and accepted by the client per the SOW acceptance criteria.
- Any retests within the agreed retest window have been executed against the retest acceptance criteria in Section 11 of this plan.
Out-of-scope objectives (explicit non-goals, recorded so the report does not get measured against them):
- {{NON_GOAL_1}} (for example: the engagement does not provide a guarantee of absence of vulnerabilities outside the methodology categories applied)
- {{NON_GOAL_2}}
3. In-scope assets and asset categorisation
Decomposes the SOW scope into the asset units the team will test. Each asset gets categorised so the methodology references in Section 5 map cleanly to it. Any ambiguity here surfaces later as coverage gaps.
In-scope asset categories (per SOW Section 3, decomposed for testing):
A. External web applications
- Asset: {{WEB_APP_1_NAME}} ({{WEB_APP_1_URL}})
Description: {{WEB_APP_1_DESCRIPTION}}
Stack and notable technologies: {{WEB_APP_1_STACK}}
Authentication model: {{WEB_APP_1_AUTH}}
Data classification handled: {{WEB_APP_1_DATA_CLASS}}
Test depth: {{WEB_APP_1_DEPTH}} (PTES Level 1 / 2 / 3)
- Asset: {{WEB_APP_2_NAME}} ({{WEB_APP_2_URL}})
[repeat the block above for each in-scope web application]
B. APIs
- Asset: {{API_1_NAME}}
Specification: {{API_1_SPEC}} (OpenAPI / Swagger / Postman collection / source code reference)
Authentication model: {{API_1_AUTH}}
Test depth: {{API_1_DEPTH}}
C. Internal network and infrastructure
- Asset: {{INTERNAL_NETWORK_1_RANGE}}
Description: {{INTERNAL_NETWORK_1_DESCRIPTION}}
Authorised access path: {{INTERNAL_NETWORK_1_ACCESS}} (jump host / VPN / on-site)
Test depth: {{INTERNAL_NETWORK_1_DEPTH}}
D. Cloud accounts
- Asset: {{CLOUD_ACCOUNT_1_NAME}}
Provider: {{CLOUD_PROVIDER}} (AWS / Azure / GCP / other)
Authorised review depth: {{CLOUD_REVIEW_DEPTH}} (read-only review / read plus configuration validation / authenticated workload exploitation)
E. Mobile applications
- Asset: {{MOBILE_APP_1_NAME}}
Platforms: {{MOBILE_APP_1_PLATFORMS}} (iOS / Android / both)
Distribution: {{MOBILE_APP_1_DISTRIBUTION}} (production store / staging / supplied build)
F. Source code (where in scope per the SOW)
- Repository: {{REPO_1_NAME}}
Languages and frameworks: {{REPO_1_STACK}}
Review focus: {{REPO_1_FOCUS}} (full SAST sweep / targeted module review / threat-led code review)
Out-of-scope assets (explicit, recorded so a finding produced against them does not survive into the report):
- {{OUT_OF_SCOPE_1}}
- {{OUT_OF_SCOPE_2}}
Third-party hosted assets within scope (each requires the third-party permission referenced in the engagement letter):
- {{THIRD_PARTY_ASSET_1}}, permission reference {{THIRD_PARTY_PERMISSION_1}}
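Outside the template itself, a minimal sketch of how one Section 3 asset record might be held as structured data, so the Section 5 category map and the Section 6 schedule can key off the same fields. The Asset type, its field names, and the defaults are illustrative assumptions, not part of the copy-ready template.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    """One in-scope asset record, mirroring the Section 3 fields."""
    name: str
    category: str                                 # A-F per Section 3
    locator: str                                  # URL, IP range, cloud account, repository
    auth_model: str = "none"
    data_classification: str = "unclassified"
    test_depth: str = "standard"                  # e.g. PTES Level 1 / 2 / 3
    third_party_permission: Optional[str] = None  # required for third-party hosted assets

assets = [
    Asset(
        name="{{WEB_APP_1_NAME}}",
        category="A",
        locator="{{WEB_APP_1_URL}}",
        auth_model="{{WEB_APP_1_AUTH}}",
        data_classification="{{WEB_APP_1_DATA_CLASS}}",
        test_depth="{{WEB_APP_1_DEPTH}}",
    ),
]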
4. Team, accreditations, and tooling baseline
Names the testers, the accreditations they bring, and the tools they will operate with. Regulated schemes expect the named team in the engagement letter to match the team in the test plan. The tooling baseline catches surprises later, when a finding's evidence reference points at a tool the client did not know was being used.
Engagement team:
1. {{TESTER_1_NAME}}, {{TESTER_1_ROLE}}, accreditations {{TESTER_1_ACCREDITATIONS}}
Assigned to: {{TESTER_1_ASSETS}}
Allocated tester-days: {{TESTER_1_DAYS}}
2. {{TESTER_2_NAME}}, {{TESTER_2_ROLE}}, accreditations {{TESTER_2_ACCREDITATIONS}}
Assigned to: {{TESTER_2_ASSETS}}
Allocated tester-days: {{TESTER_2_DAYS}}
3. {{TESTER_3_NAME}}, {{TESTER_3_ROLE}}, accreditations {{TESTER_3_ACCREDITATIONS}}
Assigned to: {{TESTER_3_ASSETS}}
Allocated tester-days: {{TESTER_3_DAYS}}
The engagement lead reserves the right to rotate assignments inside the team during the engagement to balance workload, subject to the substitution clause in the engagement letter.
Tooling baseline (the toolset the team intends to operate with against this engagement):
Reconnaissance and discovery: {{RECON_TOOLS}} (for example: nmap, masscan, amass, subfinder, httpx)
Web application testing: {{WEB_APP_TOOLS}} (for example: Burp Suite Professional, OWASP ZAP, ffuf, sqlmap)
API testing: {{API_TOOLS}} (for example: Postman, Burp Suite, custom client per OpenAPI spec)
Internal network testing: {{INTERNAL_TOOLS}} (for example: nmap, CrackMapExec, Impacket suite, Responder)
Cloud configuration review: {{CLOUD_TOOLS}} (for example: Prowler, ScoutSuite, Pacu, native CLI)
Mobile application testing: {{MOBILE_TOOLS}} (for example: MobSF, Frida, Objection, platform debug tooling)
Source code review (where in scope): {{CODE_REVIEW_TOOLS}} (for example: Semgrep, language-native SAST, manual review against threat model)
Custom tooling (any in-house or non-public tool the team intends to deploy against the engagement, declared per ROE Section 5):
- {{CUSTOM_TOOL_1_NAME}}: {{CUSTOM_TOOL_1_DESCRIPTION_AND_LICENSE}}
- {{CUSTOM_TOOL_2_NAME}}: {{CUSTOM_TOOL_2_DESCRIPTION_AND_LICENSE}}
5. Methodology references and category mapping
Maps each in-scope asset to the specific methodology categories the team will apply. PTES, NIST SP 800-115, OWASP WSTG, OWASP MASTG, OWASP ASVS, and CREST DPT define many techniques; the plan picks which apply, against which asset, on this engagement.
Primary methodology references (full text behind the headline reference, where applicable):
- PTES (Penetration Testing Execution Standard), pre-engagement through reporting.
- NIST SP 800-115 Technical Guide to Information Security Testing and Assessment.
- OWASP Web Security Testing Guide (WSTG) for web application testing.
- OWASP Mobile Application Security Testing Guide (MASTG) for mobile testing.
- OWASP Application Security Verification Standard (ASVS) for verification depth alignment.
- CREST Defensible Penetration Test specification for scheme alignment.
Per-asset category map:
A. External web applications
Methodology categories applied:
- WSTG-INFO (Information gathering)
- WSTG-CONFIG (Configuration and deployment management)
- WSTG-IDNT (Identity management)
- WSTG-ATHN (Authentication)
- WSTG-ATHZ (Authorisation)
- WSTG-SESS (Session management)
- WSTG-INPV (Input validation)
- WSTG-ERRH (Error handling)
- WSTG-CRYP (Cryptography)
- WSTG-BUSL (Business logic)
- WSTG-CLNT (Client side)
ASVS verification level: {{ASVS_LEVEL}} (L1 / L2 / L3)
Categories explicitly out of scope: {{WEB_OUT_OF_SCOPE_CATEGORIES}}
B. APIs
- WSTG-APIT (API testing) plus the OWASP API Security Top 10 categories.
- Authentication and authorisation testing per WSTG-ATHN, WSTG-ATHZ.
- Mass assignment and broken object property level authorisation per OWASP API Security Top 10.
C. Internal network and infrastructure
- PTES Section 4 (Vulnerability analysis), Section 5 (Exploitation), Section 6 (Post-exploitation).
- NIST SP 800-115 Section 5 (Target Vulnerability Validation Techniques).
- Active Directory testing scope: {{AD_TESTING_SCOPE}}
- Lateral movement scope and stop-conditions: {{LATERAL_MOVEMENT_SCOPE}}
D. Cloud accounts
- Provider-specific configuration review against {{CLOUD_BASELINE}} (CIS Benchmark / vendor reference / customer-specific baseline).
- Identity and access review per least-privilege expectations in the SOW.
E. Mobile applications
- MASTG categories applied: {{MASTG_CATEGORIES}}
- MASVS verification level: {{MASVS_LEVEL}}
F. Source code (where in scope)
- SAST coverage scope: {{SAST_SCOPE}}
- Manual review focus areas: {{MANUAL_CODE_REVIEW_FOCUS}}
For each category not applicable to a given asset, the plan records the reason ("not in scope per SOW", "not present in asset", "covered by separate engagement"). Categories with no record are coverage gaps at peer review.
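A minimal sketch of that coverage check, assuming a simple set-based model: each asset class has the categories planned for it, the categories with a test record, and the categories recorded as not applicable with a reason; anything left over is a coverage gap at peer review. The variable and function names are illustrative assumptions, not part of the copy-ready template.

# Categories planned for one asset class (Section 5), categories with a test record,
# and categories recorded as not applicable with a reason.
planned = {"WSTG-INFO", "WSTG-ATHN", "WSTG-ATHZ", "WSTG-SESS", "WSTG-INPV", "WSTG-BUSL"}
tested = {"WSTG-INFO", "WSTG-ATHN", "WSTG-ATHZ", "WSTG-INPV"}
not_applicable = {"WSTG-SESS": "not present in asset (stateless API, no session layer)"}

def coverage_gaps(planned, tested, not_applicable):
    """Planned categories with neither a test record nor a recorded reason."""
    return planned - tested - set(not_applicable)

print(sorted(coverage_gaps(planned, tested, not_applicable)))
# ['WSTG-BUSL'] -> raised as a coverage gap at peer review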
6. Schedule, checkpoints, and reporting cadence
Day-by-day or week-by-week sequencing of the work, plus the checkpoints where the engagement lead reports progress. The schedule is what the daily standup measures itself against; without it, "we are tracking on schedule" is opinion rather than evidence.
Engagement schedule:
Phase 1: Pre-engagement and onboarding ({{PHASE_1_DATES}})
- Confirm engagement letter executed and authorisation chain complete.
- Validate target reachability from authorised source IPs.
- Validate credentials handover and rotation per the credential handover procedure.
- Confirm scope and asset list against the asset register reviewed during kickoff.
Phase 2: Reconnaissance and surface mapping ({{PHASE_2_DATES}})
- Asset categories: {{PHASE_2_ASSETS}}
- Methodology categories: {{PHASE_2_METHODOLOGY}}
- Daily checkpoint: {{PHASE_2_CHECKPOINT}}
Phase 3: Active testing ({{PHASE_3_DATES}})
- Asset categories: {{PHASE_3_ASSETS}}
- Methodology categories: {{PHASE_3_METHODOLOGY}}
- Daily checkpoint: {{PHASE_3_CHECKPOINT}}
- Critical and high finding communication: per ROE Section 6 SLAs (same business day for critical, one business day for high).
Phase 4: Reporting and review ({{PHASE_4_DATES}})
- Draft report internal peer review by {{INTERNAL_REVIEW_DATE}}.
- Draft report to client by {{DRAFT_REPORT_DATE}}.
- Debrief and readout meeting on {{DEBRIEF_DATE}}.
- Final report after debrief feedback by {{FINAL_REPORT_DATE}}.
Phase 5: Retest window (where in scope) ({{PHASE_5_DATES}})
- Retest scope: see Section 11 of this plan.
- Retest acceptance criteria: see Section 11.
Reporting cadence to the Authorising Party:
- Daily: short status note in the engagement workspace, surfaced through the client portal.
- Weekly: structured progress report aligned with the kickoff meeting agenda template, attended by the client engagement lead and the testing engagement lead.
- On critical or high findings: immediate communication per ROE Section 6 SLAs, regardless of cadence.
Public holidays, blackout periods, and change-freeze windows (per the engagement letter):
- {{HOLIDAY_OR_BLACKOUT_1}}
- {{HOLIDAY_OR_BLACKOUT_2}}
7. Entry and exit criteria per phase
Records the conditions that must be met before each phase starts and before each phase is signed off. This is the discipline that prevents an engagement from sliding into the next phase while the previous phase is still incomplete, which is one of the most common causes of overrun.
Entry criteria (must be met before testing starts):
- Statement of Work executed and reflected in this plan version.
- Rules of Engagement executed and reflected in this plan version.
- Engagement letter executed and dated within the testing window in Section 4 of the engagement letter.
- This test plan acknowledged in writing by the client representative.
- Credentials, asset access, and any scope-conditional documentation handed over per the credential handover procedure.
- Source IPs, jump hosts, and any VPN configurations in place and tested for reachability.
- The engagement lead has confirmed the team and tooling baseline are ready.
- Any third-party permissions for hosted assets are referenced in the engagement letter and are valid for the testing window.
Exit criteria for the active testing phase (must be met before reporting begins):
- Each in-scope asset has been tested against the methodology categories assigned to it in Section 5.
- Each finding has been reproduced under controlled conditions with evidence captured per the evidence expectations in Section 8.
- Each finding has been peer-reviewed inside the firm before it leaves the workspace toward the client.
- Each finding has a severity rating against the agreed rubric, with the severity rationale recorded.
- Any deviations from the planned methodology coverage have been recorded in the change log in Section 10.
Exit criteria for the engagement (must be met before invoice and closure):
- Final report delivered and accepted per the SOW acceptance criteria.
- Retest, where in scope, executed and findings updated to reflect retest outcome.
- Engagement evidence retained per the evidence retention clause in the SOW or ROE.
- Client portal access closed or transitioned per the post-engagement access plan.
- Internal post-engagement review (lessons learned, plan accuracy, time variance) completed by the engagement lead.
8. Evidence capture and reproducibility expectations
Defines what evidence to capture and where to capture it: every finding the team records must be reproducible by a second tester reading the evidence. CREST DPT, FedRAMP, and most enterprise procurement frameworks expect this discipline; without it, findings degrade into one tester's memory.
Evidence expectations per finding:
- Request and response capture (raw HTTP, raw network packet, screenshot of the application state where relevant).
- Reproduction steps written so a second tester can follow them without coaching.
- Tooling references captured (which tool produced the original signal, which manual step confirmed it).
- Affected asset reference (URL, hostname, IP, account identifier, repository commit hash) so the finding scopes to a specific asset, not a class of asset.
- Severity rationale: which CVSS or scheme-specific rubric components apply, and why.
- Compensating controls noted where applicable (WAF rule, network segmentation, access control) and the impact on the severity score.
Evidence storage:
- All evidence lives on the engagement record in the workspace, not on tester local drives or external chat platforms.
- Sensitive evidence (credentials in transit, PII captured during testing) is handled per the data classification expectations in the SOW.
- Evidence remains accessible to the engagement team and the firm's peer review process for the retention period defined in the SOW.
Reproducibility test:
- Before any finding leaves the workspace toward the client report, a second tester must be able to reproduce the finding from the evidence as written, without any synchronous handover.
- Findings that fail the reproducibility test go back to the originating tester for evidence improvement before they can be released to the report.
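A minimal sketch of that reproducibility gate, assuming findings are tracked as records in the engagement workspace: a finding cannot be released toward the report until its evidence is complete and a second tester, not the author, has reproduced it from the steps as written. The Finding type and the release_to_report check are illustrative assumptions, not part of the copy-ready template.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Finding:
    title: str
    author: str
    affected_asset: str                  # URL, hostname, IP, account ID, or commit hash
    reproduction_steps: List[str]
    evidence_refs: List[str]             # workspace references, never local paths
    severity_rationale: str
    reproduced_by: Optional[str] = None  # second tester who followed the steps as written

def release_to_report(finding: Finding) -> None:
    """Block release until the Section 8 reproducibility test has passed."""
    if not finding.evidence_refs or not finding.reproduction_steps:
        raise ValueError(f"{finding.title}: evidence incomplete, return to originating tester")
    if finding.reproduced_by is None or finding.reproduced_by == finding.author:
        raise ValueError(f"{finding.title}: needs independent reproduction before release")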
9. Risk register: testing risks and mitigations
Some testing carries a risk of disruption (denial of service against fragile services, data corruption from injection probes against production, account lockouts from authentication brute force). The risk register records the risks, the mitigations, and the stop-conditions before the team picks up tooling.
Engagement risks identified during planning:
1. Risk: {{RISK_1_DESCRIPTION}} (for example: SQL injection probes against the production database may produce malformed test rows)
Likelihood: {{RISK_1_LIKELIHOOD}} (Low / Medium / High)
Impact: {{RISK_1_IMPACT}} (Low / Medium / High)
Mitigation: {{RISK_1_MITIGATION}} (for example: probes will be conducted with read-only payloads first; write-payload probes only against the staging tenant)
Stop-condition: {{RISK_1_STOP_CONDITION}} (for example: any sign of write impact triggers an immediate stop-test per ROE Section 9)
2. Risk: {{RISK_2_DESCRIPTION}}
Likelihood: {{RISK_2_LIKELIHOOD}}
Impact: {{RISK_2_IMPACT}}
Mitigation: {{RISK_2_MITIGATION}}
Stop-condition: {{RISK_2_STOP_CONDITION}}
3. Risk: {{RISK_3_DESCRIPTION}}
Likelihood: {{RISK_3_LIKELIHOOD}}
Impact: {{RISK_3_IMPACT}}
Mitigation: {{RISK_3_MITIGATION}}
Stop-condition: {{RISK_3_STOP_CONDITION}}
Stop-test triggers (per ROE Section 9):
- Confirmed sign of customer impact outside the engagement scope (production outage, customer-visible degradation, third-party complaint).
- Discovery of evidence of an active third-party intrusion that pre-dates this engagement.
- Discovery of regulatory-reportable incident conditions outside the engagement scope.
- Any condition under which the engagement lead, the peer reviewer, or the Authorising Party representative invokes the stop-test clause.
Stop-test handling: pause testing, preserve evidence as-is, communicate within the SLA in ROE Section 6, do not resume until the Authorising Party representative authorises continuation in writing.
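A minimal sketch of the risk register and the stop-test handling above, assuming both are tracked as records on the engagement: each risk carries its mitigation and stop-condition, and once a stop-test trigger fires, testing stays paused until written authorisation to resume is on record. The type and field names are illustrative assumptions, not part of the copy-ready template.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Risk:
    description: str
    likelihood: str        # Low / Medium / High
    impact: str            # Low / Medium / High
    mitigation: str
    stop_condition: str

@dataclass
class StopTestState:
    triggered: bool = False
    trigger_reason: Optional[str] = None
    resume_authorised_by: Optional[str] = None  # written authorisation per ROE Section 9

    def may_continue_testing(self) -> bool:
        """Testing resumes only once written authorisation to continue is recorded."""
        return (not self.triggered) or (self.resume_authorised_by is not None)

register = [
    Risk(
        description="SQL injection probes against production may produce malformed test rows",
        likelihood="Medium",
        impact="High",
        mitigation="read-only payloads first; write payloads only against the staging tenant",
        stop_condition="any sign of write impact triggers an immediate stop-test",
    ),
]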
10. Change log and plan version history
The plan is a living document during the engagement. Scope changes, schedule shifts, methodology adjustments, and team rotations all get captured here. Plans without a change log read, after the engagement, as evidence the team did not notice the changes that happened.
Plan version history:
Version 1.0, dated {{INITIAL_PLAN_DATE}}
- Initial plan authored by {{ENGAGEMENT_LEAD_NAME}}, peer reviewed by {{PEER_REVIEWER_NAME}}.
- Acknowledged by client representative {{CLIENT_ACK_NAME}} on {{CLIENT_ACK_DATE}}.
Version {{NEXT_VERSION}}, dated {{NEXT_VERSION_DATE}}
- Reason for change: {{CHANGE_REASON}} (for example: SOW change order to add an in-scope API)
- Impact: {{CHANGE_IMPACT}} (for example: additional 2 tester-days, schedule extension by 1 business day, methodology categories WSTG-APIT added)
- Reference to change order: {{CHANGE_ORDER_REFERENCE}}, executed {{CHANGE_ORDER_DATE}}
- Acknowledged by client representative {{CHANGE_ACK_NAME}} on {{CHANGE_ACK_DATE}}
Repeat the version block above for each plan version produced during the engagement. Findings reference the plan by version number rather than by date: a finding raised while plan version 1.2 was active references version 1.2, the version in force on the date the finding was raised.
11. Retest plan and acceptance criteria
Where retest is in scope per the SOW, the retest plan defines what gets retested, how the team validates remediation, and what counts as resolved. Retest acceptance criteria are agreed up front rather than improvised after the initial test closes; this protects the firm from retesting at a loss and the client from a retest that signs off issues that are not actually resolved.
Retest scope (where the SOW provides for a retest within an agreed window):
- Findings in scope for retest: all critical and high severity findings produced under this engagement, plus medium severity findings on assets where the client has indicated a remediation push.
- Findings out of scope for retest: informational findings, accepted-risk findings, and findings the client has indicated will be tracked through their internal vulnerability management programme rather than retested under this engagement.
Retest acceptance criteria per finding:
- The originating evidence steps no longer reproduce the issue (the technical condition is gone).
- Where the remediation is a compensating control rather than a root-cause fix, the compensating control is verified to actually mitigate the original exploitability path, not merely change the surface signature of the finding.
- The remediation has been applied to all affected assets, not only the asset the finding was originally produced against.
- A reproducibility note records the retest steps and the evidence that supports the retest outcome.
Retest schedule:
- Remediation window from final report acceptance to retest readiness: {{REMEDIATION_WINDOW}}
- Retest window dates: {{RETEST_WINDOW_DATES}}
- Retest team (typically the originating tester or a peer-reviewed equivalent): {{RETEST_TEAM}}
- Retest deliverable: addendum to the final report covering the retest outcome per finding, plus an updated attestation if the engagement scope includes one.
Retest outcomes per finding:
- Resolved: technical condition is gone, evidence supports it, finding closed in the workspace.
- Resolved with compensating control: original condition is mitigated by a control rather than fixed at root, severity adjusted, finding remains visible at adjusted severity.
- Not resolved: remediation incomplete or insufficient, finding remains open, retest report explains what is still required.
- Not retested in scope: finding falls outside the retest scope above, finding remains in the workspace at its original severity for the client to track.
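A minimal sketch of how the retest scope rule and the four outcomes above might be applied to the finding set. The severity labels, the remediation_push flag, and the outcome names are illustrative assumptions drawn from the text, not part of the copy-ready template; accepted-risk findings are treated here as tracked internally.

from enum import Enum

class RetestOutcome(Enum):
    RESOLVED = "resolved"
    RESOLVED_WITH_COMPENSATING_CONTROL = "resolved with compensating control"
    NOT_RESOLVED = "not resolved"
    NOT_RETESTED_IN_SCOPE = "not retested in scope"

def in_retest_scope(severity: str, remediation_push: bool, tracked_internally: bool) -> bool:
    """Retest scope rule: all critical and high findings, plus mediums on assets with a
    client remediation push, excluding informational and internally tracked findings."""
    if tracked_internally or severity == "informational":
        return False
    if severity in ("critical", "high"):
        return True
    return severity == "medium" and remediation_push

# A medium finding on an asset with a remediation push is retested; a low finding is not.
assert in_retest_scope("medium", remediation_push=True, tracked_internally=False)
assert not in_retest_scope("low", remediation_push=True, tracked_internally=False)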
12. Sign-off and acknowledgement
Closing block that captures the firm's internal sign-off and the client acknowledgement. The client acknowledgement is not a contract amendment; it is a confirmation that the operational decomposition matches the agreed scope and rules.
Authored by:
Engagement lead: {{ENGAGEMENT_LEAD_NAME}}, {{ENGAGEMENT_LEAD_TITLE}}
Signature: ____________________________
Date: ________________________________
Peer-reviewed by:
Peer reviewer: {{PEER_REVIEWER_NAME}}, {{PEER_REVIEWER_TITLE}}
Signature: ____________________________
Date: ________________________________
Acknowledged by:
Client representative: {{CLIENT_ACK_NAME}}, {{CLIENT_ACK_TITLE}}
Signature: ____________________________
Date: ________________________________
Acknowledgement scope: the client representative confirms that the operational decomposition above aligns with the executed Statement of Work, Rules of Engagement, and engagement letter, and that the schedule, contacts, and entry criteria reflect the client environment as at the date of acknowledgement. This acknowledgement is not a variation of the SOW; variations are handled through the change control mechanism in the SOW and reflected in the change log in Section 10 of this plan.
How to use this template
Confirm the executed statement of work, rules of engagement, and engagement letter are in place. The test plan is the operational decomposition of those documents, not a substitute for them. If any of the three is unsigned, the plan cannot reach client acknowledgement.
Use the validated scoping output to populate Section 3 (in-scope assets) and Section 4 (team and tester-days). The pentest scoping calculator produces the asset count, complexity, and depth assumptions the plan inherits.
Map each in-scope asset in Section 3 to the methodology categories in Section 5. Categories without a record are coverage gaps at peer review; categories that fail to map back to an in-scope asset are scope creep.
Build the schedule in Section 6 against the testing window in the engagement letter. The schedule is what the daily standup and the client status report measure themselves against. The vulnerability remediation SLA calculator produces the severity rubric the active testing phase will report against.
Run a peer review against the entry and exit criteria in Section 7 before the plan goes to the client. Plans without explicit entry and exit criteria slide into the next phase while the previous phase is still incomplete; this is one of the most common causes of engagement overrun.
Where the engagement runs under a regulated scheme (CHECK, CREST OVS, CREST STAR, FedRAMP, DORA TLPT, TIBER-EU), confirm that the named team in Section 4 matches the named team in the engagement letter and that the methodology references in Section 5 align with the scheme expectations.
Treat the plan as a living document. Scope changes, schedule shifts, methodology adjustments, and team rotations get captured in the change log in Section 10. Plans without a change log read, after the engagement, as evidence the team did not record the changes that happened.
Methodology and scheme references
PTES Section 1.7 (Pre-engagement test plan) is the upstream reference for the operational decomposition this template produces. The SecPortal PTES framework page covers the methodology with operator-first context.
NIST SP 800-115 Section 6 (Security Assessment Planning) covers the technical assessment plan in equivalent depth. The SecPortal NIST SP 800-115 framework page walks the standard against testing-firm practice.
OWASP Web Security Testing Guide categories sit behind Section 5.A of the template. The SecPortal OWASP framework page covers the OWASP Top 10 mapping that complements the WSTG categories.
CREST Defensible Penetration Test specification expects evidence of a written plan with named accredited testers and a peer review trail. The SecPortal CREST penetration testing framework page covers the scheme-specific expectations the plan needs to align with.
For TIBER-EU and DORA TLPT engagements in financial services, see the SecPortal TIBER-EU framework page and the DORA framework page. Both schemes expect a documented test plan as part of the engagement evidence package.
Where the test plan sits in the engagement
The clean paper trail for a regulated penetration testing engagement runs RFP, proposal, SOW, ROE, engagement letter, test plan, kickoff, active testing, debrief, final report, retest. The test plan is the operational decomposition that converts the contract documents into the work the team will do. It pairs with the kickoff meeting where the client representative acknowledges the plan in person and the debrief meeting where the team reads the outcome back against the plan it was run from.
This template is provided as a starting point for a penetration testing test plan. It is not a substitute for the SOW, the ROE, or the engagement letter, and it does not constitute legal advice. Have the final plan reviewed by the engagement lead and the firm peer reviewer, and aligned with the methodology references appropriate to the scheme or framework the engagement is run under.