Incident Response Tabletop Exercise Template: one package for charter, scenarios, injects, decisions, scoring, and after-action review
A free, copy-ready incident response tabletop exercise template. Twelve structured sections covering exercise charter and scope, roles and responsibilities, scenario selection criteria with an eight-lane scenario library, pre-exercise read-ahead pack, exercise structure and timing, inject schedule with technical and business pressure, decision capture template, observer scoring rubric across six dimensions, after-action report template, action item ledger, governance review cadence, and evidence pack with audit trail. Aligned with ISO/IEC 27001 Annex A 5.24, A 5.26, and A 5.27, SOC 2 CC7.4 and CC7.5, PCI DSS Requirement 12.10.2, NIST SP 800-61 Rev. 2 Section 3.2, NIST SP 800-53 IR-2 and IR-3, HIPAA 164.308(a)(7), NIS2 Article 21, DORA Articles 25 and 26, and the standard expectations under FedRAMP and HITRUST.
Run the tabletop programme on the same record as the rest of the security work
SecPortal carries the exercise charter, the scenario pack, the after-action report, and the action item ledger on one workspace so the audit read of incident response testing and the operational read are the same record.
Free plan available forever. No credit card required.
Twelve sections that turn a tabletop discussion into defensible audit evidence
An incident response tabletop exercise is the discussion-based simulation in which the people who would respond to a real security incident walk through a realistic scenario together, declare the decisions they would make, invoke the authority the playbook assigns, and produce a durable record the audit and the leadership can read after the fact. The twelve sections below cover the durable shape of the artefact across ISO/IEC 27001 Annex A 5.24, A 5.26, and A 5.27, SOC 2 CC7.4 and CC7.5, PCI DSS Requirement 12.10.2, NIST SP 800-61 Rev. 2 Section 3.2, NIST SP 800-53 IR-2 and IR-3, HIPAA 164.308(a)(7), NIS2 Article 21, DORA Articles 25 and 26, FedRAMP, and HITRUST. Copy the section that fits your stage and paste the rest as you go.
The package is not a substitute for the operational incident response workflow that runs live incidents day to day, the playbook entries that govern each scenario class, or the evidence retention policy that classifies the records the exercise produces. Pair it with the incident response workflow for the live-incident operating model, the incident response plan guide for the underlying plan structure the tabletop tests, the audit evidence retention policy template for the retention class that governs the tabletop evidence pack, and the risk acceptance form template for the per-decision artefact the exercise sometimes surfaces (an accepted gap that has to land on the formal register rather than in the after-action report).
Copy the full exercise package (all twelve sections) as one block.
1. Exercise charter and scope
Open the package with the boundary and the authority. A reviewer should know in the first paragraph which scenario class the exercise covers, which estate is in scope, which audience attends, and which framework expectations the exercise evidences. ISO/IEC 27001 Annex A 5.24 and Clause 5.3 expect documented information security incident management with named authority; this opening section is what makes the tabletop traceable to the wider incident response programme rather than a one-off meeting.
Exercise title: {{EXERCISE_TITLE}}
Exercise reference: {{EXERCISE_REFERENCE}}
Exercise date and time: {{EXERCISE_DATE_AND_TIME}}
Duration: {{EXERCISE_DURATION}}
Location or platform: {{LOCATION_OR_VIDEO_PLATFORM}}
Scenario class (one of: ransomware, cloud control plane compromise, customer data breach, third-party breach, phishing-driven account takeover, insider misuse, source code or build system compromise, denial of service or extortion):
- {{SCENARIO_CLASS}}
Scenario summary (plain language, two to three sentences):
- {{SCENARIO_SUMMARY}}
Estate in scope:
- Business units, geographies, and tenants in scope: {{IN_SCOPE_BUSINESS_UNITS}}
- Systems and platforms referenced in the scenario: {{IN_SCOPE_SYSTEMS_AND_PLATFORMS}}
- Customer or third-party impact in scope: {{IN_SCOPE_CUSTOMER_OR_THIRD_PARTY_IMPACT}}
Audience and audience role:
- Participants attending as decision-makers: {{PARTICIPANT_DECISION_MAKERS}}
- Observers attending to score: {{OBSERVERS}}
- Scribes attending to capture: {{SCRIBES}}
- Facilitator: {{FACILITATOR}}
Framework expectations evidenced by this exercise (ISO/IEC 27001 Annex A 5.24, A 5.26, A 5.27; SOC 2 CC7.4 and CC7.5; PCI DSS Requirement 12.10.2; NIST SP 800-61 Rev. 2 Section 3.2; NIST SP 800-53 IR-2 and IR-3; HIPAA 164.308(a)(7); NIS2 Article 21; DORA Articles 25 and 26; FedRAMP; HITRUST; sector-specific overlays; internal policy):
- {{FRAMEWORK_EXPECTATIONS_LIST}}
Out of scope (explicit boundaries the facilitator will not allow the discussion to cross):
- {{OUT_OF_SCOPE_BOUNDARIES}}
- This is a discussion-based exercise. No live systems are touched, no production data is at risk, no customer-facing communications go out, and no real third parties are contacted.
Approving authority (executive sponsor who signed the exercise charter):
- Name: {{APPROVING_AUTHORITY_NAME}}
- Role: {{APPROVING_AUTHORITY_ROLE}}
- Approval date: {{APPROVAL_DATE}}
2. Roles and responsibilities
Name the people who carry the exercise. Tabletops that float without named roles drift the moment the discussion starts. ISO/IEC 27001 Clause 5.3 expects roles and authorities for the information security management system to be documented; the tabletop role assignment is the discrete artefact that operationalises that expectation for the exercise itself. Keep facilitation, scoring, and decision-making strictly separate so the facilitator does not pull the team toward a preferred answer and the observers can score the response rather than participate in it.
Exercise sponsor (executive authority who chartered the exercise; signs the after-action report):
- Name: {{SPONSOR_NAME}}
- Role: {{SPONSOR_ROLE}}
- Function: {{SPONSOR_FUNCTION}}
Facilitator (runs timing, releases injects, prompts decision points, stays neutral; rotated at least annually between an internal trained facilitator and an external party so the firm is not always running the exercise from the same chair):
- Name: {{FACILITATOR_NAME}}
- Role: {{FACILITATOR_ROLE}}
- Internal or external: {{FACILITATOR_INTERNAL_OR_EXTERNAL}}
Scribes (capture decisions, timing, and authority invoked in the decision capture template; do not participate in the discussion):
- Names: {{SCRIBE_NAMES}}
Observers (score against the rubric in Section 8; do not participate in the discussion):
- Names and role group: {{OBSERVER_NAMES_AND_ROLE_GROUPS}}
Participants by core response role:
- Incident commander (runs the response in the scenario; would run it in a real incident): {{INCIDENT_COMMANDER}}
- Security operations responders: {{SECURITY_OPERATIONS_RESPONDERS}}
- Infrastructure and platform engineering: {{INFRASTRUCTURE_AND_PLATFORM}}
- Application engineering for affected estate: {{APPLICATION_ENGINEERING}}
- Identity and cloud control plane operators: {{IDENTITY_AND_CLOUD_OPERATORS}}
- Communications lead: {{COMMUNICATIONS_LEAD}}
- Legal counsel (internal): {{INTERNAL_COUNSEL}}
- Legal counsel (external, where the scenario warrants): {{EXTERNAL_COUNSEL}}
- Privacy officer or DPO: {{PRIVACY_OFFICER}}
- Customer success or account representative: {{CUSTOMER_REPRESENTATIVE}}
- Executive escalation contact (CISO, CEO, or delegated authority): {{EXECUTIVE_ESCALATION_CONTACT}}
Scenario-specific stakeholders (added based on the scenario class):
- HR (insider misuse scenarios): {{HR_REPRESENTATIVE}}
- Finance (fraud or extortion scenarios): {{FINANCE_REPRESENTATIVE}}
- Sales and customer success (customer-facing impact scenarios): {{SALES_AND_CUSTOMER_SUCCESS}}
- Regulator-relations (notification scenarios): {{REGULATOR_RELATIONS}}
- Vendor management (third-party breach scenarios): {{VENDOR_MANAGEMENT}}
- OT or ICS operators (operational technology scenarios): {{OT_ICS_OPERATORS}}
Authority discipline reminder (read at the start of the exercise):
- Decisions are made by the role the playbook names, not by seniority in the room.
- The facilitator does not vote on decisions.
- The observers do not participate in decisions; they score against the rubric.
- The scribes do not participate in decisions; they capture the record.
3. Scenario selection criteria and eight-lane scenario library
Pick the scenario from a defensible library rather than re-running last year. A defensible library covers eight scenario lanes so the rotation does not lean on a single archetype, and selects intensity within the lane based on programme maturity, recent change in the estate, and external threat shifts. Rotate so the same scenario is not run twice in two years. Significant change (new business line, new regulatory geography, material acquisition, new core platform) is itself a re-test trigger.
Selection criteria for this cycle (the facilitator records why this scenario was selected; reviewers should be able to read the rationale without follow-up):
- Maturity of the response capability for this lane: {{MATURITY_FOR_THIS_LANE}}
- Recent estate change that this scenario stress-tests: {{RECENT_ESTATE_CHANGE}}
- External threat intelligence informing the lane: {{THREAT_INTELLIGENCE_INPUT}}
- Audit or regulatory pressure informing the lane: {{REGULATORY_PRESSURE}}
- Last time this lane was exercised (date and reference): {{LAST_LANE_EXERCISE_DATE}}
- Last time this specific scenario was exercised (date and reference): {{LAST_SCENARIO_EXERCISE_DATE}}
Eight-lane scenario library (rotate across cycles; the same scenario does not repeat within two years):
Lane 1: Ransomware affecting production estate.
- Scenarios at increasing intensity: workstation fleet only; workstation plus server fleet; production plus backup repository; production, backup, and OT or ICS estate where in scope.
- Decision pressure: containment versus availability, ransom decision authority, regulator notification, customer notification, public disclosure timing.
Lane 2: Cloud control plane compromise.
- Scenarios at increasing intensity: leaked IAM key for a low-privilege role; console session hijack on a privileged operator; root or organisation-level identity provider compromise; multi-tenant cloud account-level compromise affecting customer estates.
- Decision pressure: blast-radius assessment, identity provider isolation, MFA enforcement under stress, evidence preservation while rotating credentials.
Lane 3: Customer data breach with regulatory notification timelines.
- Scenarios at increasing intensity: limited-cohort PII exposure; large-cohort PII exposure with credit card data in scope; large-cohort PII exposure across multiple regulatory geographies (GDPR Article 33 seventy-two-hour clock, sector-specific notification windows, contractual customer notification obligations).
- Decision pressure: notification timing, scope determination, regulator coordination, customer messaging, contractual obligations.
Lane 4: Critical third-party breach.
- Scenarios at increasing intensity: a downstream vendor with limited data scope discloses publicly; a critical processor with active customer data discloses; a build system or signing authority discloses with potential supply-chain impact.
- Decision pressure: vendor coordination authority, data-impact assessment without vendor cooperation, customer notification independent of vendor timeline, regulator-grade disclosure decision.
Lane 5: Phishing-driven account takeover with downstream lateral movement.
- Scenarios at increasing intensity: single-account takeover detected by EDR; account takeover with privilege escalation and persistence; multi-account compromise with exfiltration of internal data.
- Decision pressure: containment without destroying evidence, identity provider posture decisions, communication to the affected employees, root-cause-analysis pressure.
Lane 6: Insider misuse or fraud.
- Scenarios at increasing intensity: data theft suspicion against a departing employee; financial fraud detected by anomaly detection on a privileged operator; coordinated insider misuse across multiple roles.
- Decision pressure: HR coordination, legal preservation requirements, evidence integrity over time, employee due process versus containment urgency.
Lane 7: Source code repository or build system compromise.
- Scenarios at increasing intensity: malicious dependency discovered in a non-critical product; signing key leak; build system compromise discovered upstream by a third party (the firm is the affected party rather than the discoverer).
- Decision pressure: customer notification on supply-chain risk, signing-authority rotation, deployment freeze, release rollback decision.
Lane 8: Denial of service or extortion against customer-facing platform.
- Scenarios at increasing intensity: volumetric DDoS against the public estate; targeted application-layer DDoS combined with extortion demand; sustained extortion campaign over multiple weeks.
- Decision pressure: extortion response authority, communication to customers, contractual obligations, regulator visibility on availability.
Selected scenario for this cycle:
- Scenario lane: {{SELECTED_LANE}}
- Scenario intensity level (within lane, lowest to highest): {{SELECTED_INTENSITY}}
- Scenario reference (numbered identifier in the library): {{SCENARIO_REFERENCE}}
- Scenario summary (the version that goes to participants in the read-ahead pack): {{SCENARIO_SUMMARY_FOR_READ_AHEAD}}
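Where the rotation rule needs to be checked rather than remembered, the selection criteria above reduce to a mechanical test against the exercise history. A minimal sketch in Python, assuming a hypothetical history of (date, lane, scenario reference) records; the names and data are illustrative, not part of the template:

```python
from datetime import date, timedelta

# Hypothetical exercise history: (exercise date, lane, scenario reference).
HISTORY = [
    (date(2023, 3, 10), "Lane 1", "L1-S2"),
    (date(2024, 2, 15), "Lane 3", "L3-S1"),
    (date(2024, 9, 20), "Lane 2", "L2-S3"),
]

ALL_LANES = tuple(f"Lane {i}" for i in range(1, 9))

def rotation_violation(candidate_ref: str, on: date, history=HISTORY) -> bool:
    """True if the candidate scenario ran within the last two years."""
    cutoff = on - timedelta(days=730)
    return any(ref == candidate_ref and d >= cutoff for d, _, ref in history)

def unexercised_lanes(history=HISTORY, all_lanes=ALL_LANES) -> list[str]:
    """Lanes with no exercise on record: the gaps that quietly accumulate."""
    exercised = {lane for _, lane, _ in history}
    return [lane for lane in all_lanes if lane not in exercised]

print(rotation_violation("L3-S1", on=date(2025, 1, 10)))  # True: ran 2024-02-15
print(unexercised_lanes())  # Lanes 4 to 8 have never been exercised
```

The same history drives the lane coverage metric the governance review reads in Section 11.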
4. Pre-exercise read-ahead pack
Send the read-ahead pack five business days before the exercise so the participants come ready to make decisions rather than orient. The read-ahead is not a script, and it is not a hint at the scenario; it is the durable reference set the participants would reach for in a real incident. Send it once, capture acknowledgement, and avoid late additions that shift the discussion away from the scenario.
Read-ahead pack contents:
Incident response plan:
- Plan version and effective date: {{IR_PLAN_VERSION_AND_DATE}}
- Plan owner: {{IR_PLAN_OWNER}}
- Plan storage location: {{IR_PLAN_STORAGE_LOCATION}}
Playbook references that the scenario is likely to invoke:
- {{PLAYBOOK_REFERENCE_1}}
- {{PLAYBOOK_REFERENCE_2}}
- {{PLAYBOOK_REFERENCE_3}}
- {{PLAYBOOK_REFERENCE_4}}
- {{PLAYBOOK_REFERENCE_5}}
Runbook entries that the scenario may reach into:
- {{RUNBOOK_ENTRY_1}}
- {{RUNBOOK_ENTRY_2}}
- {{RUNBOOK_ENTRY_3}}
Contact tree:
- Internal escalation tree: {{INTERNAL_ESCALATION_TREE}}
- External escalation tree: {{EXTERNAL_ESCALATION_TREE}}
- Customer success contact tree: {{CUSTOMER_SUCCESS_CONTACT_TREE}}
- Vendor management contact tree: {{VENDOR_CONTACT_TREE}}
- Regulator-relations contact tree: {{REGULATOR_CONTACT_TREE}}
Communication templates the scenario is likely to draw on:
- Internal stakeholder briefing template: {{INTERNAL_BRIEFING_TEMPLATE_REFERENCE}}
- Customer notification template: {{CUSTOMER_NOTIFICATION_TEMPLATE_REFERENCE}}
- Regulator notification template: {{REGULATOR_NOTIFICATION_TEMPLATE_REFERENCE}}
- Media holding statement template: {{MEDIA_HOLDING_STATEMENT_REFERENCE}}
Prior incident summaries that pair with the scenario class:
- Reference 1 (date, scenario, outcome, action items): {{PRIOR_INCIDENT_REFERENCE_1}}
- Reference 2: {{PRIOR_INCIDENT_REFERENCE_2}}
Prior tabletop after-action reports for this lane:
- Reference 1 (date, scenario, key action items, closure status): {{PRIOR_TABLETOP_REFERENCE_1}}
- Reference 2: {{PRIOR_TABLETOP_REFERENCE_2}}
Read-ahead acknowledgement:
- Each participant acknowledges receipt and review by: {{ACKNOWLEDGEMENT_DEADLINE}}
- Acknowledgement is captured in: {{ACKNOWLEDGEMENT_RECORD_LOCATION}}
Out-of-band reading (background, optional):
- Recent industry incident reports relevant to the lane: {{INDUSTRY_INCIDENT_REPORTS}}
- Threat intelligence summaries: {{THREAT_INTELLIGENCE_SUMMARIES}}
- Regulatory updates relevant to the lane: {{REGULATORY_UPDATE_SUMMARIES}}
5. Exercise structure and timing
Design the structure to produce decision pressure rather than to fill the calendar. A defensible structure runs ninety minutes to four hours depending on intensity, with explicit transitions between briefing, scenario walkthrough, decision points, hot wash, and adjournment. Long runs with no transitions degrade into general discussion; short runs with no debrief leave the action items unspoken. Hold the timing tightly so the exercise produces a record that survives the day.
Total duration: {{TOTAL_DURATION}}
Format: in-person, hybrid, or fully remote: {{FORMAT}}
Recording (audio, video, or written): {{RECORDING_FORMAT}}
Recording retention: {{RECORDING_RETENTION_RULE}}
Phase 1: Pre-exercise check-in (10 minutes).
- Facilitator confirms attendance, role assignments, scribe and observer setup, and the discussion-only ground rule.
- Sponsor opens with one to two minutes on why this exercise, what is in scope, and what is out of scope.
Phase 2: Scenario opening brief (10 minutes).
- Facilitator releases the opening situation: detection event, immediate context, and the first decision pressure.
- Scribes start the decision capture timer.
- No injects yet.
Phase 3: First decision window (20 to 30 minutes).
- Participants orient against the playbook, declare the immediate decisions, and brief the scribe.
- Facilitator releases the first one to two technical injects on a fifteen-minute cadence.
- Observers score decision speed, authority discipline, and playbook fidelity.
Phase 4: Pressure cycle (30 to 60 minutes).
- Facilitator alternates technical injects (every fifteen to thirty minutes), business injects (customer pressure, regulator inquiry, internal executive pressure), and time injects (compressed clocks, deadlines).
- One or two discretionary injects are held by the facilitator and released only if the team is performing strongly.
- Decision points are explicit (the facilitator pauses and asks for the decision and the authority).
Phase 5: Recovery and disclosure decisions (20 to 40 minutes).
- Facilitator transitions to the post-containment phase: recovery decisions, disclosure decisions, customer messaging decisions.
- Observers score communication quality, evidence preservation, and continuous improvement.
Phase 6: Hot wash (15 minutes).
- Facilitator pauses the scenario.
- Each participant gives one strength they observed in the team, one gap they observed in the response, and one action item they would commit to before the next exercise.
- Scribes capture the hot wash verbatim into the action item ledger draft.
Phase 7: Adjournment (5 minutes).
- Facilitator closes; sponsor confirms the after-action report cadence (draft within five business days, comments within ten, signed final within fifteen).
- Observers schedule a thirty-minute debrief in the next two business days to align on rubric scores before the after-action report is drafted.
Total time budget: {{TIMING_TOTAL}}
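Because each phase above is fixed or bounded, the total time budget follows arithmetically and can be sanity-checked against the ninety-minute to four-hour band. A minimal sketch using the phase bounds from this section:

```python
# Phase plan from this section (minutes; low and high bounds per phase).
PHASES = {
    "Pre-exercise check-in": (10, 10),
    "Scenario opening brief": (10, 10),
    "First decision window": (20, 30),
    "Pressure cycle": (30, 60),
    "Recovery and disclosure": (20, 40),
    "Hot wash": (15, 15),
    "Adjournment": (5, 5),
}

low = sum(lo for lo, _ in PHASES.values())
high = sum(hi for _, hi in PHASES.values())
print(f"Time budget: {low} to {high} minutes")  # 110 to 170 minutes

# The plan must sit inside the ninety-minute to four-hour band.
assert 90 <= low and high <= 240, "plan drifts outside the defensible band"
```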
6. Inject schedule
Design the inject schedule to mirror the decision pressure of a real incident. Layer technical, business, and time injects on separate clocks so the team experiences pressure on multiple axes at once. Rehearse the timing before the exercise so the cadence produces decision pressure rather than overwhelm. Hold one or two discretionary injects so the facilitator can pace to the team.
Inject schedule (the facilitator releases each at the planned moment; scribes record release time and team response time):
Inject 0 (T+0): Detection event opens the scenario.
- Source: {{INJECT_0_SOURCE}}
- Content: {{INJECT_0_CONTENT}}
- Expected decision pressure: initial triage, playbook reach, immediate escalation.
Inject 1 (T+15 to T+30): First technical inject.
- Source: {{INJECT_1_SOURCE}}
- Content: {{INJECT_1_CONTENT}}
- Expected decision pressure: scope assessment, containment posture, evidence preservation.
Inject 2 (T+30 to T+45): First business inject.
- Source: customer, internal executive, partner, or sales.
- Content: {{INJECT_2_CONTENT}}
- Expected decision pressure: brief cadence, message accuracy, authority on external communication.
Inject 3 (T+45 to T+60): Second technical inject (containment failure or escalation).
- Source: {{INJECT_3_SOURCE}}
- Content: {{INJECT_3_CONTENT}}
- Expected decision pressure: alternative containment path, vendor escalation, executive escalation.
Inject 4 (T+60 to T+75): Time inject (compressed clock).
- Source: deadline imposed by external party.
- Content: {{INJECT_4_CONTENT}}
- Expected decision pressure: prioritisation under time pressure, decision authority under uncertainty.
Inject 5 (T+75 to T+90): Regulator or legal inquiry.
- Source: regulator-relations or external counsel.
- Content: {{INJECT_5_CONTENT}}
- Expected decision pressure: notification authority, scope determination, statement preparation.
Inject 6 (T+90 to T+105): Recovery decision pressure.
- Source: business owner of the affected estate.
- Content: {{INJECT_6_CONTENT}}
- Expected decision pressure: recovery path, customer-facing messaging, evidence preservation through recovery.
Discretionary inject A (held; released only if the team is performing strongly):
- Source: {{INJECT_A_SOURCE}}
- Content: {{INJECT_A_CONTENT}}
Discretionary inject B (held; released only if the team is performing strongly):
- Source: {{INJECT_B_SOURCE}}
- Content: {{INJECT_B_CONTENT}}
Inject delivery format:
- Verbal narration with printed handout: {{VERBAL_AND_HANDOUT_INJECTS}}
- Simulated email or chat message: {{SIMULATED_EMAIL_INJECTS}}
- Simulated dashboard screenshot or telemetry capture: {{SIMULATED_TELEMETRY_INJECTS}}
- Phone call from observer (in-character): {{PHONE_CALL_INJECTS}}
Facilitator rehearsal:
- Inject schedule rehearsed with scribes and observers at: {{REHEARSAL_DATE}}
- Timing reviewed against the duration plan in Section 5.
- Discretionary injects flagged with release criteria.
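For facilitators who hold the schedule as structured data rather than a document, the inject list above maps onto a small record type. A sketch under assumed field names (nothing here is a fixed schema); the check at the end enforces the rehearsal rule that discretionary injects carry explicit release criteria:

```python
from dataclasses import dataclass

@dataclass
class Inject:
    """One scheduled inject; the field names are assumed, not a fixed schema."""
    number: str
    kind: str                    # "technical", "business", "time", "regulatory"
    planned_offset_min: int      # planned release, minutes after T+0
    discretionary: bool = False
    release_criteria: str = ""   # required when discretionary is True
    actual_offset_min: int | None = None  # scribes record the actual release

SCHEDULE = [
    Inject("0", "technical", 0),
    Inject("1", "technical", 15),
    Inject("2", "business", 30),
    Inject("3", "technical", 45),
    Inject("4", "time", 60),
    Inject("5", "regulatory", 75),
    Inject("6", "business", 90),
    Inject("A", "technical", 0, discretionary=True,
           release_criteria="release only if the team is performing strongly"),
]

# Rehearsal check: every discretionary inject carries explicit release criteria.
for inj in SCHEDULE:
    assert not inj.discretionary or inj.release_criteria, f"Inject {inj.number}"
```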
7. Decision capture template
Capture each decision with the same structure so the after-action report is reproducible rather than reconstructed. The decision capture is the durable record of how the team operated under pressure; sloppy capture produces sloppy lessons. Hold scribes to the structure during the exercise rather than at the end.
Decision capture entry (one row per decision; scribes record in real time during the exercise):
Decision number: {{DECISION_NUMBER}}
Time elapsed (since T+0): {{TIME_ELAPSED}}
Inject reference (which inject prompted the decision; null if pre-emptive): {{INJECT_REFERENCE}}
Decision question (what did the team have to decide):
- {{DECISION_QUESTION}}
Decision made (what did the team decide):
- {{DECISION_MADE}}
Authority invoked (which role made the decision; which playbook page authorises the role to make this decision):
- Role making the decision: {{ROLE_MAKING_DECISION}}
- Playbook page reference: {{PLAYBOOK_PAGE_REFERENCE}}
- Authority chain (if escalated): {{AUTHORITY_CHAIN}}
Open question (what could the team not close in the room):
- {{OPEN_QUESTION}}
- Owner of follow-up: {{OPEN_QUESTION_OWNER}}
- Target close date: {{OPEN_QUESTION_TARGET_DATE}}
Evidence preservation note (what evidence did the team commit to preserve before taking the decision):
- {{EVIDENCE_PRESERVATION_NOTE}}
Communication committed (which stakeholders did the team commit to brief; on what cadence):
- {{COMMUNICATION_COMMITMENT}}
Observer notes (one or two sentences from the assigned observer, captured during the exercise):
- {{OBSERVER_NOTES}}
Capture format:
- Use one entry per decision; do not bundle multiple decisions into one row.
- Capture in real time; reconstruction after the fact is the most common audit gap.
- The scribe reads the entry back to the facilitator at the next inject release for accuracy confirmation.
Decision register (running list of all decisions taken in the exercise):
- Total decisions: {{TOTAL_DECISIONS}}
- Decisions inside playbook fidelity (the team reached the right playbook page): {{DECISIONS_INSIDE_PLAYBOOK}}
- Decisions outside playbook fidelity (the team invented a path): {{DECISIONS_OUTSIDE_PLAYBOOK}}
- Decisions deferred or escalated (the team did not close in the room): {{DECISIONS_DEFERRED}}
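The register tallies above fall out mechanically once each capture row is structured. A minimal sketch, assuming hypothetical field names for the capture row; the data is illustrative:

```python
from dataclasses import dataclass

@dataclass
class DecisionRow:
    """One decision capture row; field names are assumed, not a fixed schema."""
    number: int
    elapsed_min: int           # time since T+0 when the decision was declared
    playbook_page: str | None  # None when the team invented a path
    deferred: bool = False     # True when the decision did not close in the room

REGISTER = [
    DecisionRow(1, 12, "PB-RW-04"),
    DecisionRow(2, 31, None),                       # outside playbook fidelity
    DecisionRow(3, 48, "PB-RW-07", deferred=True),  # escalated out of the room
]

inside = sum(1 for d in REGISTER if d.playbook_page and not d.deferred)
outside = sum(1 for d in REGISTER if d.playbook_page is None)
deferred = sum(1 for d in REGISTER if d.deferred)
print(f"Total {len(REGISTER)}: {inside} inside, {outside} outside, {deferred} deferred")
```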
8. Observer scoring rubric
Score against the response, not against the people. A defensible rubric covers six dimensions, each scored on a four-point scale. The score records observable behaviour during the exercise; it is not a personal review and it is not made public. The score is the durable input to the after-action report and to the next-cycle scenario selection.
Scoring scale (four-point):
- 4: Strong. The team operated against the documented response with discipline and produced evidence the audit will read favourably.
- 3: Adequate. The team produced a defensible response with minor gaps that the after-action report addresses with action items.
- 2: Developing. The team made the right decisions in some places and missed in others; structural fixes are required before the next exercise in this lane.
- 1: Weak. The team did not operate against the documented response in this dimension; structural intervention is required before the next exercise in any lane.
Dimension 1: Decision speed.
- Did the team make a defensible decision at each decision point inside the inject window.
- Score: {{SCORE_DECISION_SPEED}}
- Observer notes: {{NOTES_DECISION_SPEED}}
Dimension 2: Authority discipline.
- Did the team invoke the documented authority, escalate when authority was unclear, and avoid making decisions the playbook reserves to a different role.
- Score: {{SCORE_AUTHORITY_DISCIPLINE}}
- Observer notes: {{NOTES_AUTHORITY_DISCIPLINE}}
Dimension 3: Communication quality.
- Did the team brief stakeholders at the cadence the playbook expects, and did the brief carry the right facts without fabricated certainty.
- Score: {{SCORE_COMMUNICATION_QUALITY}}
- Observer notes: {{NOTES_COMMUNICATION_QUALITY}}
Dimension 4: Playbook fidelity.
- Did the team reach the playbook page that governs each decision, or did they invent a path that the playbook either contradicts or does not yet cover.
- Score: {{SCORE_PLAYBOOK_FIDELITY}}
- Observer notes: {{NOTES_PLAYBOOK_FIDELITY}}
Dimension 5: Evidence preservation.
- Did the team preserve the evidence the post-incident review will need, or did they take an action that destroyed evidence in the rush to contain.
- Score: {{SCORE_EVIDENCE_PRESERVATION}}
- Observer notes: {{NOTES_EVIDENCE_PRESERVATION}}
Dimension 6: Continuous improvement.
- Did the team capture the action items as they emerged, or did the action items have to be reconstructed from observer notes after the exercise.
- Score: {{SCORE_CONTINUOUS_IMPROVEMENT}}
- Observer notes: {{NOTES_CONTINUOUS_IMPROVEMENT}}
Aggregate score: {{AGGREGATE_SCORE}} of 24
Aggregate score interpretation:
- 22 to 24: programme is strong in this lane; rotate to a different lane next cycle.
- 18 to 21: programme is solid; address the dimension with the lowest score before re-running this lane.
- 12 to 17: programme has structural gaps; fix the lowest two dimensions before any further exercise.
- Below 12: programme is not yet ready for this lane intensity; drop to a lower-intensity scenario in the same lane and rebuild.
Observer alignment debrief:
- Observers meet within two business days of the exercise to align on rubric scores before the after-action report is drafted.
- Disagreements between observers are documented and resolved by averaging or by escalation to the sponsor.
- The aggregate score is the input to the after-action report; individual rubric notes feed the lessons learned in Section 9.
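The aggregate score and its interpretation band are deterministic once the six dimension scores are aligned. A minimal sketch that also surfaces the weakest dimension, which the 18-to-21 band asks the programme to address first:

```python
# Aligned rubric scores per dimension, on the four-point scale above.
SCORES = {
    "decision_speed": 3,
    "authority_discipline": 4,
    "communication_quality": 3,
    "playbook_fidelity": 2,
    "evidence_preservation": 4,
    "continuous_improvement": 3,
}

assert len(SCORES) == 6 and all(1 <= s <= 4 for s in SCORES.values())
aggregate = sum(SCORES.values())  # out of 24

def interpretation(score: int) -> str:
    if score >= 22:
        return "strong; rotate to a different lane next cycle"
    if score >= 18:
        return "solid; address the lowest dimension before re-running this lane"
    if score >= 12:
        return "structural gaps; fix the lowest two dimensions first"
    return "drop to a lower-intensity scenario in this lane and rebuild"

weakest = min(SCORES, key=SCORES.get)
print(f"{aggregate}/24: {interpretation(aggregate)} (weakest: {weakest})")
# 19/24: solid; address the lowest dimension... (weakest: playbook_fidelity)
```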
9. After-action report template
The after-action report is the durable artefact that survives the exercise. It is not meeting notes; it is the record that the audit, the leadership, and the next-cycle facilitator will read. Most enterprise programmes circulate a draft within five business days, capture comments within ten, and publish the signed final within fifteen so the action items are open well before the next cycle.
Report identification:
- Exercise reference: {{EXERCISE_REFERENCE}}
- Exercise date: {{EXERCISE_DATE}}
- Report version: {{REPORT_VERSION}}
- Report owner: {{REPORT_OWNER}}
- Distribution list: {{DISTRIBUTION_LIST}}
Section 1. Exercise summary.
- Scenario class and intensity: {{SCENARIO_CLASS_AND_INTENSITY}}
- Scenario summary (two to three sentences): {{SCENARIO_SUMMARY}}
- Participants by role (counts and named lead per role): {{PARTICIPANTS_BY_ROLE}}
- Framework expectations evidenced: {{FRAMEWORK_EXPECTATIONS}}
Section 2. Timeline of injects and decisions.
- Reproduce the inject schedule from Section 6 with actual release times.
- Reproduce the decision register from Section 7 with timing.
- Time-to-first-decision after Inject 0: {{TIME_TO_FIRST_DECISION}}
- Average decision interval across the exercise: {{AVERAGE_DECISION_INTERVAL}}
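Both timing metrics derive mechanically from the elapsed-time column of the decision register. A minimal sketch with illustrative values:

```python
# Elapsed times (minutes since T+0) from the decision register; illustrative.
decision_times = [12, 31, 48, 66, 83]

time_to_first = decision_times[0]
intervals = [b - a for a, b in zip(decision_times, decision_times[1:])]
avg_interval = sum(intervals) / len(intervals)

print(f"Time to first decision: T+{time_to_first} minutes")
print(f"Average decision interval: {avg_interval:.1f} minutes")  # 17.8
```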
Section 3. Observed strengths.
- Three to five strengths the team demonstrated, supported by observer notes and rubric scores.
- Strength 1: {{STRENGTH_1}}
- Strength 2: {{STRENGTH_2}}
- Strength 3: {{STRENGTH_3}}
- Strength 4: {{STRENGTH_4}}
- Strength 5: {{STRENGTH_5}}
Section 4. Observed gaps.
- Three to five structural gaps in the response, supported by observer notes and rubric scores; not personal performance.
- Gap 1 (with rubric dimension and severity): {{GAP_1}}
- Gap 2: {{GAP_2}}
- Gap 3: {{GAP_3}}
- Gap 4: {{GAP_4}}
- Gap 5: {{GAP_5}}
Section 5. Action items.
- Numbered list of action items emerging from the exercise; pair the format to Section 10.
- Each item carries: action, owner, target close date, expected evidence of closure, severity (critical, high, medium, low).
Section 6. Lessons learned for the playbook and the plan.
- Concrete edits committed for the incident response plan: {{PLAN_EDITS}}
- Concrete edits committed for the playbooks: {{PLAYBOOK_EDITS}}
- Concrete edits committed for the runbooks: {{RUNBOOK_EDITS}}
- Concrete edits committed for the communication templates: {{COMMUNICATION_TEMPLATE_EDITS}}
- Concrete edits committed for the escalation tree: {{ESCALATION_TREE_EDITS}}
- Concrete edits committed for the training programme: {{TRAINING_EDITS}}
- Concrete edits committed for the tooling: {{TOOLING_EDITS}}
Section 7. Acknowledgements and sign-off.
- Participants acknowledge the report (signature or attestation): {{PARTICIPANT_ACKNOWLEDGEMENTS}}
- Incident response plan owner signs off on action item commitments: {{PLAN_OWNER_SIGNATURE}}
- Sponsor signs the report at publication: {{SPONSOR_SIGNATURE}}
Cadence:
- Draft circulated within: {{DRAFT_DEADLINE}}
- Comments captured within: {{COMMENTS_DEADLINE}}
- Signed final published within: {{FINAL_DEADLINE}}
- Action items active in the workspace before: {{ACTION_ITEMS_ACTIVE_DATE}}
10. Action item ledger
Treat tabletop action items the same way the firm treats vulnerability findings. Every action item has an owner, a target close date, a severity, a closure evidence requirement, and a persistent reference that survives team changes. The tabletop programme is judged on closure rate against the previous cycle, not on the count of items raised; high-quality programmes raise fewer items each cycle as the playbook matures, and the items that do emerge close before the next exercise.
Action item entry (one per item; persistent in the workspace alongside vulnerability findings):
Action item ID: {{ACTION_ITEM_ID}}
Source exercise reference: {{SOURCE_EXERCISE_REFERENCE}}
Source rubric dimension (which observer dimension surfaced the item): {{SOURCE_RUBRIC_DIMENSION}}
Source decision register entry (which decision surfaced the item): {{SOURCE_DECISION_REGISTER_ENTRY}}
Action item description (concrete, owner-actionable; not a slogan):
- {{ACTION_ITEM_DESCRIPTION}}
Owner (named individual; not a team):
- {{ACTION_ITEM_OWNER}}
Severity (mirrors finding severity logic):
- Critical: would block recovery in a real incident.
- High: would slow recovery materially.
- Medium: would reduce response quality.
- Low: would polish the playbook or training.
- Selected severity: {{SEVERITY}}
Target close date: {{TARGET_CLOSE_DATE}}
- Critical and high items close before the next exercise in any lane.
- Medium items close before the next exercise in this lane.
- Low items close on the standard backlog cadence.
Expected evidence of closure (the artefact the closure review will read):
- {{CLOSURE_EVIDENCE_DESCRIPTION}}
Closure record:
- Closed (true or false): {{CLOSED}}
- Closure date: {{CLOSURE_DATE}}
- Closure approver: {{CLOSURE_APPROVER}}
- Closure evidence reference: {{CLOSURE_EVIDENCE_REFERENCE}}
Carry-over flag (true if the item survived from the previous cycle without closure):
- {{CARRY_OVER_FLAG}}
- Carry-over reason (structural blocker, deprioritisation, owner change): {{CARRY_OVER_REASON}}
- Escalation triggered (true or false): {{CARRY_OVER_ESCALATION}}
Programme-level metrics on the ledger:
- Items raised in this cycle: {{ITEMS_RAISED}}
- Items closed before next exercise: {{ITEMS_CLOSED}}
- Closure rate: {{CLOSURE_RATE}}
- Carry-over rate from previous cycle: {{CARRY_OVER_RATE}}
- Trend across cycles (improving, flat, deteriorating): {{TREND}}
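The programme-level metrics above reduce to simple ratios over the ledger rows, plus the escalation rule for open critical and high items. A minimal sketch with illustrative rows; the field layout is assumed, not prescribed:

```python
# Ledger rows: (item id, severity, closed before next exercise, carried over).
LEDGER = [
    ("TT-2025-001", "critical", True,  False),
    ("TT-2025-002", "high",     True,  False),
    ("TT-2025-003", "medium",   False, False),
    ("TT-2025-004", "low",      False, True),
]

raised = len(LEDGER)
closed = sum(1 for _, _, done, _ in LEDGER if done)
carried = sum(1 for _, _, _, carry in LEDGER if carry)
print(f"Closure rate {closed / raised:.0%}, carry-over rate {carried / raised:.0%}")

# Escalation rule: no critical or high item stays open past the next exercise.
open_blockers = [item for item, sev, done, _ in LEDGER
                 if sev in ("critical", "high") and not done]
assert not open_blockers, f"escalate before the next exercise: {open_blockers}"
```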
11. Governance review cadence
Two cadences operate in parallel. Performance review of the tabletop programme runs monthly to the security operations leader, quarterly to the audit committee, and annually to the full board; it answers whether the programme is hitting the cadence, the lane coverage, and the action item closure rate the policy publishes. Programme review against the underlying environment runs at least annually and is triggered by material change.
Performance review cadence:
- Monthly to security operations leader (action item closure status, upcoming exercise schedule, cadence drift): {{MONTHLY_REVIEWER}}
- Quarterly to audit committee (programme cadence, lane coverage rate, framework expectations evidenced, action item closure rate, carry-over rate): {{QUARTERLY_REVIEWER}}
- Annually to full board (programme maturity trend, cross-cycle improvement, framework attestation readiness): {{ANNUAL_REVIEWER}}
Programme review cadence:
- At least annual review of the eight-lane scenario library against the underlying environment.
- Triggered review when significant change occurs (new business line, new regulatory geography, material acquisition, new core platform, material litigation event, major external incident in the sector that reshapes the threat picture).
- Review owner: {{PROGRAMME_REVIEW_OWNER}}
- Review evidence retained: {{PROGRAMME_REVIEW_EVIDENCE_RETENTION}}
Performance metrics the audit committee receives quarterly:
- Number of exercises run in the quarter: {{EXERCISES_PER_QUARTER}}
- Lane coverage in the rolling twelve months (which lanes have been exercised): {{LANE_COVERAGE_LAST_12}}
- Aggregate rubric score trend across the rolling twelve months: {{AGGREGATE_SCORE_TREND}}
- Action item closure rate against target: {{ACTION_ITEM_CLOSURE_RATE}}
- Carry-over rate from prior cycles: {{CARRY_OVER_RATE_REPORTED}}
- Significant change events in the quarter that triggered programme review: {{TRIGGERED_REVIEWS}}
Programme metrics the full board receives annually:
- Programme maturity trend across the rolling thirty-six months.
- Lane coverage rate across the rolling twenty-four months.
- Framework attestation readiness (PCI DSS 12.10.2, ISO 27001 A 5.27, SOC 2 CC7.4 and CC7.5, NIST 800-53 IR-3, sector overlays).
- Average time-to-first-decision and average decision interval across the rolling twelve months.
- Notable cross-cycle lessons learned that landed in the response plan or in tooling.
Audit-readable artefacts retained per cycle:
- Exercise charter: {{CHARTER_RETAINED_LOCATION}}
- Inject schedule with timing record: {{INJECT_SCHEDULE_RETAINED_LOCATION}}
- Decision register: {{DECISION_REGISTER_RETAINED_LOCATION}}
- Observer rubric scores with notes: {{RUBRIC_RETAINED_LOCATION}}
- After-action report (signed final): {{AFTER_ACTION_REPORT_RETAINED_LOCATION}}
- Action item ledger entries with closure evidence: {{ACTION_ITEM_LEDGER_RETAINED_LOCATION}}
- Recording (where applicable): {{RECORDING_RETAINED_LOCATION}}
- Acknowledgement records: {{ACKNOWLEDGEMENT_RETAINED_LOCATION}}
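The lane coverage figure the audit committee reads each quarter is a rolling-window computation over the exercise log. A minimal sketch, assuming a hypothetical log of (date, lane) entries:

```python
from datetime import date, timedelta

# Hypothetical exercise log: (exercise date, lane).
LOG = [
    (date(2024, 11, 5), "Lane 1"),
    (date(2025, 2, 18), "Lane 3"),
    (date(2025, 6, 4), "Lane 5"),
]

def lane_coverage(log, as_of: date, window_days: int = 365, total_lanes: int = 8):
    """Lanes exercised in the rolling window, and the coverage rate."""
    cutoff = as_of - timedelta(days=window_days)
    covered = sorted({lane for d, lane in log if d >= cutoff})
    return covered, len(covered) / total_lanes

covered, rate = lane_coverage(LOG, as_of=date(2025, 9, 30))
print(f"Rolling twelve-month coverage: {covered} ({rate:.0%} of eight lanes)")
# ['Lane 1', 'Lane 3', 'Lane 5'] (38% of eight lanes)
```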
12. Evidence pack and audit trail
Sign the exercise charter and the after-action report at publication. The signature trail and the evidence pack are what make the tabletop defensible at audit; an exercise without a signed evidence pack is treated as informal training regardless of how seriously it was run on the day. Hold the evidence pack on the same workspace as the rest of the security record so the audit read of incident response testing and the operational read are the same record.
Exercise evidence pack contents:
Exercise charter (signed at publication):
- Sponsor signature: {{SPONSOR_SIGNATURE}}
- Sponsor signature date: {{SPONSOR_SIGNATURE_DATE}}
- Charter version: {{CHARTER_VERSION}}
- Charter storage location: {{CHARTER_STORAGE_LOCATION}}
Read-ahead pack acknowledgement records (per participant, per exercise):
- Acknowledgement records storage location: {{READ_AHEAD_ACKNOWLEDGEMENT_LOCATION}}
Inject schedule with actual release times:
- Schedule version: {{INJECT_SCHEDULE_VERSION}}
- Storage location: {{INJECT_SCHEDULE_STORAGE_LOCATION}}
Decision register (full record from Section 7):
- Storage location: {{DECISION_REGISTER_STORAGE_LOCATION}}
- Cross-reference to action item ledger: {{DECISION_REGISTER_CROSS_REFERENCE}}
Observer rubric scores and notes:
- Storage location: {{RUBRIC_STORAGE_LOCATION}}
- Aligned across observers (yes or no, with disagreement record): {{RUBRIC_ALIGNED}}
After-action report (signed final):
- Sign-offs (participants, plan owner, sponsor): {{AAR_SIGN_OFFS}}
- Sign-off dates: {{AAR_SIGN_OFF_DATES}}
- Report version: {{AAR_VERSION}}
- Storage location: {{AAR_STORAGE_LOCATION}}
Action item ledger:
- Items raised: {{LEDGER_ITEMS_RAISED}}
- Items closed before next exercise: {{LEDGER_ITEMS_CLOSED}}
- Carry-over items with escalation status: {{LEDGER_CARRY_OVER}}
- Storage location: {{LEDGER_STORAGE_LOCATION}}
Recording (audio, video, or written; subject to recording policy):
- Format: {{RECORDING_FORMAT}}
- Retention rule (per audit evidence retention policy): {{RECORDING_RETENTION_RULE}}
- Storage location: {{RECORDING_STORAGE_LOCATION}}
Audit trail capture:
- Exercise execution recorded in the activity log with timestamps and named participants.
- After-action report version history retained.
- Action item ledger entries paired to the source exercise reference and to the closure evidence.
- Read access to the evidence pack restricted to the role groups named in Section 11.
Cross-references to the wider programme:
- Audit evidence retention policy version this exercise sits under: {{RETENTION_POLICY_VERSION}}
- Vulnerability management programme record this exercise informs: {{VM_PROGRAMME_RECORD}}
- Compliance framework records this exercise evidences: {{COMPLIANCE_FRAMEWORK_RECORDS}}
- Security leadership reporting cycle this exercise feeds: {{LEADERSHIP_REPORTING_CYCLE}}
Acknowledgement of compliance:
- The exercise sits under the audit evidence retention policy classification for incident records.
- The action item ledger sits alongside the vulnerability findings ledger so the leadership read of action item closure rate and finding closure rate are one query.
- The framework expectations evidenced by this exercise are mapped to the compliance tracking record and exported alongside the rest of the framework evidence.
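A completeness check over the evidence pack catches the informal-training failure mode before the auditor does. A minimal sketch, with artefact names paraphrased from the Section 11 retention list; the naming is illustrative:

```python
# Artefact names paraphrased from the Section 11 retention list; illustrative.
REQUIRED = {
    "charter_signed",
    "read_ahead_acknowledgements",
    "inject_schedule_with_timing",
    "decision_register",
    "rubric_scores_with_notes",
    "after_action_report_signed",
    "action_item_ledger_with_closure_evidence",
    "recording_where_applicable",
}

def missing_artefacts(pack: set[str]) -> list[str]:
    """Anything missing downgrades the exercise to informal training at audit."""
    return sorted(REQUIRED - pack)

pack = {"charter_signed", "decision_register", "rubric_scores_with_notes",
        "after_action_report_signed", "action_item_ledger_with_closure_evidence"}
print(missing_artefacts(pack))
# ['inject_schedule_with_timing', 'read_ahead_acknowledgements',
#  'recording_where_applicable']
```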
Six failure modes the package has to design against
The tabletop programme fails the audit read in recognisable patterns. Each failure has a structural fix that the template above is designed to enforce. Read this list before you customise the template so the customisation does not weaken the discipline that makes the tabletop defensible.
Same scenario every year
The programme runs a ransomware scenario in year one, slightly larger ransomware in year two, and ransomware again in year three. The team learns the scenario rather than the response capability, and the lanes the firm has not exercised quietly accumulate gaps. The fix is the eight-lane library in Section 3 with a rotation rule that prevents the same scenario being run within two years.
Facilitator pulls the team to the answer
The facilitator is also the security operations leader and silently steers the team toward the response that they want to see. The exercise produces a clean record and a leadership briefing that does not match how the team would perform under real pressure. The fix is rotating facilitation between an internal trained facilitator and an external party (red team, legal counsel, vCISO) at least annually, plus the authority discipline reminder in Section 2.
Decision capture reconstructed from memory
The scribes do not capture in real time. After the exercise, the facilitator drafts the decision register from memory and from a few notes. The after-action report misses the timing record entirely, and the audit cannot read time-to-first-decision or average decision interval. The fix is the structured decision capture template in Section 7 with read-back to the facilitator at each inject release.
Action items raised but not closed
Each cycle raises a long list of action items; the next cycle starts with the same list. The exercise becomes a ritual that produces lists rather than a programme that improves the response. The fix is the action item ledger in Section 10 that tracks items the same way the firm tracks vulnerability findings, with severity-aligned target close dates and a closure rate metric the audit committee reads quarterly.
Recording without retention rule
The exercise is recorded; nobody decides how long the recording is retained, who can access it, or under what authority it is disposed. Recordings accumulate in shared storage indefinitely. The fix is naming the recording retention rule in Section 5 and Section 12, anchored to the audit evidence retention policy classification for incident records.
Tabletop runs as informal training rather than as governance evidence
No charter is signed. No after-action report is signed. No participants acknowledge. The audit asks for the tabletop record and receives a slide deck rather than a defensible evidence pack. The fix is the signed charter in Section 1, the signed after-action report in Section 9, the participant acknowledgement records in Section 12, and the audit-readable artefact retention list in Section 11.
Ten questions the quarterly governance review has to answer
Operational review keeps the programme on top of cadence drift, lane coverage, and action item closure. Governance review answers whether the programme is delivering durable response capability or accumulating gaps the audit will read as testing-on-paper. Run these ten questions at every quarterly review and capture the answers in the governance record.
1. How many tabletop exercises did the programme run in the rolling twelve months, and which of the eight lanes were exercised.
2. What was the aggregate rubric score per cycle, and is the trend line across cycles improving, flat, or deteriorating.
3. What is the action item closure rate against target, and how does it compare to the previous twelve months.
4. How many action items carried over from the previous cycle without closure, and what was the structural blocker.
5. What was the average time-to-first-decision after the opening inject, and how does that compare to the playbook expectation.
6. How many decisions in the period landed inside playbook fidelity versus outside, and what does the outside-fidelity pattern indicate about the playbook.
7. How many exercises required external facilitation, and what did the external facilitator surface that the internal facilitator had been missing.
8. How many significant change events in the period triggered an unscheduled exercise or programme review, and was the trigger followed.
9. How does the programme cadence map to PCI DSS 12.10.2, ISO 27001 A 5.27, SOC 2 CC7.4 and CC7.5, NIST 800-53 IR-3, and any sector-specific overlays in scope.
10. Which lanes are due for exercise in the next twelve months, and what scenarios in those lanes have not been run within the rolling twenty-four months.
How the package pairs with SecPortal
The template above is copy-ready as a standalone artefact. If your team already runs incident records, document custody, and finding tracking on a workspace, the tabletop evidence becomes a byproduct of the work rather than a separate evidence project. SecPortal pairs every tabletop session to a versioned engagement record through engagement management, so the charter, scenario reference, participant list, and framework expectations the exercise evidences live alongside the rest of the engagement record rather than scattered across a calendar invite, a slide deck, and a shared drive.
The document management feature holds the read-ahead pack, the inject schedule, the after-action report, and the scenario library. Access to each document is gated by role-based access control through team management and protected by multi-factor authentication. The activity log captures the timestamped chain of state changes by user with 30-, 90-, or 365-day retention windows depending on the plan, so the read, edit, and disposition history of every tabletop artefact is observable rather than asserted. Those activity records are themselves an evidence class governed by the audit evidence retention policy.
Action items emerging from the exercise live alongside vulnerability findings on the same workspace through findings management, each carrying a CVSS-aligned severity, a named owner, a target close date, and an evidence-of-closure requirement. The compliance tracking feature maps findings and the parent engagement to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export, so when an auditor asks how the firm tests the incident response plan, the after-action report and the action item ledger are one query against the same record. The AI report generation workflow produces a draft after-action report and a leadership summary from the same engagement data so the audit committee read of tabletop performance and the operational read are the same record rather than two independently edited documents that diverge between reporting cycles.
For the operational workflow that runs incidents day to day, see the incident response use case and the security leadership reporting workflow. For the underlying plan structure, see the incident response plan guide and the enterprise incident response at scale guide. For the operating playbook that walks through audience selection, inject design, decision capture, observer scoring, after-action discipline, and the cadence that turns one-off sessions into a programme, see the incident response tabletop exercise guide. The framework anchors live alongside their parent pages in ISO 27001, SOC 2, PCI DSS, and NIST SP 800-53. SecPortal does not facilitate the tabletop, run the exercise, simulate attacks against live systems, or track live response decisions in real time during the exercise itself. The facilitator and the participants run the exercise; SecPortal carries the durable evidence the audit will read.