
Pentest Kickoff Meeting Agenda: A Practical Template

The kickoff is the cheapest hour in a penetration test and the highest leverage. It runs after the statement of work is signed and before any traffic hits a target. Done well, it turns a contract into a runnable engagement: scope confirmed, accounts provisioned, allowlists applied, escalation paths agreed, deliverables and dates locked in. Done poorly, the first two days of the engagement evaporate into Slack threads about test accounts and missing IPs. This guide gives a complete kickoff agenda you can adapt, the attendee list that keeps the call short, the decisions that need to land in writing, and the common mistakes that turn a sixty-minute meeting into a week of rework.

Why The Kickoff Decides The First Two Testing Days

Most penetration tests do not start at the start. They start when the test account is issued, the source IP is allowlisted at the WAF, the engineering on-call knows the tester is firing, and the security owner knows where to send a critical finding at midnight. Every one of those items is a kickoff item. Skip the kickoff and they re-emerge as blocking issues during testing, when both sides are short on time and the engagement clock is already running.

The kickoff is also the first structured conversation between the people who will actually do the work on both sides. Procurement and legal are gone; the engagement lead and the engineering lead are now in a room together. Treating the kickoff as a routine ceremony rather than a working session is the most common reason engagements feel rushed in week one.

For consultancies, a disciplined kickoff also feeds the next renewal: a buyer who saw a calm, prepared start almost always remembers it when the next assessment cycle comes around. For internal security teams running a vendor, a structured kickoff is the cheapest signal that the vendor knows what they are doing.

Sample 60-Minute Pentest Kickoff Agenda

The agenda below is the practical default for a focused single-application engagement. Multi-stream engagements (web plus internal, web plus mobile, multiple applications) usually need ninety minutes; the structure stays the same and each section gets slightly more time.

| Time | Section | Outcome |
| --- | --- | --- |
| 0 to 5 min | Introductions and roles | Everyone in the call knows who owns what. |
| 5 to 15 min | Scope confirmation | In-scope and out-of-scope assets verified against the SOW. |
| 15 to 25 min | Rules of engagement | Window, IPs, prohibited techniques, escalation paths agreed in writing. |
| 25 to 35 min | Environment and test data | Test accounts, environment URLs, allowlist status, monitoring posture. |
| 35 to 45 min | Communication plan | Channels, cadence, critical-finding notification, named contacts with phone numbers. |
| 45 to 55 min | Timeline and deliverables | Test dates, draft and final report, debrief, retest window. |
| 55 to 60 min | Open items and owners | Outstanding dependencies captured with named owners and dates. |

1. Introductions and Roles (5 minutes)

Keep introductions short. The goal is not a tour through career history; it is to make sure that when the engagement is running, the right person can be reached for the right decision. A two-line introduction per attendee covering name, role, and which decisions they own is enough.

Capture the role each person plays in the engagement, not the role on their LinkedIn. For example: who approves a scope change, who can grant a new test account, who can extend the testing window, who is the buyer-side critical-finding contact out of hours. This is the moment to surface the answer to the question that costs engagements the most time: who do I call at 9pm on a Friday if I find a critical?

2. Scope Confirmation (10 minutes)

Read the in-scope and out-of-scope lists from the SOW out loud. Confirm each one with the engineering lead. This sounds redundant; it is not. Scope written six weeks ago, negotiated through legal, and signed by procurement is not always scope the engineering team has seen. Confirming scope live is the cheapest way to surface a forgotten staging environment, a microservice that shipped after the SOW was drafted, or a third-party integration that the buyer cannot legally authorise testing against.

Cross-check the asset inventory against the live environment. If the SOW lists 14 web applications and the engineering lead can name 16, decide in the meeting whether the extra two are in scope, out of scope, or move to a follow-on engagement. Document the decision with a single line in the kickoff record. For the underlying reason this kind of drift happens, see the research on pentest scope creep.

For engagements where scope was estimated rather than enumerated, this is also the moment to validate effort. The pentest scoping calculator gives a defensible basis of estimate; mismatches surfaced in the kickoff are still cheap to handle as a change order, but expensive to handle on day three of testing.

3. Rules of Engagement (10 minutes)

The rules of engagement protect both sides legally and operationally. Confirm each element against the SOW and the standalone rules of engagement template if you use one.

  • Testing window: exact start and end dates, time zones, allowed hours, and any blackout periods (release nights, code freezes, quarter-end finance windows).
  • Source IPs and user agents: the IP ranges the tester will use, allowlisted at any WAF, IPS, or rate limiter. If the buyer has a bot management product, it is allowlisted now, not when a 403 starts blocking traffic on day one.
  • Prohibited techniques: denial of service, destructive payloads, social engineering of staff, phishing of customers, ransomware-style proof, anything that touches third-party assets without authorisation.
  • Authorised techniques that need notice: credential stuffing on accounts the buyer has authorised, manipulation of production data, exploitation that could cause material business impact.
  • Authorisation: the engagement-specific authorisation paragraph from the SOW, named to the people on the call.
  • Third-party assets: any in-scope asset hosted by a cloud provider, payment processor, or SaaS vendor where written authorisation is required. Confirm the letters are filed, not promised.

4. Environment and Test Data (10 minutes)

Most engagement-day-one delays come from this section. Run through it explicitly.

  • Environment URLs: production, staging, or dedicated test environment. Confirm parity if the test target is not production. Differences in versions, feature flags, or third-party integrations between staging and production are the leading reason a critical found in test cannot be reproduced in prod.
  • Test accounts: one account per role tested, plus spares for lockout and reset scenarios. Confirm the credentials hand-off mechanism (encrypted message, secrets vault, portal credential storage) and confirm accounts are provisioned now, not before testing starts. SecPortal stores credentials with encrypted credential storage so the hand-off is auditable.
  • Test data: fixtures the tester can use, data the tester must not exfiltrate, and the line between the two. For health, financial, or personal data, this is where the data-handling rules from the SOW get re-stated in practical terms.
  • Monitoring posture: whether the security or operations teams will be alerted by the testing activity, whether a tester banner is expected (a header or query parameter so the SOC can identify test traffic), and whether the engagement is open or blind to the SOC.
  • Allowlist status: at the WAF, the load balancer, the rate limiter, the bot management product, and any geo-blocking layer. Confirm tested with a quick traceroute or known-good request before the meeting closes.
  • Out-of-band hand-offs: if the engagement requires VPN access, a bastion host, or an SSO bypass, schedule the access provisioning before the kickoff ends.
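The allowlist check in the list above is worth scripting so it can be run live on the call. A minimal sketch, assuming a hypothetical target URL and a hypothetical `X-Pentest-Id` banner header agreed with the SOC (substitute whatever banner convention the engagement actually uses):

```python
import urllib.request
import urllib.error

def check_allowlist(url, headers, timeout=10):
    """Fire one known-good request and return the HTTP status code.

    Run from the tester source IP before the kickoff closes: a 200 suggests
    the allowlist is applied end to end; a 403 (or a connection reset) is the
    "allowlisted on paper, blocked at the edge" case, best fixed on the call.
    """
    req = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # blocked or otherwise rejected; still a useful signal

# Hypothetical values -- substitute the target and banner agreed for the engagement.
# The X-Pentest-Id header is an example tester banner, not a standard header.
# status = check_allowlist("https://staging.example.com/healthz",
#                          {"X-Pentest-Id": "ENG-2024-001"})
```

Running this once from the tester range and once from a non-allowlisted address gives a before/after pair that settles the "are we actually allowlisted" question in seconds.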

5. Communication Plan (10 minutes)

The communication plan is the layer that decides whether the engagement feels calm or chaotic. Confirm channel, cadence, and named contacts for each scenario.

  • Daily check-in: brief written update at the end of each testing day, in the agreed channel, summarising progress, blockers, and any high-severity findings flagged but not yet written up.
  • Critical-finding notification: the path for a CVSS 3.1 critical or high finding. Specify the channel, the named contact, the backup contact, and the response time the SOW commits to. Phone number, not email.
  • Out-of-hours protocol: what the tester does when they hit something material outside business hours. Default is to record the finding, pause exploitation that may cause harm, and notify on the next business morning unless the SOW specifies otherwise.
  • Status meetings: midpoint check-in (typically end of day three of a five-day engagement), draft report walkthrough, debrief.
  • Escalation contacts: a named buyer-side contact, a backup, and a final escalation, each with phone numbers. Same on the provider side.
  • Quiet channel for sensitive findings: a private channel, not the main engagement channel, for findings that should not be visible to the wider buyer-side team until the security owner has triaged. This is the channel where suspected data breaches, hard-coded production credentials, and findings implicating named individuals get raised first.

6. Timeline and Deliverables (10 minutes)

Walk the timeline as agreed in the SOW and confirm dates with the people who actually need to be available. A draft report walkthrough scheduled when the engineering lead is on annual leave is the kind of avoidable miss the kickoff exists to catch.

  • Testing window: start date, end date, any exclusion days, and the named tester(s) on each day.
  • Midpoint check-in: typically the end of day three of a five-day engagement, or the end of week one for longer engagements.
  • Draft report: delivery date, review window (typically three to five business days), and the channel for review comments.
  • Final report: delivery date, format (PDF plus portal record), and recipients. For the structure of the report itself, see the pentest report template and the executive summary guide.
  • Debrief meeting: who attends, what gets presented, and the format (live walkthrough, recorded, or both). For the bookend call after the test, see the pentest debrief meeting guide.
  • Retest window: opening date, closing date, and the channel to request a retest. For the operational pattern, see the pentest retesting workflow.

7. Open Items and Owners (5 minutes)

Walk the running list of open items, assign each one a named owner, and set a date (usually within twenty-four to forty-eight hours of the kickoff). Anything not on this list at the end of the kickoff is presumed resolved. Anything on the list and not owned is the leading indicator of a delayed start.

Common open items: a missing third-party authorisation letter, an extra environment URL the engineering lead surfaced during scope confirmation, a test account that needs an additional role, a quiet-channel set-up for sensitive findings, and any change orders the kickoff has surfaced. Treat each one as a small ticket with an owner and a due date, not as a vague action.
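The "small ticket" discipline above is simple enough to sketch: every open item carries an owner and a due date, and anything unowned or overdue is what gets escalated first. A minimal illustration (the item names and fields are hypothetical, not a SecPortal schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class OpenItem:
    title: str
    owner: Optional[str]  # None means unowned: the leading indicator of a delayed start
    due: date

def at_risk(items: List[OpenItem], today: date) -> List[OpenItem]:
    """Return items that are unowned or past due -- the ones to escalate."""
    return [i for i in items if i.owner is None or i.due < today]

# Hypothetical items; a real engagement would track these in the kickoff record.
items = [
    OpenItem("Third-party authorisation letter filed", "buyer-security", date(2024, 6, 10)),
    OpenItem("Extra staging URL scoped or deferred", None, date(2024, 6, 4)),
]
print([i.title for i in at_risk(items, date(2024, 6, 5))])
# -> ['Extra staging URL scoped or deferred']
```

The point is not the code but the rule it encodes: an item without an owner is at risk by definition, regardless of its due date.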

Who Should Attend

| Side | Role | Why they are there |
| --- | --- | --- |
| Buyer | Security or programme owner | Accountable for the engagement; owns scope changes and escalation. |
| Buyer | Engineering lead | Owns operational knowledge of the in-scope systems and test accounts. |
| Buyer | Infrastructure or platform contact | Owns allowlists, WAF rules, monitoring posture, and environmental access. |
| Buyer | Compliance owner (if applicable) | Confirms framework alignment requirements (PCI DSS, ISO 27001, SOC 2). |
| Provider | Engagement lead | Owns delivery; is the named contact for scope, timeline, and escalation. |
| Provider | Lead tester | Owns hands-on testing; needs the operational detail captured live. |
| Provider | Project or delivery manager | Optional. Tracks open items and owns the kickoff record after the call. |

Procurement, legal, and account management are not in the kickoff. Their work was finished at SOW signature; including them dilutes the call and slows decisions.

What to Send Before the Kickoff

A kickoff that requires the agenda to be explained from scratch is forty-five minutes of operational time gone. Send the following to all attendees at least forty-eight hours before the meeting:

  • The signed SOW with the asset inventory annex
  • The rules of engagement document or its equivalent SOW section
  • The draft test plan that decomposes the SOW scope into the methodology categories, schedule, and assignments the team will execute. The pentest test plan template produces a copy-ready starting point; the kickoff is where the client representative acknowledges the plan against the executed contract documents.
  • The agenda with sections, timings, and pre-meeting questions
  • A pre-kickoff checklist of items to confirm, gather, or provision
  • The list of attendees and their roles in the engagement
  • The escalation path template to be filled in during the call

For consultancies running multiple engagements in parallel, the pre-kickoff pack should be a template, not a fresh write each time. See managing multiple security engagements for the broader operating model.

What the Kickoff Record Must Capture

The kickoff record is the operational layer that sits on top of the SOW. Circulate within twenty-four hours of the call with these elements as a minimum.

  • Confirmed in-scope and out-of-scope assets, with any deltas from the SOW noted
  • Confirmed testing window with allowed hours and blackout periods
  • Allowlisted source IPs, user agents, and tester banners
  • Test account list with role, environment, and provisioning owner
  • Escalation contacts on both sides, with phone numbers and backups
  • Critical-finding notification path and response time commitment
  • Communication channels (daily check-in, status updates, sensitive-finding channel)
  • Confirmed dates for midpoint, draft report, debrief, and retest window
  • Open items with named owners and due dates
  • Authorisation status for any third-party-hosted assets

For broader pre-engagement onboarding (welcome packs, NDAs, branded portal provisioning), see the client onboarding workflow. For engagement orchestration after the kickoff, see pentest project management.

Common Pentest Kickoff Mistakes

  • Rerunning the scoping call: if scope is being renegotiated in the kickoff, the SOW was not really finished. Treat unresolved scope as a change order and route through the SOW process, not the kickoff agenda.
  • No engineering lead in the room: the security owner alone cannot answer questions about test accounts, environment parity, or allowlist mechanics. The kickoff stalls or runs on assumptions that fail on day one.
  • Test accounts as a follow-up: "we will send the credentials later today" is the leading reason engagements lose day one. Provision in advance and confirm working access during the kickoff.
  • No phone numbers for escalation: the moment the tester finds a critical at 9pm and the only contact is a security@ inbox, the communication plan has failed. Phone numbers are mandatory; a Slack handle is not.
  • Allowlist promised, not tested: the WAF or bot manager that says "you are allowlisted" in writing but blocks at the edge in practice is a familiar story. Run a known-good request from the tester source IP before the call ends.
  • Critical-finding path agreed verbally: the path should be in the kickoff record, named, with a phone number and a response time. A verbal agreement collapses under stress.
  • No quiet channel for sensitive findings: hard-coded production secrets, suspected data breaches, and findings implicating named individuals need a private channel, not the main engagement room. Set it up at kickoff or do not set it up in time.
  • Missing third-party authorisation: the cloud provider, payment processor, or SaaS vendor that hosts an in-scope asset and has not been notified is a legal exposure for both sides. Confirm letters are filed, not promised.
  • No kickoff record: a kickoff that lives only in memory is a kickoff that gets relitigated on day three of testing. The record is the cheap layer that prevents that.

When the Engagement is Compliance-Driven

For engagements driven by a compliance framework, the kickoff should also confirm framework-specific evidence requirements. The auditor needs the report to map cleanly to controls; the time to confirm that mapping is now, not at draft delivery.

  • PCI DSS engagements: confirm the cardholder data environment scope, segmentation testing requirements, and the evidence format the QSA expects.
  • ISO 27001 engagements: confirm the Annex A controls in scope and how findings will map to the statement of applicability.
  • SOC 2 engagements: confirm which trust service criteria the test supports and whether the auditor expects the report as-is or with criteria mapping.
  • Cyber Essentials Plus engagements: confirm the assessor relationship, the in-scope asset boundary, and evidence handover format.
  • CREST-aligned engagements: confirm the methodology declaration and tester certification requirements that the buyer's programme expects.

Pentest Kickoff Quick Checklist

  1. Pre-kickoff pack sent at least 48 hours in advance
  2. Scope confirmed asset by asset against the SOW
  3. Rules of engagement re-stated and any deltas captured
  4. Source IPs allowlisted and verified with a known-good request
  5. Test accounts provisioned across required roles
  6. Escalation contacts named with phone numbers on both sides
  7. Critical-finding notification path documented with response time
  8. Daily check-in channel and cadence agreed
  9. Sensitive-finding quiet channel set up
  10. Midpoint, draft report, debrief, and retest dates locked in calendars
  11. Compliance framework mapping confirmed if relevant
  12. Open items captured with named owners and due dates
  13. Kickoff record circulated within 24 hours

From Kickoff to Delivery on a Single Engagement Record

The kickoff is one phase of a longer engagement record that also covers testing, findings management, retests, reporting, and invoicing. Keeping the kickoff inside the engagement record (rather than as a deck on a shared drive) means the SOW, the rules of engagement, the kickoff decisions, the findings, the retest verifications, and the final report all live in one place and reference one source of truth.

SecPortal links the kickoff record to the live engagement on a tenant-branded portal: engagement management stores scope and dates, findings management stores findings with CVSS, AI reports generate the deliverable from live data, and the branded client portal delivers it to the buyer alongside the kickoff record. The retest later in the engagement uses the same record, which is why the operational pattern in the retesting workflow depends on a clean kickoff to start.


Run kickoff and delivery off the same engagement record

SecPortal stores the SOW, kickoff decisions, findings, retests, AI-generated reports, and invoicing on a single engagement record so the kickoff is the start of the deliverable, not a separate artefact. See pricing or start free.
