Finding Triage During a Pentest: A Live-Engagement Protocol
The two best-known artefacts of a pentest are the kickoff and the debrief. The work that decides whether the engagement protects the buyer in real time happens between them. During the testing window, findings emerge at unpredictable severities and unpredictable rates. A pentest that batches every finding for the draft report is operationally tidy and commercially costly: critical exposures sit unannounced, engineering teams hear about them all at once, and the post-report window becomes a scramble. This guide gives a working protocol for live finding triage: when to escalate, on what channel, with what evidence, and how the protocol fits into the engagement record. For the bookend on the other side of the engagement, see the pentest debrief meeting guide.
Why The Live Triage Window Is The Hidden Phase Of A Pentest
A penetration test is usually framed as four phases: scoping, kickoff, testing, and reporting. In practice, the testing phase contains a hidden fifth phase that buyers rarely scope explicitly: live triage. It is the layer of work that turns isolated tester observations into structured findings the buyer can act on while the engagement is still running. Without a written protocol, this layer is improvised differently on every engagement, and the quality of the protocol varies more across providers than the quality of the testing itself.
Three operational realities make live triage matter. First, the severity distribution of findings is not flat across the testing window: critical findings often surface in the first three to five days, and a buyer who only hears about them at draft delivery has spent the rest of the window exposed without knowing it. Second, engineering teams cannot turn a stack of twenty findings around in the same week the report lands; they need parallel runway, which requires early notification. Third, compliance-driven engagements often have regulatory or contractual obligations to surface qualifying findings within a defined window, and the engagement that buries them in the report misses the obligation entirely.
Live triage is the cheapest leverage point in the testing window. It costs an hour or two of working time per week and changes the shape of the post-engagement remediation. For the broader pattern that the live findings feed into, see pentest project management.
Set The Protocol At Kickoff, Not Mid-Test
The single highest-leverage move for live triage is to lock the protocol at kickoff. By the time the first critical finding surfaces, the buyer should already know the channel, the named contacts, the escalation threshold, the response time, and the evidence format. Engagements that try to set the protocol after the first live finding spend the first two hours of containment arguing about how to communicate.
Six things to record at kickoff:
- Live escalation threshold: the severity at or above which a finding is surfaced immediately rather than batched. Typical default is high or critical on internet-exposed assets, with critical-only on internal assets.
- Communication channel: the primary channel for live finding handovers. Common choices are a dedicated portal, a shared inbox, or a named chat channel. Email-only as a primary channel is the slowest pattern in practice.
- Named contacts: a primary buyer contact and a secondary contact, both with confirmed availability windows. A generic group inbox without a named owner is a containment delay waiting to happen.
- Cadence for non-escalating findings: the rhythm for the structured mid-test check-ins that surface medium findings without escalation urgency.
- Pause-test triggers: the conditions under which the tester pauses testing (for example, evidence of access to production customer data, accidental denial-of-service, or out-of-scope intrusion).
- Out-of-band path: the path used if the primary contact is unreachable and a critical finding cannot wait. Often a phone number for the engagement lead.
All six should be recorded in the rules of engagement (ROE) document and kept on the engagement record. Use the rules of engagement template to capture them at kickoff, and the kickoff meeting agenda to confirm them live with the buyer.
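In practice the six items fit in a small structured block on the engagement record. A minimal sketch in Python, with illustrative field names and placeholder values (this is not a standard schema):

```python
# Illustrative live-triage section of a rules-of-engagement record.
# Field names and values are assumptions, not a formal ROE standard.
ROE_LIVE_TRIAGE = {
    "escalation_threshold": {
        "internet_exposed": "high",   # high or above escalates live
        "internal_only": "critical",  # critical-only on internal assets
    },
    "channels": {
        "system_of_record": "engagement portal",
        "notification": "named chat channel",
        "out_of_band": "engagement lead phone",  # unreachable-contact path
    },
    "contacts": {
        "primary": {"name": "...", "availability": "Mon-Fri 09:00-18:00"},
        "secondary": {"name": "...", "availability": "Mon-Fri 09:00-18:00"},
    },
    "checkin_cadence_working_days": 3,  # rhythm for non-escalating findings
    "pause_test_triggers": [
        "live customer data exposure",
        "accidental denial-of-service",
        "out-of-scope intrusion",
    ],
}
```

Whatever format the ROE uses, the point is that every field has a value before testing starts, so the first live finding never triggers a protocol negotiation.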
Severity Gates: What Gets Escalated Live
The severity gate is the threshold that splits findings into two buckets: those that ride the live channel within hours, and those that wait for the next mid-test check-in. The thresholds below are practical defaults; adjust them at kickoff to match the buyer environment and any compliance obligations.
| Severity | Internet-exposed | Internal-only | Channel |
|---|---|---|---|
| Critical | Same-day live handover | Within 24 hours | Live channel + phone for primary contact |
| High | Within 24 hours | Within 48 hours | Live channel |
| Medium | Mid-test check-in | Mid-test check-in | Scheduled cadence (every 2 to 3 working days) |
| Low / Info | Draft report | Draft report | Batched at draft delivery |
Some buyers prefer to receive every finding live regardless of severity; this works for engagements with a small expected finding count and a buyer team that can absorb the traffic. Most buyers cannot, and the live channel becomes noisy enough that the critical finding gets buried under the medium findings. Default to a strict severity gate and relax it on request rather than the reverse.
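The gate in the table reduces to a lookup. A minimal sketch of the default thresholds above, with illustrative names (a real engagement would substitute whatever was agreed at kickoff):

```python
# Map (severity, internet_exposed) to a handover deadline and channel,
# following the default gate in the table above. Names are illustrative.
GATE = {
    ("critical", True):  ("same day",        "live channel + phone"),
    ("critical", False): ("within 24 hours", "live channel + phone"),
    ("high", True):      ("within 24 hours", "live channel"),
    ("high", False):     ("within 48 hours", "live channel"),
}

def handover(severity: str, internet_exposed: bool) -> tuple[str, str]:
    """Return (deadline, channel); below the gate, the finding batches."""
    if severity == "medium":
        return ("next mid-test check-in", "scheduled cadence")
    return GATE.get((severity, internet_exposed),
                    ("draft report", "batched at draft delivery"))
```

Keeping the gate as data rather than prose makes the kickoff adjustment explicit: relaxing the gate for a particular buyer is a one-line change that the engagement record can diff.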
Channels: What Each Carries And What It Does Not
A common operational mistake is trying to run live triage on one channel. Each channel does one job well and a different job badly. The pattern that works is to use two or three channels with explicit roles.
- Engagement portal or finding tracker: the system of record. Findings live here with full structure: title, asset, vulnerability class, evidence, severity, CVSS vector, recommendation. Every live handover ends with a record on this channel even if it started on chat or call.
- Chat channel: the fast notification path. A line announcing a new live finding with a link to the structured record. Fast, visible, attributable. Bad for evidence: do not paste tokens or proof-of-concept payloads that should sit on the structured record.
- Phone or video call: the path for critical findings with containment urgency or for findings with reproduction nuance the buyer cannot follow asynchronously. Always followed within an hour by a structured record.
- Email: useful for the daily summary or for compliance evidence. Slow as a primary live channel. Treats every finding as the same priority unless flagged manually.
The structured record on the system of record is the one channel that must always be populated. Live notifications without a structured record drift into Slack threads that no one can audit; structured records without live notifications wait until someone happens to log in. Run both.
Anatomy Of A Live Finding Handover
The live finding handover is a smaller version of the report-grade finding. It must be usable by an engineering lead within thirty minutes of receipt, even if the lead is mid-incident or on a release. The structure below is the practical default.
- Title: vulnerability class plus the affected component. "IDOR on /api/orders/:id allows reading peer order detail" lands; "Authorisation issue" does not.
- Affected asset: hostname, IP range, repository, endpoint, or service identifier. Specific enough to route internally without a follow-up question.
- Impact: one paragraph in business terms. What the finding allows in the production environment. "Customer A can read customer B order detail" lands; "authorisation bypass" does not.
- Reproduction: the steps captured at testing time. A copy-paste request and response pair, with screenshots if the steps require UI interaction. Reproduction the buyer cannot run is reproduction the buyer will dispute.
- Severity: the tester-assigned severity with the CVSS vector that produced it. Mark as provisional. Confirm in the draft.
- Containment guidance: if the finding has live exploit potential, a short note on temporary mitigation while the engineering fix is shipped. Often a configuration toggle, a feature flag, or a WAF rule.
- Recommended fix direction: a concrete remediation path the engineering lead can scope, not a generic best practice.
- Verification path: how the retest will confirm closure. Useful so the engineering lead knows what evidence the retest will need.
For the report-grade structure that the live handover later folds into, see the pentest report template, and for the CVSS reasoning that supports severity see CVSS scoring explained.
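The handover fields above can be sketched as a small record. A minimal illustration (the field names, the example endpoint, and the CVSS vector are assumptions for illustration, not a product schema):

```python
from dataclasses import dataclass

@dataclass
class LiveFindingHandover:
    """Minimal live handover record; field names are illustrative."""
    title: str                 # vulnerability class + affected component
    asset: str                 # hostname, endpoint, repo, or service id
    impact: str                # one paragraph in business terms
    reproduction: str          # copy-paste request/response, screenshots
    severity: str              # tester-assigned, provisional until the draft
    cvss_vector: str           # the vector that produced the severity
    provisional: bool = True   # confirmed only in the draft report
    containment: str = ""      # temporary mitigation while the fix ships
    fix_direction: str = ""    # concrete remediation path to scope
    verification: str = ""     # how the retest will confirm closure

example = LiveFindingHandover(
    title="IDOR on /api/orders/:id allows reading peer order detail",
    asset="api.example.com/api/orders/:id",
    impact="Customer A can read customer B order detail.",
    reproduction="GET /api/orders/1042 as customer A returns customer B data.",
    severity="medium",
    cvss_vector="CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N",
)
```

The required fields (everything without a default) are exactly the ones an engineering lead needs to act within thirty minutes; the optional fields can follow in the structured record without blocking the notification.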
The Mid-Test Check-In
Below the live escalation threshold, findings still need a heartbeat. The mid-test check-in is the structured cadence that surfaces medium findings, confirms scope is on track, and gives the buyer an early look at the shape of the engagement. The default cadence for a two-week engagement is twice a week, usually scheduled for the same time window so calendars do not drift.
A working mid-test check-in covers:
- Areas covered since the last check-in and areas planned for the next.
- New findings since the last check-in, grouped by severity and class.
- Status of findings already raised live, including any buyer queries.
- Any blockers (access, credentials, environment availability, scope ambiguity).
- Any potential scope discussions for the buyer to consider before the debrief.
- The next planned check-in time and channel.
For engagements run on retainer, the same pattern fits inside the retainer cadence; see pentest retainer management for the commercial layer above the engagement.
Pause-Test Triggers And When To Stop
Some live findings require pausing testing rather than just notifying the buyer. These triggers should be agreed at kickoff and recorded on the rules of engagement. The tester pauses, notifies, and waits for written authorisation before continuing. Common pause triggers:
- Live customer data exposure: evidence the tester has reached or could reach production customer data that was not in scope. Pause and confirm scope before proceeding.
- Accidental denial-of-service: any test action that has caused or could plausibly cause a service outage. Pause, document, and coordinate with the platform owner.
- Out-of-scope intrusion: the tester has crossed into systems outside the agreed scope, often via lateral movement from an in-scope foothold. Pause, document the path, and confirm scope expansion or rollback.
- Active third-party impact: the tester has touched a third-party system the buyer does not own (a vendor SaaS, a shared service). Pause and confirm the buyer authorisation to proceed.
- Pre-existing breach evidence: the tester has found evidence that the environment was compromised before testing began. Pause and hand to the buyer incident response process.
Pause-test triggers are not failure modes; they are the protocol that protects both sides when something unexpected surfaces. Engagements that proceed through one of these triggers without a pause expose the tester legally and the buyer operationally.
Provisional Severity And The Path To Final Score
The severity assigned during a live finding handover is provisional. Two things happen between the live handover and the draft report that can shift it.
- Compensating controls surface: the buyer or the platform owner names a control that meaningfully reduces exploitability. The base finding stays; the environmental score adjusts. Document the control and the adjustment so the published score has a clear basis.
- Related findings change exploit chain: the tester later discovers a finding that combines with the live finding to enable a higher-impact attack. The combined finding may carry a higher severity than the individual components. Both findings stay; the combined narrative is added.
- Asset context shifts: the buyer clarifies that the affected asset is internal-only when the tester assumed it was internet-exposed, or vice versa. Adjust the score with the new context recorded.
Mark the live severity as provisional in the structured record so the buyer does not brief executives on a score that may still move. For the broader pattern on calibrating severity across an engagement, see the severity calibration research and the vulnerability prioritisation framework.
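One way to keep adjustments auditable is an append-only severity history, so the live value is never overwritten and every change carries its rationale and confirming contact. A minimal sketch with illustrative names:

```python
# Append-only severity history: the live handover is entry zero, and
# each adjustment records why it happened and who confirmed it.
# Names are illustrative, not a product schema.
def adjust_severity(history: list, new_severity: str,
                    rationale: str, confirmed_by: str) -> list:
    """Record a severity change without overwriting the live value."""
    history.append({
        "severity": new_severity,
        "rationale": rationale,
        "confirmed_by": confirmed_by,
    })
    return history

history = [{"severity": "high",
            "rationale": "live handover (provisional)",
            "confirmed_by": "tester"}]
adjust_severity(history, "medium",
                "compensating WAF rule blocks the exploit path; "
                "environmental score adjusted",
                "buyer platform owner")

current = history[-1]["severity"]   # the score the draft report carries
original = history[0]["severity"]   # the score the live handover carried
```

With this shape, the draft report can always show both the original live score and the published score side by side, which is what defuses the executive-briefing shock described above.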
Parallel Remediation: When The Buyer Starts Fixing Mid-Test
Some buyers will start remediation as soon as a live finding lands. This is healthy and should be encouraged for critical and high findings, with one operational note: the engagement record must capture the live finding state separately from the remediation state, so the retest at the end of the engagement (or in the post-engagement window) has a clean before-and-after to verify against.
The pattern that works:
- The live finding is recorded in full at the time it is surfaced.
- The buyer indicates intent to remediate and a target date.
- Engineering ships the fix.
- The buyer marks the finding as ready for retest.
- The tester verifies closure during the retest window with the same reproduction the live handover carried.
- The published finding records both the original state and the post-remediation state; closure is evidenced rather than asserted.
For the operational pattern that handles this end-to-end, see vulnerability remediation tracking and the retesting workflow.
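The lifecycle above can be sketched as a small state machine that rejects illegal jumps, keeping the live finding state and the remediation state distinct. State names are illustrative:

```python
# Finding lifecycle for parallel remediation: the live state and the
# remediation state advance separately so the retest has a clean
# before-and-after. State and transition names are illustrative.
TRANSITIONS = {
    "surfaced_live":       {"remediation_planned"},
    "remediation_planned": {"fix_shipped"},
    "fix_shipped":         {"ready_for_retest"},
    "ready_for_retest":    {"closed_verified", "reopened"},
    "reopened":            {"fix_shipped"},
}

def advance(state: str, next_state: str) -> str:
    """Move a finding along the lifecycle, rejecting illegal jumps."""
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state} to {next_state}")
    return next_state

state = "surfaced_live"
for step in ("remediation_planned", "fix_shipped",
             "ready_for_retest", "closed_verified"):
    state = advance(state, step)
```

The useful property is the rejection: a finding cannot jump from surfaced to closed without passing through a retest-ready state, which is the structural form of "closure is evidenced rather than asserted".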
What The Engagement Record Must Carry
The engagement record is the single source of truth that survives the engagement and anchors the debrief, the final report, and the retest. For live triage to flow into the debrief without losing context, the record needs to carry:
- The rules of engagement, including the live triage protocol agreed at kickoff.
- Each live finding with timestamp, original severity, evidence, and channel of handover.
- Every severity adjustment with rationale and the contact who confirmed it.
- Buyer queries, dispute notes, and false-positive removals raised mid-test.
- Pause-test events with the trigger and the resolution.
- Mid-test check-in records with attendees and outcomes.
- Any scope changes raised mid-test, with the change-control path used.
- The link from each live finding to its final report version, so closure can be tracked end-to-end.
For the broader engagement-record model that holds kickoff, mid-test, debrief, and retest in one place, see pentest evidence management.
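A simple way to keep the record continuous rather than batch-written is an append-only, timestamped event log that every live triage action feeds. A minimal sketch with illustrative event kinds:

```python
import datetime

# Append-only engagement event log: each live triage action lands as a
# timestamped event at the moment it happens. Event kinds are illustrative.
def log_event(record: list, kind: str, detail: str) -> list:
    """Append a UTC-timestamped event to the engagement record."""
    record.append({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "kind": kind,
        "detail": detail,
    })
    return record

record = []
log_event(record, "live_finding",
          "IDOR on /api/orders/:id, severity high (provisional)")
log_event(record, "severity_adjustment",
          "high -> medium; compensating WAF rule confirmed by platform owner")
log_event(record, "pause_test",
          "out-of-scope intrusion; resumed on written authorisation")
```

Because each entry carries its own timestamp, the debrief can reconstruct what was raised when without mining chat threads, which is exactly the audit question a batched record cannot answer.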
Common Live Triage Mistakes
- No protocol agreed at kickoff: the first live finding becomes the protocol negotiation. Two hours of containment lost arguing about channels.
- Generic group inbox as primary contact: nobody owns the inbox at the moment a critical finding lands. The finding sits unread.
- Live channel without structured record: the chat thread exists, the structured record does not, the debrief later cannot trace what was raised when.
- Structured record without live channel: the finding is logged on the portal but no one is notified; the buyer discovers it on the next login.
- Reproduction not captured at testing time: by the time the buyer asks for evidence, the tester has moved on, and the proof is harder to recover than to capture in the moment.
- Live severity briefed to executives as final: the score shifts in the draft, the executive feels misled, the relationship absorbs an unnecessary shock.
- Pause-test triggers not written: a borderline event happens, the tester continues, the buyer later argues the engagement overstepped scope.
- Mid-test check-ins skipped under delivery pressure: medium findings pile up, the draft report lands with a volume the buyer was not prepared for, the debrief becomes a discovery exercise.
- Tester doing engineering work mid-test: the tester implements remediation rather than recommending it. Independence for the retest is compromised, and the buyer pays engagement hours for engineering work.
When Live Triage Is A Compliance Obligation
For engagements driven by a regulatory or framework obligation, live triage is not optional and the protocol is often externally constrained. A few examples worth confirming at kickoff with the relevant compliance owner:
- DORA threat-led penetration tests carry obligations under Articles 26 and 27, including joint test management with named contacts, that align directly with a written live triage protocol.
- TIBER-EU testing has explicit guidance on test management and red-team-to-control-team communication during the active test phase, which is a live triage protocol by a different name.
- PCI DSS engagements that surface findings affecting the cardholder data environment may trigger reporting obligations to the QSA on a defined timeline.
- CREST accredited engagements have methodology declarations that include responsible disclosure and incident handling during the test, which the live triage protocol implements operationally.
Live Triage Quick Checklist
- Live escalation threshold agreed at kickoff and recorded on the ROE
- Primary and secondary buyer contacts named with confirmed availability
- Structured record system and live notification channel both confirmed
- Out-of-band path agreed for unreachable primary contact
- Pause-test triggers written and acknowledged by both sides
- Mid-test check-in cadence scheduled in calendars
- Live finding handover template prepared with required fields
- CVSS vector applied to every live severity, marked provisional
- Reproduction evidence captured at testing time, not after
- Buyer remediation intent and target date recorded against each live finding
- Severity adjustments documented with rationale and confirming contact
- Engagement record updated continuously, not in a batch at draft time
From Live Triage To Engagement Closure On A Single Record
Live triage is one phase of an engagement record that also covers kickoff, testing, findings, debrief, retest, and final report delivery. Keeping live findings on the engagement record (rather than in a chat thread or a tester notebook) means the decisions made under live pressure flow cleanly into the draft, the debrief, and the retest. The engagement that records live triage well is the engagement that produces a clean retest verification at the end.
SecPortal links live findings to the engagement on a tenant-branded portal: engagement management stores the engagement record and the rules of engagement, findings management stores findings with CVSS vectors, severity history, and audit trail, AI reports generate the deliverable from the live findings without re-keying, and the branded client portal gives the buyer the same view of live findings the tester sees. The retest at the end of the engagement picks up from the same record, which is why the operational pattern in the retesting workflow depends on a clean live triage layer underneath.
Run live triage and final reporting off the same engagement record
SecPortal stores the rules of engagement, live findings, severity adjustments, mid-test check-ins, draft report, debrief record, and retest evidence on a single engagement record so live triage feeds the final report instead of fragmenting across chat and email. See pricing or start free.