Use Case

A security finding evidence package developers can act on without re-discovering the work

Most security findings reach developers as a one-line description, a CVSS number, and a screenshot. Developers spend the next two days reproducing the issue, asking the security team to clarify scope, and guessing what acceptable fix evidence looks like. Run security finding evidence packaging on the engagement record so each finding ships to engineering with the reproduction steps, the request and response, the affected asset and code path, the calibrated severity, the fix expectations, the retest criteria, and the audit trail attached as one structured record. The remediation conversation starts on the evidence rather than on the rediscovery.


The remediation conversation starts on the evidence, not on the rediscovery

Most security findings reach engineering as a one-line description, a CVSS number, a screenshot, and a deadline. The developer spends the next two days reproducing the finding, asking the security team to clarify scope, and guessing what acceptable fix evidence looks like. The cycle repeats per finding, per engagement, and per release. The result is a backlog where mean time to remediate is dominated by handoff friction rather than by engineering effort, and where audit lookbacks reconstruct closure history from a folder of screenshots rather than from a defensible record.

A defensible security finding is not a description on a record; it is a structured evidence package the developer, the security reviewer, the leadership view, and the audit lookback all read against the same source. This page is the per-finding evidence-packaging workflow internal security teams, AppSec teams, vulnerability management teams, and product security teams run alongside their broader programme cadence. For the workflow that governs how findings move between SDLC stage gates, pair this page with the SDLC vulnerability handoff workflow. For the upstream triage that decides which scanner output becomes a finding in the first place, pair it with the scanner result triage workflow. For the closure cadence that runs after the package is in place, pair it with the remediation tracking workflow.

Six layers of evidence and what each one looks like in healthy and broken form

A defensible evidence package is not one artefact; it is six layers that together let the developer act on the finding without re-discovering the work. The split below is the operating starting point internal security teams, AppSec teams, vulnerability management teams, and product security teams can run against to calibrate evidence quality across engagements.

Layer 1: Title and structured description
Healthy posture: The finding title states the affected asset, the issue class, and the impact in concrete language a developer recognises (for example: "Reflected XSS in /search query parameter on www.acme.com"). The description uses the matched template from the 300+ template library so the issue-class language is consistent across engagements and the developer reads a familiar pattern rather than improvised prose. Severity is set with the CVSS 3.1 vector and the environmental modifiers calibrated to the deployed asset.
Default failure: The title is a scanner output identifier or a generic pattern name (for example: "Plugin 12345" or "Possible information disclosure"). The description is a copy-paste of the scanner output. Severity is the unmodified scanner default. The developer cannot tell what asset is affected, what the impact is, or whether the calibrated severity reflects the deployed environment. The package fails before reproduction is attempted.

Layer 2: Reproduction steps a developer can re-run
Healthy posture: Reproduction steps are written in the order a developer can re-run them. The steps name the environment, the user role, the input, the request method, the relevant headers, the expected response, and the observed deviation. For code findings, the trace includes the entry point, the sink, and the path between them. The steps are specific enough that a developer who has never seen the application can reach the same observation, and specific enough that the retest reads against the same script.
Default failure: Reproduction is a paragraph or a screenshot. The steps assume the reader already knows the application, the test data, the credentials, and the environment. The developer asks the security team three clarifying questions before reproduction succeeds. The retest later runs against an interpretation of the steps rather than against the steps themselves and the closure decision becomes ambiguous.

Layer 3: Request and response or trace evidence
Healthy posture: The full request (method, URL, headers, body) and the full response (status, headers, body) for the reproducing transaction live on the finding as attachments or inline blocks, with sensitive values redacted to the documented standard. For SAST findings, the trace lists the entry point file, the sink file, and the line numbers along the path. The evidence is the artefact the retest reads against, not a screenshot of the artefact.
Default failure: A screenshot of the browser, a screenshot of a Burp Suite tab, or a screenshot of a SAST result is the only evidence on the finding. The developer cannot copy and re-run the request. The trace cannot be replayed against the codebase. Six months later, a regression test cannot be authored against the captured artefact because the artefact is a picture rather than a structured exchange.

Layer 4: Affected asset and code-path binding
Healthy posture: For runtime findings, the affected asset is the verified domain, the URL or endpoint, and (for authenticated tests) the credential class the test ran with. For code findings, the repository connection (GitHub, GitLab, or Bitbucket OAuth) is attached and the file path, line range, and commit reference appear on the finding so the developer reads the issue next to the code. Asset ownership mapping resolves the routing to the named developer or team.
Default failure: The asset is "the application" or "the staging environment" without a verified domain reference. For code findings, the file path is missing or stale, and the developer searches the repository for the issue based on the description. The handoff fails at the routing step because no named owner is on the record and the finding sits in a queue waiting for someone to pick it up.

Layer 5: Fix expectation and acceptable remediation evidence
Healthy posture: The fix expectation is a verifiable claim: parameterise the database query, add the missing authorisation check on the endpoint, upgrade the dependency to the patched version, set the configuration value to the recommended default. The acceptable remediation evidence is named: the pull request reference, the SAST re-run output, the dependency manifest delta, the configuration diff, the unit test that proves the new behaviour. Closure criteria are stated against this evidence rather than against an opinion.
Default failure: The recommendation is generic ("fix the SQL injection vulnerability") with no statement of acceptable evidence. The developer ships a fix, the security team rejects it because it does not match an unstated standard, and the cycle repeats until institutional memory catches up. Programmes that operate this way over-index on rework and under-index on closure throughput.

Layer 6: Retest plan and closure binding
Healthy posture: The retest plan reads the same reproduction steps the original finding documented, in the same environment, against the same input. When the retest succeeds, the result is attached to the original finding (re-run request and response, SAST or SCA re-run, configuration verification) and the state moves to verified closed on the same record. A regression at retest reopens the original finding with the new context attached rather than creating a parallel finding the audit lookback has to reconcile.
Default failure: Retest is a separate ticket with new evidence that does not bind to the original finding. The audit chain has to be reconstructed from two records, two timestamps, and two artefact sets. Reopens land as new findings and the closure history of the underlying issue is invisible at the next leadership review.
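
To make the six layers concrete, the sketch below shows what the package can look like as one structured record. It is a minimal illustration in Python; the field names, values, and shape are assumptions for a hypothetical finding, not SecPortal's actual schema.

    # Six-layer evidence package as one structured record (illustrative fields only).
    finding = {
        # Layer 1: concrete title, templated description, calibrated severity
        "title": "Reflected XSS in /search query parameter on www.acme.com",
        "template": "reflected-xss",  # hypothetical id from the template library
        "cvss_vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
        "severity_rationale": "internet-facing, unauthenticated, session-scoped impact",
        # Layer 2: reproduction a developer can re-run
        "reproduction": {
            "environment": "staging",
            "role": "unauthenticated",
            "steps": [
                "GET /search?q=<script>alert(1)</script> with Accept: text/html",
                "observe the payload reflected unencoded in the response body",
            ],
        },
        # Layer 3: the structured exchange, not a screenshot of it
        "request": "GET /search?q=... HTTP/1.1 ...",  # full capture, redacted
        "response": "HTTP/1.1 200 OK ...",
        # Layer 4: asset and owner binding
        "asset": {"domain": "www.acme.com", "endpoint": "/search"},
        "owner": "team-storefront",  # resolved from the asset ownership map
        # Layer 5: fix expectation and acceptable remediation evidence
        "fix_expectation": "context-encode q before rendering the results page",
        "acceptable_evidence": ["pull request reference", "DAST re-run", "unit test"],
        # Layer 6: retest plan bound to this record
        "retest": {"replay": "reproduction.steps", "attach_to": "this finding"},
    }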

Six failure modes that quietly degrade evidence quality

Evidence quality rarely fails at a single moment. It degrades in small accommodations: a screenshot instead of a request capture, reproduction steps written for the original tester rather than the next reader, a fix recommendation without acceptable evidence, a retest that lands on a parallel record. Each accommodation is reasonable in isolation; the cumulative effect is a backlog where every closure decision is a bespoke negotiation.

Evidence is a screenshot rather than a re-runnable artefact

The finding ships with a screenshot of the browser tab, the scanner UI, or the proxy view, and no structured request, response, or trace lives on the record. The developer cannot copy the artefact into a test, the retest runs against the screenshot rather than against the underlying transaction, and a regression test cannot be authored from the captured material. The fix is attaching the structured exchange itself (request, response, SAST trace) to the finding alongside the screenshot, so the picture supports the artefact rather than replacing it.
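
The difference is easiest to see as a runnable sketch. The snippet below re-runs a hypothetical reflected-XSS transaction and checks for the observed deviation; the endpoint, payload, and assertion are illustrative assumptions, not a real finding.

    # Re-runnable reproduction (hypothetical endpoint and payload): this is the
    # artefact a screenshot cannot give the developer, the retest, or a regression test.
    import requests

    PAYLOAD = "<script>alert(1)</script>"

    resp = requests.get(
        "https://staging.acme.example/search",  # environment named on the finding
        params={"q": PAYLOAD},                  # input from the reproduction steps
        headers={"Accept": "text/html"},
        timeout=10,
    )

    # Observed deviation: the payload comes back unencoded in the response body.
    assert PAYLOAD in resp.text, "not reproduced: candidate for verified closure at retest"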

Reproduction steps assume context the developer does not have

The steps were written by the tester to remind themselves what they did, not as a runnable script for someone who has never seen the application. The developer reads them, asks three clarifying questions, and the security team eventually re-pairs with the developer to walk through the reproduction. The fix is writing reproduction steps for the next reader, not for the original tester: name the environment, the user role, the input, and the expected and observed responses in concrete language.

Severity is set without environmental modifiers

The CVSS vector is the auto-imported scanner default and the calibrated severity does not reflect the deployed environment (tenancy isolation, exposure to the internet, data sensitivity, compensating controls already in place). The developer sees a severity that does not match the engineering judgement of impact, and the calibration debate happens after the developer has already started or finished the fix, when the work is hardest to redirect. The fix is calibrating CVSS environmental modifiers at finding open and recording the rationale on the record so the developer reads the calibrated severity rather than the raw default.
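
As a worked illustration of the calibration step: the open-source cvss Python package (an assumption here, not part of the platform) can score the scanner default and the calibrated vector side by side, with the rationale recorded next to the modifiers.

    # Worked CVSS 3.1 calibration sketch using the open-source `cvss` package
    # (pip install cvss); vectors and rationale are for a hypothetical finding.
    from cvss import CVSS3

    # Scanner default: reflected XSS with internet-facing assumptions (base 6.1, Medium).
    raw = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N")

    # Calibrated: the deployed asset is reachable only from the internal network,
    # so Modified Attack Vector becomes Adjacent (MAV:A) and the environmental score drops.
    calibrated = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N/MAV:A")

    print(raw.scores())         # (base, temporal, environmental)
    print(calibrated.scores())  # environmental score reflects the deployed asset

    rationale = "asset not internet-reachable; compensating WAF; no sensitive data tier"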

Fix expectations are missing or generic

The recommendation is a phrase from the template ("validate user input", "use parameterised queries", "patch to the latest version") without a verifiable claim or named acceptable evidence. The developer ships a fix that the security team rejects because it does not satisfy an unstated standard. The fix is writing the verifiable claim and the acceptable evidence on the finding before the handoff, so the developer reads the closure criteria rather than guessing them.

Routing happens in chat instead of on the record

The handoff to the developer is a Slack ping, an email, or a hallway conversation. The finding is not assigned to a named owner on the record and the audit lookback cannot reconstruct who knew what when. The fix is making the handoff a state event on the finding with the named developer or team owner attached, with team management RBAC granting workspace access or with the branded client portal granting scoped access for cross-organisation cases.

Retest creates a parallel record that does not bind to the original

The retest is a fresh finding, fresh evidence, and a fresh state machine. The closure history of the underlying issue is split across two records and the audit lookback has to walk both. The fix is binding the retest result (re-run request and response, SAST re-run, configuration verification) to the original finding so the state machine is one record from open to closed, and a regression at retest reopens the original record rather than creating a new one.

Six fields every evidence package has to record on the finding

A defensible package is six concrete fields on the finding record, not an abstract paragraph in a security playbook. Anything missing from the list below is a known gap in the package rather than a detail that surfaces later as a developer rework cycle, a leadership-review question, or an audit finding the security reviewer has to reconstruct from a chat search.

Issue class title and structured description

A title in concrete asset-and-issue-class language (not a scanner identifier), a description sourced from the closest match in the 300+ template library so issue-class language stays consistent across engagements, and a free-text addendum that names the deployment-specific context. The developer recognises the pattern from the first read instead of decoding improvised prose.

Calibrated CVSS 3.1 vector with environmental rationale

The CVSS 3.1 base vector inherited from the scanner or the template, the environmental modifiers calibrated against the deployed asset (tenancy, exposure, data sensitivity, compensating controls), and the rationale for the recalibration on the finding so severity drift is reviewable. The developer reads the calibrated severity, not the raw scanner default.

Reproduction steps, environment, and observed behaviour

Numbered steps a developer can re-run, the environment they apply to, the user role required, the input that triggers the issue, the expected response, and the observed deviation. The steps are the contract the retest reads against and the regression test (when one is authored) is built from.
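
For illustration, a hypothetical set of steps at this standard:

    1. In the staging environment, sign in with the "member" role test account.
    2. Send GET /search?q=<script>alert(1)</script> with the Accept: text/html header.
    3. Expected: the response encodes the payload and renders it as inert text.
    4. Observed: HTTP 200 with the payload reflected unencoded in the results list.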

Request and response or SAST trace artefact

The full reproducing request and response (with sensitive values redacted to the documented standard) on runtime findings, or the entry-point and sink trace with file paths and line numbers on SAST findings. The artefact is the structured exchange, not a screenshot of the structured exchange. The retest replays from this artefact.

Affected asset, code-path binding, and named owner

The verified domain plus URL or endpoint plus credential class for runtime findings, the repository plus file plus line range plus commit reference for code findings, and the named developer or team owner inherited from the asset ownership map. The handoff is a routing decision against this binding, not a hallway negotiation.

Fix expectation, acceptable evidence, and closure criteria

The verifiable claim the developer is expected to satisfy, the named acceptable remediation evidence (pull request reference, SAST re-run, dependency manifest delta, configuration diff, unit test), and the closure criteria the security reviewer will check. The developer reads the closure decision before the work starts rather than after the rejection.
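
Treated as data, the six fields make package completeness checkable. A minimal sketch, assuming a plain dictionary record and illustrative field names:

    # Completeness check over the six fields; the names are illustrative, not a schema.
    REQUIRED_FIELDS = (
        "title", "description",               # issue-class title and description
        "cvss_vector", "severity_rationale",  # calibrated severity with rationale
        "reproduction",                       # steps, environment, observed behaviour
        "artifact",                           # request/response or SAST trace
        "asset", "owner",                     # asset binding and named owner
        "fix_expectation", "closure_criteria",
    )

    def package_gaps(finding: dict) -> list[str]:
        """Return the known gaps in the evidence package, if any."""
        return [field for field in REQUIRED_FIELDS if not finding.get(field)]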

Evidence packaging operating checklist

At finding open, at handoff, and at retest, the security reviewer walks a short checklist against the record. Each item takes minutes; missing any one of them is the source of the failure modes above and the rework cycles that follow.

  • Open the finding on the engagement record with the title in concrete asset-and-issue-class language, the closest matching template applied, and the calibrated CVSS 3.1 vector with environmental modifiers reasoned on the record
  • Capture reproduction steps in the order a developer can re-run them, naming the environment, user role, input, expected response, and observed deviation in concrete language rather than in tester shorthand
  • Attach the reproducing request and response (or the SAST trace with file paths and line numbers) to the finding as the structured artefact, with the screenshot supporting the artefact rather than replacing it
  • For runtime findings, attach the verified domain reference and the credential class used; for code findings, attach the repository connection (GitHub, GitLab, or Bitbucket OAuth) and the file path, line range, and commit reference
  • Resolve the named developer or team owner from the asset ownership map and assign the finding on the record so the routing is not a hallway negotiation later
  • Write the fix expectation as a verifiable claim (parameterise the query, add the authorisation check, upgrade the dependency, set the configuration value) and name the acceptable remediation evidence the closure decision will read against
  • Hand the finding to the developer through workspace team management RBAC or the branded client portal so the conversation lives on the same record the security team operates against
  • Retest against the recorded reproduction steps in the same environment, attach the retest evidence to the original finding, and move the state to verified closed on the same record
  • Trigger AI report generation against the engagement so the leadership view, the developer work-item summary, and the audit lookback all read against the same evidence pack on the finding record

How the evidence package looks in SecPortal

Evidence packaging runs on the same feature surfaces the rest of the security programme already uses: findings management, document management, repository connections, code scanning, authenticated scanning, the client portal, AI report generation, and the activity log. The discipline is binding the six layers to a single finding record so the developer, the security reviewer, the leadership view, and the audit lookback all read against the same evidence pack.

Findings management as the evidence record

Every finding lives in findings management with the title, the calibrated CVSS 3.1 vector, the affected asset, the named owner, and the structured description sourced from the 300+ template library. The evidence package is a property of the record rather than a folder of attachments living somewhere else.

Document management for reproducible artefacts

Reproduction artefacts (request and response captures, SAST traces, screenshots, configuration diffs, supporting payloads) attach through document management on the engagement so the artefact lives next to the finding rather than in a side folder the next reader cannot find.

Repository connections for code-path binding

For code findings, repository connections through GitHub, GitLab, or Bitbucket OAuth let the file path, line range, and commit reference appear on the finding. The developer reads the finding next to the code instead of searching the repository from a free-text description.

Code scanning for SAST and SCA evidence

Code scanning (Semgrep SAST and dependency analysis) generates the source trace, the affected dependency reference, and the suggested remediation that feed the evidence package directly. The retest re-runs the same scanner against the same scope so the closure evidence matches the open evidence.
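
Illustratively, the trace layer on a code finding reads as an entry point, a path, and a sink; the file paths and line numbers below are hypothetical:

    entry point: app/routes/search.py:42   (request parameter q enters build_query)
    path:        app/services/query.py:17  (q concatenated into the SQL string)
    sink:        app/db/client.py:88       (raw string executed against the database)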

Authenticated scanning for behind-login evidence

Authenticated scanning with cookie, bearer, basic, or form-login credentials produces evidence for findings behind the login screen. AES-256-GCM encrypted credential storage keeps the credentials defensible and the credential lifecycle event trail lands on the activity log.

Client portal for cross-organisation handoff

When the developer sits in a different organisation (a vendor team, a client engineering team, a contracted developer), the branded client portal on the tenant subdomain grants scoped access to the finding so the developer reads the evidence pack on the same record the security team operates against, without granting full workspace access.

AI report generation against the evidence pack

AI report generation derives the remediation summary, the technical writeup, and the executive narrative from the live finding record so the leadership view, the developer work-item summary, and the audit lookback all read against the same evidence pack rather than against three different documents.

Activity log as the evidence audit trail

Every state event (finding opened, evidence attached, severity recalibrated, fix expectation recorded, handed off, retested, reopened, closed, accepted with exception) lands on the activity log with timestamp and user attribution. The CSV export is the evidence the SOC 2, ISO 27001, PCI DSS, and NIST SP 800-53 review reads behind the closure claim.
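
Illustratively, the exported trail for one finding might read as follows; the column names and values are hypothetical, not the actual export schema:

    timestamp,finding,event,user
    2025-03-02T09:14:00Z,FND-1042,opened,a.reviewer
    2025-03-02T09:40:00Z,FND-1042,evidence_attached,a.reviewer
    2025-03-03T11:05:00Z,FND-1042,handed_off,a.reviewer
    2025-03-10T16:22:00Z,FND-1042,retested,a.reviewer
    2025-03-10T16:30:00Z,FND-1042,closed,a.reviewer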

Compliance tracking against the evidence record

Compliance tracking ties the evidence pack on each finding to the relevant control statements so the audit lookback reads the finding evidence as the operating evidence behind the framework adoption claim, not as a parallel deliverable.

What auditors expect from finding-level evidence

Finding-level evidence is the record the assessor reads behind the closure claim. The frameworks below all expect the programme to demonstrate that severity is calibrated, the affected asset is named, the reproduction is reproducible, the fix expectation is verifiable, and the closure trail is bound to the original finding. An evidence pack that reads as one record satisfies the audit ask without the post-hoc reconstruction sprint.

ISO 27001 Annex A: A.8.8 (management of technical vulnerabilities) for the calibrated severity, the evidence chain, the named owner, and the closure trail per finding; A.5.10 (acceptable use of information and assets) for the asset binding on the finding; A.5.36 (compliance with policies and standards) for the recorded fix expectation and acceptable evidence standard. The evidence pack on the finding is the operating evidence behind the clause.

SOC 2 (TSC): CC4.1 (monitoring activities) for the retest cadence and the validation evidence per finding; CC7.1 (system monitoring) for the source detection (scanner module, manual test, scanner import); CC7.4 (responding to security incidents) for the structured handoff and closure trail. The evidence pack reads as the operating evidence behind the trust services criteria.

NIST SP 800-53 Rev. 5: RA-5 (vulnerability monitoring and scanning) for the source detection and the calibrated severity per finding; SI-2 (flaw remediation) for the fix expectation, the closure criteria, and the retest evidence; SI-5 (security alerts, advisories, and directives) for the threat-context inputs the calibration reads against. The finding record is the evidence the assessor reads against the controls.

PCI DSS v4.0: 6.3.3 (remediating identified vulnerabilities), 11.3 (vulnerability management programme), and 11.4 (penetration testing programme) for the structured finding record, the calibrated severity, and the retest evidence per finding. The evidence pack is the operating evidence the QSA reads behind the remediation claim.

OWASP SAMM: Verification function (Issue Management practice) for the structured finding record, the named owner, the calibrated severity, the fix expectation, and the closure criteria; Operations function (Defect Management practice) for the retest cadence and the activity log audit trail. The evidence pack supports the maturity-level claims that depend on consistent issue handling across engagements.

Where the evidence package sits in the wider security programme

The evidence package is the per-finding artefact that sits between scanner triage and remediation closure. It composes with the rest of the security programme so the per-finding contract stays connected to the cross-engagement and programme-level work running alongside it.

Upstream and adjacent

Evidence packaging depends on scanner result triage for the validated finding the package wraps, asset ownership mapping for the named developer or team owner the routing resolves to, vulnerability prioritisation for the calibrated severity and the queue order, and SDLC vulnerability handoff for the stage-gate movement the package contract survives.

Downstream and reporting

Evidence rolls into remediation tracking for the closure cadence, retesting for the validation against the recorded reproduction steps, vulnerability acceptance and exception management for the structured exception flow when closure is not possible, security leadership reporting for the cadence leadership reads the closure trail against, and audit evidence retention and disposal for the long-tail evidence chain.

Pair the evidence packaging workflow with the buyer and operator material

The evidence package is operational; the surrounding research and buyer material explain the throughput and economics inputs the package quality calibrates against. Pair this workflow with the vulnerability remediation throughput research for the throughput inputs that calibrate evidence-quality investment, the security finding deduplication economics research for the upstream economics of clean discovery, the security findings deduplication guide for the operational dedup discipline, the secure code review checklist for the upstream code-review discipline, and the automating security findings management guide for the broader operating-model thinking. Compliance evidence reads against the ISO 27001 Annex A controls, the SOC 2 Trust Services Criteria, the NIST SP 800-53 SI and RA control families, and the OWASP SAMM Verification and Operations functions.

Buyer and operator pairing

Evidence packaging is the per-finding contract AppSec teams run as the bridge between security testing and engineering work, product security teams run as the standard for findings against the product surface, vulnerability management teams run as the discipline that makes the backlog reviewable, internal security teams run as the operating standard across engagements, security engineering teams run when the package binds to platform-owned assets, and DevSecOps teams run when the package binds to a code path, a build, or a pipeline owner. CISOs read evidence-package quality as the leading indicator behind closure throughput and regression rate.

What good evidence packaging feels like

Reproduction is the artefact

Reproduction steps, the request and response (or the SAST trace), and the environment description live on the finding as a re-runnable script. The screenshot supports the artefact rather than replacing it. The next reader (developer at handoff, reviewer at retest, auditor at lookback) can re-run the event without asking the original tester for context.

Fix expectation is verifiable

The fix expectation is a verifiable claim (parameterise the query, add the authorisation check, upgrade the dependency) and the acceptable remediation evidence is named (pull request reference, SAST re-run, dependency manifest delta). The developer reads the closure decision before starting the work, not after the rework cycle.

Closure binds to the original

Retest evidence attaches to the original finding rather than to a parallel record. A regression at retest reopens the same finding with the new context attached, and the closure history reads as one record from open to closed. The audit chain is not a multi-record reconstruction.

The same record everyone reads

The developer, the security reviewer, the leadership view, and the audit lookback all read against the same evidence pack on the engagement record. The leadership summary from AI report generation, the developer work-item, and the audit lookback are different views of one source rather than three documents that have to be reconciled.

A defensible security finding is the structured evidence package on the engagement record, not the description that accompanies the scanner output. Run the package on the finding so reproduction is re-runnable, the affected asset is named, the calibrated severity carries its rationale, the fix expectation is verifiable, and the closure trail binds to the original finding. For the upstream triage that decides which scanner output becomes a finding, pair this workflow with the scanner result triage workflow; for the stage-gate workflow the package contract survives, pair it with the SDLC vulnerability handoff workflow; for the closure cadence that runs after the package is in place, pair it with the remediation tracking workflow.

Frequently asked questions about packaging security findings for developers

How is a security finding evidence package different from a scanner report?

A scanner report is the raw output of a detection run; it lists the conditions the scanner matched on. A security finding evidence package is the structured record on the engagement that turns one of those conditions into a developer-ready remediation work-item: the calibrated severity, the affected asset and code path, the reproducible artefact, the fix expectation, the acceptable remediation evidence, and the retest plan. The scanner report is an input to the package; the package is the artefact the developer, the security reviewer, and the audit lookback all read against.

Why is reproduction step quality more important than the screenshot?

A screenshot is a picture of an event. Reproduction steps are the script that lets the next reader (the developer at handoff, the security reviewer at retest, the auditor at lookback) re-run the event in the same environment with the same input and observe the same behaviour. A finding with great screenshots and weak reproduction steps falls apart at retest because the closure decision has to be made against an interpretation of the steps rather than against the steps themselves. Programmes that invest in reproduction step quality see lower rework rates, shorter mean time to remediate, and cleaner audit lookbacks.

What does a fix expectation look like for a SQL injection finding?

The fix expectation is a verifiable claim: parameterise the affected query in the named module so user-controlled input cannot be interpreted as SQL syntax, and replace any string-concatenation construction of the query with the parameterised form. The acceptable remediation evidence is the pull request reference, the SAST re-run output that no longer flags the code path, and a unit test that submits the original payload and confirms it is treated as data rather than as SQL. The closure criteria are stated against this evidence, so the developer reads what counts as acceptable before starting the work rather than after the rework cycle.
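
As a sketch of that contract in code (module, table, and payload are hypothetical; sqlite3 stands in for whatever database driver the codebase uses):

    # The verifiable claim and the unit test that proves the new behaviour.
    import sqlite3

    def find_users(conn: sqlite3.Connection, name: str) -> list:
        # Before the fix: f"SELECT * FROM users WHERE name = '{name}'" (concatenation).
        # After the fix: the parameterised form the fix expectation names.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    def test_payload_is_treated_as_data():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")
        # The original reproducing payload must now be inert data, not SQL syntax.
        assert find_users(conn, "' OR '1'='1") == []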

How does the package behave when the developer is in a different organisation?

For cross-organisation handoff (a vendor team, a client engineering team, a contracted developer), the branded client portal on the tenant subdomain grants the developer scoped access to the finding so the conversation lives on the same record. The developer reads the calibrated severity, the reproduction steps, the request and response, the fix expectation, and the acceptable evidence standard from the engagement record without being granted full workspace access. The retest result attaches to the original finding the same way it does for an internal developer.

How does the evidence package handle severity changes during remediation?

Severity recalibrations land on the same finding record with the rationale (a compensating control was confirmed in place, the affected path was found unreachable in the build, the asset was downgraded in tier, runtime exposure changed) and the activity log captures the change with timestamp and user attribution. The developer and the security reviewer both read the calibrated severity from the record rather than from a parallel meeting note. Severity drift is reviewable as a property of the record and the audit lookback can reconstruct why the severity at closure differs from the severity at open.

What does retest evidence look like for a dependency upgrade fix?

The retest evidence for a dependency upgrade is the dependency manifest delta showing the affected package moved to the patched version, the SCA re-run on the post-fix build that no longer flags the affected dependency, and (where applicable) the test run that exercises the affected functionality with the new version. The retest evidence binds to the original finding and the state moves to verified closed on the same record. A regression at retest (the upgrade is reverted, the affected functionality breaks, a transitive dependency reintroduces the issue) reopens the original finding with the new context attached.
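
Illustratively, the manifest delta for a hypothetical Python dependency might read:

    --- a/requirements.txt
    +++ b/requirements.txt
    -somepkg==2.1.4    # version flagged by the SCA run on the open evidence
    +somepkg==2.1.9    # patched version named in the fix expectation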

How does the package fit alongside the SDLC vulnerability handoff workflow?

The SDLC vulnerability handoff workflow governs how findings move between SDLC stage gates (design, code, build, DAST, pre-production, operations) as a programme. The evidence package workflow is the per-finding contract that survives those stage gates: each finding carries the same calibrated severity, the same reproduction steps, the same affected asset binding, and the same fix expectation as it transits between stages. SDLC handoff is the programme layer; the evidence package is the record that handoff reads.

Does the platform automate any of the evidence package fields?

The platform structures the record so the field set is explicit, but the security reviewer is the author of the calibrated severity, the reproduction steps, the fix expectation, and the closure criteria. Scanner imports populate the title, the source detection, and the initial severity (and sometimes the request and response, the SAST trace, and the affected asset). The 300+ template library populates the structured description and starting remediation guidance. AI report generation drafts the leadership and developer narratives from the live record. The reviewer judgement that turns these inputs into a defensible package is the work the platform supports rather than replaces.

How it works in SecPortal

A streamlined workflow from start to finish.

1. Open the finding with calibrated context, not a one-line description

When a finding is opened on the engagement record, capture the title, the CVSS 3.1 vector with environmental modifiers, the affected asset (host, port, URL, code path, or dependency), the original detection source (scanner module, manual test, scanner import from Nessus, Burp Suite, or CSV), and the named owner inherited from the asset ownership map. Pick the closest of the 300+ finding templates so the description and remediation guidance are concrete from the first save instead of a freeform paragraph the developer has to interpret. The evidence package starts with the structured record, not with a future cleanup pass.

2. Attach reproducible evidence, not screenshots without context

Capture the reproduction steps in the order a developer can re-run them, the request and response pair (or the SAST trace for code findings), the supporting screenshot or video, and the environmental conditions the reproduction depends on (authenticated user role, browser, environment, dataset). Document management on the engagement holds the supporting artefacts so the developer reads a complete package rather than a screenshot they have to translate into a runnable test. The reproduction steps are the contract the retest reads against.

3. State fix expectations and acceptable remediation evidence

Write the fix expectation as a verifiable claim rather than as a generic recommendation. The verifiable claim names the change the developer is expected to make (parameterise the query, add the missing authorisation check, upgrade the vulnerable dependency, harden the configuration value), the validation evidence the security team will accept (the pull request reference, the SAST re-run, the dependency manifest delta, the configuration diff, the unit test that proves the new behaviour), and the closure criteria. The developer reads the evidence the closure decision will check rather than guessing what counts as acceptable.

4. Bind the finding to the code path or asset the developer owns

For code findings, attach the repository connection (GitHub, GitLab, or Bitbucket OAuth) and the file path, line range, and commit reference so the developer reads the finding next to the code. For runtime findings, attach the verified domain, the URL or endpoint, and the credential class used during authenticated testing. The developer does not search for the location of the issue; the location is on the record. Asset ownership mapping resolves the routing to the named developer or team without a hallway negotiation.

5. Hand the finding to the developer through the same record, not a side channel

The handoff to engineering is a state event on the finding rather than a Slack message or an email thread. The developer is granted access through team management RBAC (member or viewer role on the engagement) or, for cross-organisation cases, through the branded client portal so the developer reads the finding on the same record the security team operates against. Notifications and notes attach to the finding so the conversation has provenance and the audit trail does not depend on a chat search six months later.

6. Retest against the recorded reproduction steps and bind the result to the original finding

When the developer claims the fix is in place, run the retest against the recorded reproduction steps in the same environment the original finding documented. Attach the retest evidence (re-run request and response, SAST or SCA re-run, configuration verification, manual reproduction attempt) to the same finding rather than as a parallel record. The retest result is bound to the original finding so the audit chain reads as one record from open to closed, and a regression at retest reopens the same record with the new context attached rather than starting a new finding.

7. Close the loop with AI report generation and the activity log

AI report generation derives the remediation summary, the executive narrative, and the closure trail from the finding record so the leadership view, the developer work-item, and the audit lookback all read against the same source. The activity log captures every state transition (opened, evidence attached, fix expectations recorded, handed off, retested, reopened, closed, accepted with exception) with timestamp and user attribution and the CSV export is the trail the SOC 2, ISO 27001, PCI DSS, and NIST SP 800-53 review reads behind the closure claim.

Ship developer-ready security findings on one defensible record

Reproduction steps, request and response, fix expectations, retest criteria, and closure trail attached to one engagement record. Start free.

No credit card required. Free plan available forever.