
Security Champions Programme Guide: How to Scale AppSec Across Engineering

A central AppSec function does not scale to the size of a modern engineering org. Once you have more than a handful of product teams, the bottleneck is not how good your reviewers are; it is how many reviews can physically pass through them. A security champions programme distributes ownership of finding triage, secure code review, threat modelling, and remediation back into the teams that wrote the code, without giving up the consistency, governance, and audit evidence that the central function still needs to keep. This guide walks through how to select champions, design the role, structure training, run the finding handoff, and operate the whole programme on a single record so the central team can read the state at any time.

Why a Security Champions Programme Matters

A security champion is a developer or platform engineer inside a product team who carries dedicated security responsibility on behalf of that team. The champion is not an AppSec specialist on rotation, and is not a dotted-line member of the security team. They are an engineer who already ships code in the team, already sits in the planning meetings, and already gets paged when a deploy goes wrong. Adding a champion to that team makes security a continuous, in-team activity rather than a quarterly visit from a function that is permanently outnumbered.

The economics are straightforward. Without champions, every secure code review, every finding triage call, and every threat model is a request that has to travel from the product team to the central AppSec function and back. A team of five reviewers cannot hold quality conversations with eighty product teams every sprint, no matter how good their tooling is. The result is the pattern most enterprises know well: triage queues grow, review SLAs slip, threat models go stale, and findings sit in spreadsheets while developers wait for someone to tell them what to do. Champions break that pattern by handling the first-pass conversation inside the team where the code lives.

For internal security leaders, AppSec leads, and product security organisations, the right framing is that a champions programme is not a substitute for the central function. It is the way the central function scales without growing linearly with engineering. The central team still defines the standard, owns the policy, runs the training, calibrates severity, signs off on the controversial calls, and produces the programme-level read for leadership. Champions extend the reach of that standard into every team that ships code. Done well, the programme also becomes the primary AppSec recruiting funnel: the engineers who take the work seriously self-select as future security hires.

Programme Goals: What a Champions Programme Should Actually Deliver

Before you select anybody, write down what the programme is supposed to deliver in six months. Without this, every champion drifts into a different job and the central function cannot tell whether the programme is working. Most successful programmes pick three or four operating outcomes from the list below and tie them to measurable signals that already exist in the engagement record.

Faster Finding Triage Inside the Team

Findings raised against the team are triaged by the champion within an agreed window. The central function only steps in for severity disputes, cross-team findings, or findings that cross compliance boundaries. Measure the delta between the time a finding is opened on the team and the time the champion records the triage decision. The same record should already track who triaged it and when. Our breakdown of live finding triage applies inside a champions programme as the playbook the champion follows for the team-side triage pass.
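
As a minimal sketch of that latency measurement, assuming each finding record exposes an opened timestamp and a triage-decision timestamp (the field names here are illustrative, not a fixed schema):

```python
from datetime import datetime
from statistics import median

# Illustrative finding records exported from the engagement record.
findings = [
    {"opened_at": "2025-01-06T09:15:00", "triaged_at": "2025-01-06T14:40:00"},
    {"opened_at": "2025-01-07T11:00:00", "triaged_at": "2025-01-09T10:05:00"},
    {"opened_at": "2025-01-08T16:30:00", "triaged_at": None},  # still in the queue
]

def triage_latency_hours(finding):
    """Hours from finding open to the champion's recorded triage decision."""
    if finding["triaged_at"] is None:
        return None  # not yet triaged; excluded from the median
    opened = datetime.fromisoformat(finding["opened_at"])
    triaged = datetime.fromisoformat(finding["triaged_at"])
    return (triaged - opened).total_seconds() / 3600

latencies = [h for f in findings if (h := triage_latency_hours(f)) is not None]
print(f"Median champion triage latency: {median(latencies):.1f}h")
```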

First-Pass Secure Code Review on the Team

Pull requests above an agreed risk line get a first-pass security review from the champion before they reach the central AppSec queue. The central reviewer still owns the final sign-off for high-risk changes, but ninety percent of the noise (input handling, sane defaults, dependency hygiene, missing authorisation checks) is caught at the source. The secure code review checklist is a good starting point for the champion review template.
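
What counts as "above the risk line" varies by team, but a simple version can be wired into CI so the champion review is requested automatically. A minimal sketch, assuming the line is defined by touched paths and diff size (the paths and threshold are placeholders, not recommendations):

```python
# Hypothetical CI helper: flag pull requests that cross the team's
# risk line and therefore need a champion first-pass review.
SENSITIVE_PATHS = ("auth/", "payments/", "crypto/", "session/")
LARGE_DIFF_LINES = 400  # placeholder threshold

def needs_champion_review(changed_files, lines_changed):
    """True if the PR crosses the agreed risk line."""
    touches_sensitive = any(f.startswith(SENSITIVE_PATHS) for f in changed_files)
    return touches_sensitive or lines_changed >= LARGE_DIFF_LINES

# A PR that touches the authorisation code always qualifies.
print(needs_champion_review(["auth/permissions.py"], 42))  # True
```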

Lightweight Threat Modelling for Major Changes

Major design changes get a one-page threat model produced by the champion before implementation starts. The central function reviews and challenges the threat model rather than authoring it. See the threat modelling guide for a structure that fits inside a normal design review.
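
The one-page constraint is the point: the champion captures just enough structure for the central function to challenge. A minimal sketch of what that record might hold (the fields are an assumption, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class LightweightThreatModel:
    """One-page threat model a champion attaches to a design review."""
    change: str                 # what is being built or changed
    assets: list[str]           # data or capabilities worth attacking
    entry_points: list[str]     # where untrusted input arrives
    threats: list[str]          # how the system fails, not how it works
    mitigations: list[str]      # controls mapped to each threat
    open_questions: list[str] = field(default_factory=list)
    challenged_by_central: bool = False  # flipped after the central review
```

A model that cannot be filled in at this level of brevity is usually a signal that the design itself is not settled yet.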

Remediation Driven Inside the Team

Findings owned by the team are driven to closure by the champion. The champion is the recipient and the chaser, not just the relay. This is where most programmes either succeed or quietly die: if the champion cannot get team time to fix issues, the role becomes ceremonial. Tie remediation throughput to the same SLAs the rest of the organisation uses, including the policy targets in the vulnerability SLA workflow and the cycle-time stages covered in our remediation throughput research.

Programme Visibility for the Central Function

The central AppSec or product security function should be able to read the state of every team without having to chase chat threads. That is a tooling and recordkeeping problem more than an organisational one, and we cover it in the operating-record section below.

Selecting Champions: Who to Pick and Why

Selection is the single biggest predictor of whether the programme works. Get it wrong and the programme is a checkbox. Get it right and the programme compounds: champions become senior engineers, mentor junior engineers, and seed the next generation of champions on adjacent teams.

A champion needs three properties in roughly this order. First, the champion needs technical credibility inside the team. The champion should already be respected by their peers, otherwise security guidance will be quietly ignored. Second, the champion needs an instinct for the security mindset, the ability to think about how a system fails rather than how it succeeds. This is teachable, but it is faster to recruit for it than to build it from a standing start. Third, the champion needs the bandwidth to do the work. A senior engineer who is already at one hundred and twenty percent capacity will not be a good champion no matter how interested they are.

Avoid two common selection mistakes. Do not appoint the most junior person on the team because they looked enthusiastic in an offsite session. The role is more political than technical: the champion has to push back on senior engineers, decline scope expansions, and call findings critical when the team wants to call them low. A junior engineer cannot do that consistently. Do not appoint the team lead either. The team lead is already accountable for delivery, and their incentive to ship will reliably beat their incentive to slow down for a security review. Pick a senior engineer who is not the team lead but is trusted by the team lead.

Make the role a real role. Allocate explicit time, typically twenty to thirty percent of a sprint, and write the responsibilities into the engineer's objectives so it is not invisible work. Champions who do the work in their spare time burn out within two quarters. Champions whose objectives include the programme outcomes stay in the role for years.

Defining the Champion Role: Responsibilities and Boundaries

Write the role description as a one-page document and circulate it to engineering leadership before the programme launches. The document should be concrete enough that an engineering manager can decide whether to nominate a particular engineer, and concrete enough that a champion knows when to push back on a request that is out of scope. The scope and boundaries below have worked across most internal AppSec organisations.

In Scope for the Champion

  • First-pass triage of findings raised against the team.
  • First-pass security review on pull requests above the risk line.
  • Authoring lightweight threat models for major design changes.
  • Driving remediation to closure on findings the team owns.
  • Capturing retest evidence and updating the engagement record.
  • Running team-level training and brown-bag sessions on common pitfalls.
  • Representing the team at the cross-team security syndicate (covered later).

Out of Scope for the Champion

  • Final severity calls on contested findings (central function decides).
  • Approving exceptions or risk acceptances above a defined threshold.
  • Cross-team findings that span more than one product owner.
  • Penetration testing scope, scheduling, or vendor selection.
  • Compliance interpretation across frameworks.
  • Acting as the final security sign-off for high-risk releases.

Authority and Escalation

The champion has authority to triage and to challenge engineering decisions inside the team, but no authority to authorise risk on behalf of the organisation. Anything that crosses the agreed acceptance threshold escalates to the central function and is logged on the same record. This pattern keeps the central function in control of risk while still letting champions do real work. The decision-record artefact lives in our vulnerability acceptance and exception management workflow, which covers both the per-decision record and the org-wide ledger.

Training Curriculum and Onboarding

Champions arrive with strong engineering skills but uneven security backgrounds. The programme owns their training. Plan the first six weeks deliberately and treat the curriculum as a continuing commitment rather than a one-off bootcamp.

A working onboarding curriculum has four chunks:

  • Foundations: the OWASP Top 10, the CVSS 3.1 vector, the team's severity calibration, the org's remediation SLAs, and the vocabulary the central function uses on findings. Our OWASP Top 10 explained and CVSS scoring explained posts are designed to be onboarding reading.
  • Review skills: the secure code review checklist, the threat modelling pattern, and a worked walk-through of a real finding from intake to closure.
  • The team-specific surface: the technologies the team owns, the authentication model, the data the team handles, the deployment topology, and the threat actors that actually matter.
  • Shadowing: the new champion sits in on a live triage session, a live secure code review, and a live retest from the central function before owning anything alone.
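
For the foundations chunk, it helps to make the CVSS 3.1 vector concrete on day one, since champions will read one on every finding. A minimal sketch of splitting a vector string into its named base metrics:

```python
# Split a CVSS 3.1 vector string into its named base metrics.
def parse_cvss_vector(vector):
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError(f"Not a CVSS 3.1 vector: {vector}")
    return dict(part.split(":") for part in metrics.split("/"))

print(parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:N"))
# {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'R', 'S': 'U', 'C': 'H', 'I': 'H', 'A': 'N'}
```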

After onboarding, run a continuing curriculum that mirrors the programme's actual workload. A monthly syndicate meeting where champions share recent findings, review near-misses, and walk through one threat model is a strong rhythm. A quarterly deep-dive on a specific topic (TLS pitfalls, OAuth edge cases, container hardening) keeps the technical content moving. An annual review with the central function ties the programme back to engineering objectives.

Make the training visible. Champions should be able to point to a list of completed training modules at performance review time. This is not just career-development theatre. It is how the programme survives organisational changes: when a new VP of engineering asks why fifteen engineers have a security allocation, the training record is the answer.

Finding Handoff: From the Central Function to the Champion

The handoff from the central function to the champion is the moment most programmes fall over. The most common failure is not the volume of findings; it is that findings arrive without enough context for the champion to act, so they bounce back to the central function for clarification. Design the handoff deliberately and most of that overhead disappears.

A clean handoff has six elements on every finding: a precise reproduction case, the affected component mapped to the team that owns it, the severity with the CVSS vector, the remediation guidance with language-specific examples, the agreed SLA, and the named champion as the assignee. If any of these is missing, the handoff is incomplete and the central function should not pass the finding across yet. This is the same record the rest of the organisation reads from, so any update the champion makes (status, evidence, retest result) is visible to the central function in real time without an extra status meeting.
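
A minimal sketch of enforcing that completeness check before the finding crosses to the champion (the field names are illustrative, not a fixed schema):

```python
# The six handoff elements; a finding missing any of them bounces
# back to the central function instead of going to the champion.
REQUIRED_HANDOFF_FIELDS = (
    "reproduction",   # precise reproduction case
    "component",      # affected component mapped to the owning team
    "cvss_vector",    # severity with the CVSS 3.1 vector
    "remediation",    # guidance with language-specific examples
    "sla",            # agreed remediation window
    "assignee",       # the named champion
)

def handoff_gaps(finding):
    """Return the handoff elements that are missing or empty."""
    return [f for f in REQUIRED_HANDOFF_FIELDS if not finding.get(f)]

finding = {
    "reproduction": "POST /orders with a tampered price field",
    "component": "checkout-service",
    "cvss_vector": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N",
    "remediation": "",  # not written yet
    "sla": "30 days",
    "assignee": "champion@checkout-team",
}
print(handoff_gaps(finding))  # ['remediation'] -> not ready to hand off
```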

Wire the handoff into the same workflow that the central function already runs. Our scanner result triage workflow covers the upstream side, where scanner output becomes triaged findings, and the remediation tracking workflow covers the downstream side, where the champion drives the finding to closure with retest evidence on the same record. The vulnerability prioritisation workflow governs which findings go to the champion first.

Set a default cadence. A standing weekly thirty-minute slot between the central reviewer and the champion clears the queue, addresses ambiguity, and prevents the handoff from becoming a long asynchronous chain that nobody wants to read. Treat it as a small operational meeting, not a programme review.

RBAC and the Single Operating Record

The programme runs on a single operating record. That sounds obvious, but many programmes operate from three or four parallel records: a tracker for the central function, a spreadsheet for the champion, a chat thread for the team lead, and a ticket queue for the engineers doing the work. Each record drifts from the others within a sprint, and the central function ends up reading the wrong number at programme reviews.

The fix is to scope every actor to the same record using role-based access control. The central function operates as workspace admin or member. Champions operate as members on the engagements that cover the services their team owns. Engineers fixing the issue see the finding through a scoped portal view that shows them the finding, the severity, the evidence, and the remediation guidance, without exposing the full platform. SecPortal's team management feature implements RBAC tiers (owner, admin, member, viewer, billing) so this scoping is a configuration decision rather than a custom build, and workspace-enforced MFA with AAL2 session promotion gates the access at the session layer. The read-only portal view covers the team-level audience that needs to read finding state without seeing the rest of the programme.
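
The scoping decision itself is simple enough to reason about in a few lines. A generic sketch of record-scoped access, not SecPortal's implementation (the role names loosely follow the tiers above; everything else is an assumption):

```python
# Generic sketch of record-scoped RBAC: a user acts on a finding only
# if their role grants the action AND the engagement is in their scope.
ROLE_ACTIONS = {
    "admin":  {"read", "triage", "edit", "administer"},
    "member": {"read", "triage", "edit"},   # champions sit here
    "viewer": {"read"},                     # the scoped portal view
}

def can(role, action, user_engagements, engagement_id):
    """True if the role grants the action on this specific engagement."""
    in_scope = engagement_id in user_engagements
    return in_scope and action in ROLE_ACTIONS.get(role, set())

# A champion (member) can triage findings on their own team's engagement...
print(can("member", "triage", {"eng-checkout"}, "eng-checkout"))      # True
# ...but cannot administer it, and cannot read another team's record.
print(can("member", "administer", {"eng-checkout"}, "eng-checkout"))  # False
print(can("member", "read", {"eng-checkout"}, "eng-payments"))        # False
```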

Anchor the work on the engagement record. SecPortal's findings management feature tracks each finding with a CVSS 3.1 vector, owner, evidence, and remediation status, and the engagement management feature carries the scope, attached documents, and team assignment for the work the champion is doing. Importing prior findings via Nessus, Burp Suite, or CSV keeps the record continuous when champions take over an existing service.

Governance Cadence: Syndicate, Reviews, and the Central Function

The programme runs on three rhythms that the central function controls. Without them, champions drift and the central function loses the read.

The Champions Syndicate

A monthly meeting that brings every champion into one room with the central function. The agenda is the same every month: recent findings worth highlighting, near-misses, threat models in flight, training topic of the month, and a programme update from the central lead. The syndicate is the programme's internal community; it is where champions learn from each other and stay connected to the central standard. Keep it tight (sixty minutes) and keep the format consistent.

Quarterly Programme Review

A quarterly review with engineering leadership where the central function reports on programme outcomes. Use the same numbers the rest of the organisation runs on: triage time, remediation throughput, retest closure, training completion, and the count of cross-team findings that the programme caught at the source. Pair this with security leadership reporting so the AppSec read sits inside the programme-wide read rather than next to it.

Annual Recalibration

An annual review where the central function recalibrates the role description, training curriculum, and severity calibration. Engineering changes (new languages, new platforms, new threat models) show up here. So do programme problems: champions who are not getting team time, teams that have outgrown a single champion, or content gaps in the curriculum. Treat the annual recalibration as a first-class commitment, not a paragraph at the end of a quarterly review.

Audit Evidence and Compliance Mapping

For internal AppSec organisations sitting inside an enterprise, the champions programme is also an audit artefact. Multiple frameworks expect named ownership, documented training, and demonstrable triage behaviour, and a champions programme answers most of those questions if you keep the evidence on the same record the programme operates on.

ISO 27001 Annex A 8.8 (Management of technical vulnerabilities) expects a documented process for identifying, evaluating, and treating technical vulnerabilities, with named ownership for the evaluation. SOC 2 CC7.1 expects monitoring of system components and a defined response. PCI DSS Requirement 6.3 expects identification, prioritisation, and remediation of vulnerabilities tied to a documented role. NIST SP 800-53 RA-5 expects vulnerability scanning with response. Each of these maps naturally to the champion-as-owner model when the role description, training record, and per-finding triage decisions live on the same engagement record.
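
One way to keep that mapping explicit is to hold it as data next to the evidence, so the pack for any control can be assembled from the record on demand. A minimal sketch (the control references are the ones above; the artefact names are assumptions about what the record holds):

```python
# Map each framework control to the champion-programme artefacts
# that answer it when an auditor asks.
CONTROL_EVIDENCE = {
    "ISO 27001 A.8.8":  ["role description", "per-finding triage decisions",
                         "remediation records"],
    "SOC 2 CC7.1":      ["scanner intake records", "triage decisions"],
    "PCI DSS 6.3":      ["role description", "prioritisation decisions",
                         "remediation records"],
    "NIST 800-53 RA-5": ["scan schedule", "triage decisions", "retest evidence"],
}

def evidence_pack(controls):
    """Deduplicated artefact list for the controls an auditor asks about."""
    return sorted({a for c in controls for a in CONTROL_EVIDENCE.get(c, [])})

print(evidence_pack(["ISO 27001 A.8.8", "SOC 2 CC7.1"]))
```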

SecPortal's compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST and exports the evidence pack as CSV when the auditor asks for it. The activity log keeps the timestamped chain of who did what (triage, status change, evidence upload, retest, closure) for findings, engagements, scans, documents, comments, invoices, and team changes, with plan retention of 30, 90, or 365 days. Pair the programme with our audit evidence half-life research to plan how long the evidence stays current and our security control drift research to plan recapture cadence between audits.

Document training as evidence. The programme's training curriculum, completion records, and the syndicate cadence are themselves audit artefacts. Attach the training material to the engagement record using the document management feature so the auditor reads training evidence from the same record they read the finding evidence from.

Metrics That Matter (and Metrics That Mislead)

Pick three or four programme metrics. Resist the temptation to instrument everything. The metrics that matter measure outcomes the central function actually wants to change. The metrics that mislead measure activity, which is easy to game and rarely correlates with outcome.

Useful Programme Metrics

  • Median time from finding open to first triage decision (champion latency).
  • Percentage of findings closed within SLA across the team.
  • Reopen rate on findings the champion closed (calibrates triage quality).
  • First-pass review coverage on pull requests above the risk line.
  • Cross-team findings caught at the source (champion versus central detection).
  • Training completion against the curriculum.
  • Retest closure latency on the team.

Misleading Programme Metrics

  • Number of findings raised by champions (volume rewards noise).
  • Number of pull requests reviewed without a quality signal.
  • Number of training hours per champion (input, not outcome).
  • Total findings closed without a calibration on severity or reopen rate.
  • Champion satisfaction surveys without a behavioural signal underneath.

For the throughput-side metrics, anchor the definitions to the cycle-time stages in our remediation throughput research so the programme reads the same numbers the wider VM function reads, and use the reopen rate research to set a sensible threshold for when a champion's triage quality needs a coaching conversation.
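
A minimal sketch of two of the useful metrics above, computed from per-finding records (the field names are illustrative):

```python
def sla_closure_rate(findings):
    """Percentage of closed findings that closed within their SLA."""
    closed = [f for f in findings if f["status"] == "closed"]
    if not closed:
        return 0.0
    within = sum(1 for f in closed if f["days_to_close"] <= f["sla_days"])
    return 100.0 * within / len(closed)

def reopen_rate(findings):
    """Share of champion-closed findings that were later reopened."""
    closed = [f for f in findings if f.get("closed_by_champion")]
    if not closed:
        return 0.0
    return 100.0 * sum(1 for f in closed if f["reopened"]) / len(closed)

findings = [
    {"status": "closed", "days_to_close": 12, "sla_days": 30,
     "closed_by_champion": True, "reopened": False},
    {"status": "closed", "days_to_close": 45, "sla_days": 30,
     "closed_by_champion": True, "reopened": True},
]
print(sla_closure_rate(findings), reopen_rate(findings))  # 50.0 50.0
```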

Common Failure Modes and How to Avoid Them

Champion Becomes a Liaison, Not an Owner

The most common failure: the champion forwards everything to the central function and the programme becomes a relay layer. The fix is to give the champion authority and the time to do the work, and to escalate only on the contested calls. If the champion is forwarding every finding, the role description was not specific enough, the training was incomplete, or the team did not allocate real time. Diagnose the cause before changing the champion.

Champion Burns Out

A champion doing the work in their spare time burns out within two quarters. The fix is upstream: allocate the time explicitly, write it into objectives, and make sure the engineering manager shares accountability for the programme outcomes. If the role takes more than thirty percent of a champion's sprint capacity, the team probably needs a second champion, not a more efficient one.

Programme Becomes Compliance Theatre

If the programme exists because an auditor asked for it, the central function will measure the wrong things and champions will deliver the wrong outcomes. The fix is to keep the operating outcomes (triage time, remediation throughput, retest closure) ahead of the audit-evidence outcomes (training records, role assignments, framework mapping) in the programme review. The evidence is a by-product of doing the work well, not the point of the work.

Central Function Loses the Read

If the central function has to chase chat threads to know the state of any team, the operating record is fragmented. The fix is to consolidate finding state, triage decisions, retest evidence, and exception decisions onto the same record every actor already uses. Our security tool consolidation workflow covers the migration path for getting back to one record.

Programme Stops Recruiting

A champions programme that does not recruit new champions every six months is one rotation away from disappearing. The fix is to make champion alumni a recognised group: champions who rotate out stay in the syndicate as alumni for one cycle, and engineering managers nominate replacements early enough that the new champion shadows the outgoing one for at least one sprint.

Where the Programme Sits in the Wider Security Org

A champions programme is one workflow inside a wider AppSec or product security organisation. It sits alongside vulnerability management, compliance, scanner operations, and security leadership reporting. For an in-house engineering-side AppSec function, the programme is the team's primary scaling mechanism and the natural pairing with the SecPortal for AppSec teams workflow. For a cross-cutting product security organisation that sits between engineering, AppSec, vulnerability management, and incident response with PSIRT-style intake on top, the programme drives the engineering-side surface of the wider SecPortal for product security teams scope. For the platform engineers who run the scanner fleet, repository connections, and credential vault that the programme depends on, the SecPortal for security engineering teams workflow covers the underlying tooling stack the champions read from. For the CISO or director sponsoring the programme, the SecPortal for CISOs page covers how the programme's outcomes roll up into leadership reporting.

Pair the programme with adjacent enterprise workflows. The DevSecOps enterprise guide covers the CI/CD-side automation that complements the human champion in the team. The multi-team security operations guide covers the centralised, federated, and hybrid models the programme has to fit inside. The vulnerability management program guide covers the upstream and downstream workflow the champion plugs into.

Six-Month Launch Plan

For internal security leaders launching a champions programme, here is a workable six-month sequence. It is deliberately conservative: a programme that survives is worth far more than a programme that launches loudly and quietly stops in the second quarter.

  1. Month one: write the role description, the operating outcomes, the metrics, and the training curriculum. Get sign-off from engineering leadership and the central security function. Pick the first three pilot teams.
  2. Month two: recruit champions on the pilot teams. Run the foundations chunk of the curriculum. Wire the engagement record, the RBAC scoping, and the workspace MFA. Walk every champion through the operating record before they own anything.
  3. Month three: begin the handoff. Pilot champions take ownership of triage on their teams under shadowing. The first syndicate meets. Track the latency metric from day one so the central function has a baseline.
  4. Month four: expand to first-pass code review and lightweight threat modelling. Add three more teams to the pilot. Run the first programme review with the metrics you have, even if the data is thin.
  5. Month five: fold the audit evidence, training records, and compliance mapping into the engagement record. Run a calibration pass on severity decisions across champions to surface drift.
  6. Month six: roll out to the next wave of teams. Promote the first cohort of champions to mentor the new ones. Run the first quarterly programme review with leadership and the first annual-style content review on the curriculum. Decide which operating outcomes graduate from tracking to committed for the next half.

Run the Programme on a Single Record

The single biggest operational lever a security champions programme has is recordkeeping. If the central function, the champion, and the engineer fixing the issue all read the same finding from the same record, the programme runs itself most of the time. If they each operate from a different record, the programme spends most of its energy reconciling. SecPortal is designed around that single record: findings management with CVSS 3.1 calibration, engagement management for the team-scoped scope, RBAC for champion access tiers, the read-only portal view for engineers, document management for training material, the activity log for the audit trail, and compliance tracking for the audit-evidence pack.

None of those features replace the role of a real champion. They make sure the work the champion does is visible to the central function, defensible to an auditor, and continuous across team rotations.

Run your security champions programme on SecPortal

Stand up the programme's operating record in under two minutes. Free plan available, no credit card required.