Use Case

New application security onboarding
so every new service enters the programme with a baseline, not a gap

New applications enter the estate constantly: a new product line, a spin-out service, a new vendor-acquired component, a SaaS tenant launched off the platform, or a repo a team forked into a new service. Most security programmes meet these applications later than they should, after a scanner, an audit, or a customer questionnaire surfaces the gap. Run new application security onboarding as a structured workflow on the engagement record so every new service enters the programme with a documented baseline (threat model, code scan, baseline DAST, owner mapping, evidence trail) rather than appearing in the backlog months after launch with findings already aged.

No credit card required. Free plan available forever.

Run new application security onboarding on the engagement record

New applications enter the estate constantly. A new product line ships, a spin-out service launches, a vendor-acquired component is absorbed into the platform, a repository forks into a new internal tool, or a merger pulls in someone else's backlog. Most security programmes meet these applications later than they should, after a scanner picks them up, an audit asks for evidence, or a customer questionnaire surfaces the gap. By then the early decisions (authentication design, data classification, logging coverage, dependency posture) are baked into the codebase and the launch traffic. SecPortal runs application onboarding as a structured workflow on a single engagement record so every new service enters the programme with a documented baseline (threat model, code scan, baseline DAST, owner mapping, evidence trail) rather than appearing in the backlog months after launch with findings already aged.

This is the upfront workflow that introduces a new application to the security programme. For the steady-state cadence the application transitions into after intake closes, read the DevSecOps scanning workflow. For the SDLC stage-gate handoff once the application is in flight, read the SDLC vulnerability handoff workflow. For point-in-time review against a specific change set, read the code review workflow. For the broader programme picture, read the security testing programme management workflow. For the threat-modelling fundamentals the intake review applies, read the threat modelling guide.

Six failure modes that quietly break new application onboarding

Every onboarding failure that produces aged findings, audit gaps, or post-launch surprises traces back to one of the same six modes. Each one is invisible at intake time and visible at the next scan, the next audit, or the next leadership read.

The application reaches production before security meets it

A new product line, a spin-out service, or a vendor-acquired component goes live without a registered owner on the security side. The first time the security team sees the application is when an external scanner picks it up, an audit asks for evidence, or a customer security questionnaire surfaces it. By then the easy intake decisions (authentication design, data classification, logging coverage, dependency posture) are baked into the codebase and the launch traffic.

No threat model, so the early decisions are invisible

Without a threat model produced as part of intake, the early architectural decisions (where authentication sits, which trust boundaries exist, which dependencies handle sensitive data) are never recorded as security context. Six months later, when a finding lands on a downstream service, nobody can reconstruct whether the design was reviewed or whether the gap was assumed.

Repository, domain, and credential posture are unverified at intake

Scanning a service whose repository is not connected, whose domain is not verified, and whose authentication credentials are stored in a shared password manager produces unreliable evidence. The scan misses files behind authentication, targets domains the workspace does not control, and leaks credentials across triage seats. Verifying repository, domain, and credential posture is a prerequisite to producing baseline evidence the audit can defend.

Severity calibration is debated at every triage rather than set at intake

When intake skips severity calibration, every subsequent triage cycle re-debates whether a SAST finding on this application is truly high, whether a DAST finding on this domain is truly critical, and whether a dependency advisory carries the same weight here as on adjacent services. Calibrating severity once at intake (with environmental context, data classification, and exposure baked in) makes downstream triage a recalibration event rather than a fresh argument.

No owner map, so findings have nowhere to route

Onboarding without recording named owners on the engineering side means the first set of findings hits a generic queue with no clear remediation path. By the time ownership is sorted out, the SLA clock has already started running and the team that should have been involved at intake is now triaging legacy work rather than landing fixes against fresh context.

Intake closes implicitly when the launch ships

Without a documented intake closure rule (which severities have to close before launch, which can carry an exception, which carry forward to steady-state SLA), launch readiness becomes a hallway negotiation under deadline pressure. The intake decisions are unrecorded, the residual risk is uncited, and the next audit reads launch as an event without a documented gating step.

Six fields every intake engagement has to record

A defensible intake is six concrete fields on the engagement record, not a free-form kickoff document a security lead writes once and never references again. Anything missing from the list below should be logged as a known gap in the intake trail, not left for an assessor to spot first.

Application metadata

The application name, the owner team, the environment (production, staging, internal-only), the public exposure (internet-facing, partner-only, internal), the data classification, and the launch timeline. The metadata anchors every downstream decision (severity calibration, scan cadence, audit relevance) so it is captured first, not at the end.

Asset references

The repository or repositories the application is built from, the primary and ancillary domains, the dependent services and integrations, the cloud accounts the application runs in (where the security team is the cross-cutting record), and the data stores the application reads from or writes to. Every reference sits on the engagement record so scanning, ownership, and closure all key off the same registry.

Owner assignment

The named asset owner on the engineering side (the team or named lead responsible for fixes) and the named security owner on the workspace side. Ownership lands at intake so the first set of findings has somewhere to route. Generic team mailboxes do not satisfy this field; an engagement without a named human is one whose findings will not actually close.

Threat model summary

The intake threat model captures the architecture, the trust boundaries, the authentication surfaces, the data flows, and the third-party dependencies. The summary lives on the engagement as a document or finding bundle rather than in a one-off slide deck the security team cannot find six months later when the next finding lands on the same application.

Baseline scan plan

The plan for baseline code scanning (SAST plus dependency analysis on the connected repository), authenticated DAST against staging or production, and external scanning against the verified domain. The plan sets the scope, the cadence, and the expected runtime so intake produces a comparable baseline across applications rather than ad hoc evidence per service.
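A scan plan of this shape can be captured as structured data so intake produces a comparable record per application rather than a prose paragraph per service. A minimal sketch, assuming illustrative field names and repository/domain values rather than any actual SecPortal schema:

```python
# Hypothetical sketch of a baseline scan plan captured at intake.
# All field names and values are illustrative, not a SecPortal schema.
baseline_scan_plan = {
    "code": {
        "repository": "git@github.com:example/payments-api.git",  # connected repo
        "branch": "main",
        "tools": ["sast", "dependency-analysis"],
    },
    "authenticated_dast": {
        "target": "https://staging.payments.example.com",
        "credential_ref": "vault://intake/payments-api",  # stored encrypted, never inline
    },
    "external": {
        "domain": "payments.example.com",  # must be verified before the scan starts
        "checks": ["headers", "tls", "exposure"],
    },
    "cadence": "baseline-once-then-weekly",
}

def plan_is_complete(plan: dict) -> bool:
    """A plan is comparable across applications only if all three baselines are set."""
    return all(k in plan for k in ("code", "authenticated_dast", "external"))
```

Recording the plan as data rather than prose makes the completeness check mechanical: an intake missing one of the three baselines fails the check instead of being discovered at audit time.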

Intake closure rules

The criteria that have to be met before the engagement transitions to steady-state: which severities have to close before launch, which can carry a documented exception with compensating control, and which carry forward under the standard SLA. The rules sit on the engagement so the launch decision is reproducible rather than negotiated.
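Closure rules of this shape reduce to a mechanical gate over the intake finding queue. A minimal sketch in Python, assuming the example policy described later in this page (critical and high must close, medium may carry a documented exception, low carries forward under the standard SLA); the class and rule names are illustrative, not a SecPortal data model:

```python
from dataclasses import dataclass

# Illustrative gating policy: which severities block launch, which may carry
# a documented exception with compensating control.
MUST_CLOSE = {"critical", "high"}
MAY_EXCEPT = {"medium"}

@dataclass
class Finding:
    severity: str            # "critical" | "high" | "medium" | "low"
    status: str              # "open" | "closed"
    exception: bool = False  # documented exception with compensating control

def launch_blockers(findings: list[Finding]) -> list[Finding]:
    """Return the findings that block the transition from intake to steady-state."""
    blockers = []
    for f in findings:
        if f.status == "closed":
            continue
        if f.severity in MUST_CLOSE:
            blockers.append(f)                      # has to close before launch
        elif f.severity in MAY_EXCEPT and not f.exception:
            blockers.append(f)                      # open medium without an exception
        # open lows carry forward under the standard SLA: not a blocker
    return blockers
```

With the gate expressed as code, the launch decision is reproducible: the same queue against the same rules always yields the same blocker list, which is the opposite of a hallway negotiation.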

New application onboarding checklist

Before any new application transitions from intake to steady-state, the security lead, the engineering owner, and the platform or SRE owner walk through a short checklist. Each item takes minutes; skipping any one of them produces the failure modes above and the audit gaps that follow.

  • A new application, service, or repository entering the estate triggers an onboarding engagement on the workspace with a fixed intake scope.
  • The application metadata (name, owner team, environment, public exposure, data classification, launch timeline) is captured on the engagement before scanning starts.
  • The repository is connected via GitHub, GitLab, or Bitbucket OAuth so code scanning can run against the right branch.
  • The primary domain is verified through DNS TXT or meta-tag verification so external scanning targets only assets the workspace controls.
  • A named asset owner on the engineering side and a named security owner on the workspace side are recorded on the engagement.
  • The intake threat model is produced and attached to the engagement record as a document or as an organised set of findings.
  • Baseline code scanning runs against the connected repository (SAST plus dependency analysis) before the first release lands.
  • Baseline authenticated DAST runs against staging or production with credentials encrypted at rest in the workspace.
  • Baseline external scanning runs against the verified domain to surface header, TLS, and exposure issues.
  • Intake findings receive a calibrated severity using CVSS 3.1 plus environmental context (reachability, exposure, data classification, blast radius).
  • Each intake finding has a named remediation owner on the engineering side recorded on the finding itself.
  • The intake closure rules are documented on the engagement: which severities have to close before launch, which can carry an exception, which carry forward to steady-state SLA.
  • Findings that close inside an exception carry the documented compensating control with named rule, segment, or query reference.
  • The activity log captures every state change against the intake findings with timestamp and user attribution.
  • When the closure rules are met, the engagement transitions to steady-state with continuous monitoring schedules covering external, authenticated, and code scanning.
  • The intake evidence pack is exported for the launch readiness review and retained on the engagement record for audit reads.
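Much of the checklist above reduces to a completeness check over the six intake fields. A minimal sketch, with hypothetical field names rather than any actual SecPortal schema:

```python
# Hypothetical sketch: the six intake fields expressed as a completeness
# check, so a missing field surfaces as a known gap instead of being
# discovered by an assessor. Field names are illustrative.
REQUIRED_INTAKE_FIELDS = (
    "application_metadata",
    "asset_references",
    "owner_assignment",
    "threat_model_summary",
    "baseline_scan_plan",
    "intake_closure_rules",
)

def intake_gaps(engagement: dict) -> list[str]:
    """Return the required intake fields that are missing or empty."""
    return [f for f in REQUIRED_INTAKE_FIELDS if not engagement.get(f)]
```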

How application onboarding looks in SecPortal

Onboarding is one workflow stitched into seven capability surfaces: the engagement record, the repository connection, the verified domain, the encrypted credential store, the scan executions, the findings record, and the AI-generated intake report. The work that has to happen at each stage is the same work the platform already supports for everyday operations; the onboarding layer just makes the metadata, the threat model, the baseline scan output, the severity calibration, and the closure rules explicit on a single intake engagement.

Engagement as the intake record

The intake metadata, asset references, named owners, threat model, baseline scans, and closure decisions all sit on the same engagement record. Every onboarding for every application uses the same record schema so the intake picture is comparable across services rather than ad hoc per launch.

Repository connected at intake

The application repository is connected through GitHub, GitLab, or Bitbucket OAuth via repository connections so code scanning can run SAST through Semgrep and dependency analysis on the right branch as the first baseline event.

Domain verified before scanning

The primary domain is verified through DNS TXT or meta-tag verification via the domain verification workflow so external scanning targets only assets the workspace controls. Verification has to land before the scan starts.

Authenticated DAST with stored credentials

Baseline authenticated scanning runs against staging or production with the credentials encrypted at rest in the workspace through encrypted credential storage, so the team stops circulating session cookies and bearer tokens through shared password managers at intake time.

Findings on one record

Intake findings from threat modelling, code scanning, authenticated DAST, and external scanning land on a single findings record with CVSS 3.1 vectors, severity ratings, evidence, and remediation guidance. The intake queue is one queue rather than four parallel ones the security lead has to reconcile by hand.

Continuous monitoring on transition

When the intake closure rules are met, the engagement transitions to steady-state. Continuous monitoring schedules cover daily, weekly, biweekly, or monthly runs across external, authenticated, and code scanning so the application stays under coverage rather than dropping into a gap between launch and the first scheduled audit.
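The cadence arithmetic behind such a schedule is simple to sketch. The cadence names match the text; the 30-day month and the overdue rule are illustrative assumptions, not the platform's scheduler:

```python
from datetime import date, timedelta

# Illustrative cadence table; a real scheduler would handle calendar months.
CADENCE_DAYS = {"daily": 1, "weekly": 7, "biweekly": 14, "monthly": 30}

def next_run(last_run: date, cadence: str) -> date:
    """Return the next scheduled scan date for a given cadence."""
    return last_run + timedelta(days=CADENCE_DAYS[cadence])

def is_overdue(last_run: date, cadence: str, today: date) -> bool:
    """A schedule is overdue when today is past the next expected run."""
    return today > next_run(last_run, cadence)
```

The overdue check is the coverage-gap detector: an application whose last scan predates its cadence window has dropped out of coverage, which is exactly the gap between launch and the first scheduled audit the text describes.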

Activity log as intake audit trail

Intake decisions, severity calibrations, owner assignments, exception decisions, and closure events land on the activity log with timestamp and user attribution. The CSV export is the audit evidence ISO 27001, SOC 2, PCI DSS, and NIST assessors expect for new application onboarding.
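An export of this shape can be sketched with nothing more than the standard csv module: every state change as one row with timestamp and user attribution. The column names are hypothetical, not SecPortal's actual export format:

```python
import csv
import io
from datetime import datetime, timezone

# Hypothetical activity-log export: one row per state change, with
# timestamp and user attribution. Column names are illustrative.
COLUMNS = ["timestamp", "user", "finding_id", "from_state", "to_state"]

def export_activity_log(events: list[dict]) -> str:
    """Serialise state-change events as CSV for an audit evidence pack."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for event in events:
        writer.writerow(event)
    return buf.getvalue()

events = [{
    "timestamp": datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc).isoformat(),
    "user": "alice@example.com",
    "finding_id": "F-102",
    "from_state": "open",
    "to_state": "closed",
}]
```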

AI-generated intake report

The intake engagement record produces a launch readiness narrative through AI report generation. Leadership reads the open backlog by severity, the closure rate, the exception register, and the residual risk picture against the application's data classification without the security lead authoring the narrative from scratch under deadline pressure.

Owner map carries to steady-state

The asset-to-owner map established at intake carries forward into the asset ownership mapping for findings workflow so the first scheduled scan after intake routes findings to the same humans the intake engagement assigned. Ownership does not get rebuilt from scratch on every cycle.

Severity calibration at intake

Severity calibration sits at the centre of the intake. A SAST finding on a public service with regulated data is not the same as the same finding on an internal-only experiment. A DAST finding on an internet-facing path is not the same as the same finding on an admin-only path. A dependency advisory on a transitively imported package is not the same as the same advisory on a directly imported package handling the authentication flow. Calibrating once at intake (with the application metadata, data classification, exposure, and reachability already on the engagement) means downstream triage is recalibration against new evidence rather than a fresh debate from first principles. The calibration method aligns with the broader vulnerability prioritisation workflow so the intake severities mean the same thing as the steady-state severities the team operates against later.
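Calibration of this kind can be sketched as a small adjustment function over a base score. The weights below are illustrative only; CVSS 3.1 defines its own environmental metric group, and a real calibration would use that rather than ad hoc offsets:

```python
# Hypothetical calibration sketch: a base CVSS 3.1 score adjusted by
# environmental context. Offsets are illustrative, not a published formula.
def calibrated_score(base_cvss: float, *, internet_facing: bool,
                     regulated_data: bool, reachable: bool) -> float:
    score = base_cvss
    if not reachable:
        score -= 2.0   # finding sits in unreachable or unshipped code
    if internet_facing:
        score += 1.0   # public exposure widens the blast radius
    if regulated_data:
        score += 0.5   # data classification raises the stakes
    return max(0.0, min(10.0, round(score, 1)))

def bucket(score: float) -> str:
    """Map a calibrated score onto the CVSS 3.1 severity rating scale."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"
```

The point of the sketch is the shape, not the weights: the same base score lands in different buckets depending on exposure, data classification, and reachability, which is exactly why calibrating once at intake beats re-debating context at every triage.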

What auditors expect from new application onboarding

Application onboarding evidence shows up in audit reads whenever an external assessor reviews how new systems enter the production environment. The frameworks below all expect a documented intake plus enforcement evidence rather than a slide that names the cadence without proving it.

Framework by framework, what the audit expects:

ISO 27001:2022
Annex A 8.25 (secure development lifecycle), 8.26 (application security requirements), 8.27 (secure system architecture and engineering principles), and 8.28 (secure coding) all expect a documented application onboarding process plus enforcement evidence. Recording the intake metadata, threat model, baseline scans, severity calibration, owner assignment, and closure decisions on the engagement record produces the evidence directly; a launch ticket alone reads as a process gap.

SOC 2
Common Criteria CC6.1 (logical and physical access), CC7.1 and CC7.2 (system operations and change), and CC8.1 (change management) expect documented onboarding for systems entering the production environment. The intake engagement record (metadata, owner, threat model, baseline scans, closure decisions) is the artefact CC6.x, CC7.x, and CC8.1 audits read for new application onboarding.

PCI DSS
Requirements 6.2 (custom and bespoke software developed securely), 6.3 (security vulnerabilities identified and addressed), and 11.3 (vulnerabilities identified and addressed periodically) all expect documented intake for in-scope systems. Pairing the threat model, the baseline scans, the severity calibration, and the closure decisions to the intake engagement satisfies the documentary trail the assessor expects for newly introduced applications and services.

NIST SP 800-53
Controls SA-3 (system development life cycle), SA-8 (security and privacy engineering principles), SA-11 (developer testing and evaluation), and CA-2 (control assessments) expect documented intake before authorisation to operate. The onboarding engagement record provides the assessment evidence as derivative artefacts of the work rather than as a parallel artefact assembled at audit time.

NIST SSDF (SP 800-218)
PO.1 (define security requirements for software development), PO.3 (implement supporting toolchains), PS.1 (protect all forms of code), PW.1 (design software to meet security requirements), and PW.7 (review and analyse code) all map to the intake workflow. Recording the threat model, baseline code and dependency scans, severity calibration, and closure decisions on the engagement aligns the SSDF practice expectations to operating evidence rather than to process documentation alone.

Where application onboarding fits across the security lifecycle

Onboarding sits at the front of the lifecycle and feeds the rest. Every downstream workflow inherits the intake record so the application enters steady-state with a clean baseline rather than a forensic exercise.

Upstream and adjacent

Onboarding depends on repository connections and domain verification for asset registration, on asset criticality scoring for the data-classification-aware severity calibration, and on asset ownership mapping for the named-owner registry. The intake engagement is where these inputs first converge for a given application.

Downstream and steady-state

Onboarding feeds DevSecOps scanning for the steady-state cadence, scanner result triage for ongoing triage discipline, SDLC vulnerability handoff for stage-gate transit, dependency vulnerability triage for SCA findings landing post-launch, and secret scanning remediation when intake or post-launch scans surface exposed credentials.

Pair the workflow with the framework references and the threat-modelling guide

Onboarding is operational; the surrounding references explain the cadence, the threat-modelling discipline, and the framework clauses that mandate documented intake. Pair this workflow with the threat modelling guide for the intake review method, the devsecops enterprise guide for the steady-state programme context, and the security champions programme guide for the engineering-side ownership model the intake engagement assumes. The framework references that mandate documented application onboarding include ISO 27001 for secure development lifecycle and application security requirements, SOC 2 for change-management and system-operations criteria, PCI DSS for secure software requirements, NIST SP 800-53 for SA-3, SA-8, SA-11, and CA-2, and OWASP SAMM and OWASP ASVS for the secure architecture and verification practices the intake review applies.

Buyer and operator pairing

New application onboarding is the workflow AppSec teams run as the front gate of the secure development lifecycle, the workflow product security teams run when a new product line or service line registers, the workflow platform engineering teams run when their abstraction produces a new tenant or service that needs its own intake, and the workflow security engineering teams run as the cross-cutting record between application owners and the security programme. CISOs and security leaders read the intake cadence, the launch-readiness backlog, and the exception register as headline indicators of how disciplined the front gate is, and GRC and compliance teams read the intake engagement record directly when ISO 27001, SOC 2, PCI DSS, and NIST audits ask for application onboarding evidence.

What good application onboarding feels like

Intake is a documented gate, not a hallway negotiation

The launch decision rests on a documented intake closure rule (which severities have to close, which can carry an exception, which carry forward) recorded on the engagement rather than negotiated under deadline pressure on a release-day call.

Owners are named before scanning starts

The asset owner on the engineering side and the security owner on the workspace side land on the engagement before the first scan event. Findings have somewhere to route from the moment they open.

Threat model and scans share one record

The intake threat model, the baseline code scan, the baseline DAST, and the baseline external scan all sit on the same engagement so the architectural decisions and the automated findings stay queryable side by side rather than living in different systems.

Evidence is derivative of the work

Launch-readiness reports, audit evidence, and leadership reads all derive from the live intake engagement. Nobody reconciles intake evidence the week before the audit; the activity log export is the trail.

New application security onboarding is the discipline that turns the launch of every new service into a documented intake event rather than a moment the security programme finds out about months later. Run it on the live engagement record, and every application enters steady-state with a clean baseline the security team, the engineering team, and the audit all read from the same record. Where the platform engineering team owns a paved-road contract that wires intake into a tier classification, scanner coverage, build provenance, deployment gates, and secure-by-default service templates, pair this workflow with the internal developer platform security guardrails workflow so registration is the first step on the paved road rather than a separate intake the platform contract never reads.

Frequently asked questions about new application security onboarding

What is new application security onboarding?

New application security onboarding is the structured intake workflow that runs every time a new application, service, repository, or domain enters the estate. It captures the application metadata, the asset references, the named owners, the threat model, the baseline scan output, the severity calibration, and the intake closure rules on a single engagement record so the application enters steady-state with a documented baseline rather than appearing in the backlog months after launch with findings already aged. SecPortal runs the workflow on the engagement record so every intake decision (architecture, owner, scan output, closure rule) sits on the same artefact the application will be operated against later.

How is application onboarding different from devsecops scanning, code review, and SDLC handoff?

DevSecOps scanning is the steady-state cadence of code scanning that runs continuously on the application after intake closes. Code review is a deeper point-in-time review against a specific change set or branch. SDLC vulnerability handoff is the governance of how findings transit between SDLC stages once the application is in flight. Application onboarding is the upfront workflow that introduces a new application to the security programme with a documented baseline. All four run on the same engagement model; onboarding is the one that runs once per new application, before steady-state begins.

What kinds of new applications need onboarding?

New product lines, spin-out services, vendor-acquired components, SaaS tenants launched off an internal platform, repositories forked into a new service, internal tools that grow into externally facing applications, and applications inherited through a merger or acquisition all qualify. The trigger is not the engineering team that built the application; it is the moment the application acquires its own asset identity (a repository, a domain, a deployment target) that the security programme has to monitor as a distinct service.

Why is the intake threat model captured on the engagement record?

A threat model that lives in a slide deck or in a one-off document store loses signal the moment the next finding lands. Capturing it on the engagement record (as an attached document, as a structured set of intake findings, or as both) means the architectural assumptions stay queryable when downstream findings reference them. Six months later, when a SAST finding lands on a module the threat model marked as a trust boundary, the security record connects the finding to the original design assumption rather than treating the two as unrelated artefacts.

What baseline scans does onboarding actually run?

Three: baseline code scanning (SAST through Semgrep and dependency analysis on the connected repository), baseline authenticated DAST against staging or production with credentials stored in encrypted credential storage, and baseline external scanning against the verified domain. The three baselines produce a comparable starting picture across applications. Code surfaces injection and dependency issues at source. Authenticated DAST surfaces logic and configuration issues against the running application behind authentication. External scanning surfaces header, TLS, and exposure issues on the public surface. Together they form the intake finding queue the team triages before the engagement transitions to steady-state.

What does ownership assignment look like during onboarding?

A named asset owner on the engineering side and a named security owner on the workspace side land on the engagement record before scanning starts. The asset owner is the team or named lead responsible for landing fixes; the security owner is the workspace user responsible for triaging intake findings, calibrating severity, and signing off on closure. Generic team mailboxes do not satisfy this field. The asset-to-owner mapping carries forward into steady-state so the first scheduled scan after intake routes findings to the same humans the intake engagement assigned.

How are intake closure rules different from steady-state SLA policy?

Steady-state SLA policy applies to the full backlog under continuous monitoring once the application is live. Intake closure rules are a narrower set of gating rules that have to be met before the engagement transitions from intake to steady-state. The intake rules typically require critical and high findings to close before launch, allow medium findings to close inside a documented exception with compensating control, and let low findings carry forward under the standard SLA. The rules sit on the engagement record so launch readiness is a documented gate rather than a hallway negotiation.

How does onboarding handle exceptions for findings that cannot close before launch?

The vulnerability acceptance and exception management workflow records the documented decision: the linked finding, the severity, the compensating controls, the residual likelihood, the residual impact, the business rationale, the expiry date, and the review cadence. An intake exception is the same decision schema applied at the launch gate: the finding stays open, the compensating control sits on the record with named rule or segment reference, and the exception carries an expiry that triggers the next review automatically rather than persisting silently.

How does the intake engagement transition into steady-state monitoring?

When the intake closure rules are met, the engagement transitions to steady-state. Continuous monitoring schedules cover daily, weekly, biweekly, or monthly runs across external, authenticated, and code scanning so the application stays under coverage between events. The intake evidence trail (threat model, baseline scans, severity calibrations, owner mappings, closure decisions, activity log) remains attached so the next scheduled scan, the next audit, and the next leadership read all reference the same baseline. Steady-state operates against the intake record rather than against a fresh empty queue.

How does SecPortal support application onboarding without a Jira or ticketing integration?

SecPortal does not integrate directly with external ticketing systems. The onboarding workflow holds the intake metadata, asset references, threat model, baseline scans, severity calibrations, owner mappings, and closure decisions on the engagement record so the security side carries the canonical evidence trail. Where an engineering ticket also tracks remediation work, the ticket reference is captured against the finding so the audit can cross-reference; the security record remains the authoritative source for the intake decision and the closure trail.

How it works in SecPortal

A streamlined workflow from start to finish.

1

Trigger an onboarding engagement when the application is registered

When a new application, service, repository, domain, or product line is registered with the security team, an onboarding engagement is opened on the workspace with a fixed scope: capture the application metadata (name, owner team, environment, public exposure), the asset references (repository, primary domain, dependent services), the data classification, and the launch timeline. The engagement carries the intake from registration through baseline review and into steady-state monitoring rather than living in a one-off ticket that closes once the kickoff call is done.

2

Capture asset references and verify ownership before scanning starts

The repository connection is established through the GitHub, GitLab, or Bitbucket OAuth flow so the code scan can run against the right branch. The primary domain is verified through DNS TXT or meta-tag verification so external scanning targets only assets the workspace controls. The named asset owner on the engineering side and the named security owner on the workspace side are recorded on the engagement so the intake handoff is explicit. Ownership has to land before scanning starts; otherwise the first set of findings has nowhere to route.

3

Run a threat model and intake review against the application context

The threat model captures the application architecture, the data flows, the trust boundaries, the authentication surfaces, and the dependencies on external services. Intake findings (missing authentication on an admin path, a third-party dependency with a known weak posture, a misconfigured cloud bucket the application reads from, a logging gap) are recorded as findings on the engagement with severity, evidence, and a remediation owner. The threat model output lives on the same record as the scan output and the operational findings the application will produce later, so the early-stage decisions stay queryable when the application reaches production.

4

Run baseline code, dependency, and DAST scans before the first release

Code scanning runs SAST through Semgrep and dependency analysis against the connected repository. Authenticated DAST runs against staging or production with the credentials encrypted at rest in the workspace. External scanning runs against the verified domain to surface header, TLS, and exposure issues. The three baseline scans produce the intake finding queue: a CVSS-scored, evidence-rich set of findings the team triages before launch rather than discovering after the application is live and the customer surface is already exposed.

5

Calibrate severity, assign owners, and set the intake closure rules

Each intake finding receives a calibrated severity (CVSS plus environmental context: reachability in the build, exposure in production, blast radius given the data classification) and a named remediation owner on the engineering side. The intake policy defines the closure rules: which severities have to close before launch, which can close inside an exception with a documented compensating control, and which carry forward into steady-state monitoring under the standard SLA. The rules sit on the engagement so the launch decision is reproducible rather than negotiated under deadline pressure.

6

Hand the application over to steady-state monitoring after intake closes

Once the intake closure rules are met (critical and high findings closed or carrying a documented exception, baseline scans clean, owner mapping in place, threat model attached), the engagement transitions to steady-state. Continuous monitoring schedules cover daily, weekly, biweekly, or monthly runs for external, authenticated, and code scanning. The application enters the regular triage cadence with the intake evidence trail intact, so the next scan, the next audit, and the next leadership read all reference the same baseline the onboarding engagement produced.

7

Use the intake evidence pack as audit and leadership artefact

The onboarding engagement record (intake findings, threat model, baseline scans, severity calibrations, owner assignments, closure decisions, activity log) becomes the evidence pack for ISO 27001, SOC 2, PCI DSS, and NIST audits asking for documented application onboarding. AI-generated reports turn the record into a launch-readiness narrative for leadership. The activity log export covers every state change with timestamp and user attribution so the audit reads the intake as a documented programme rather than as an asserted process.

Run new application security onboarding on the engagement record

Threat model, baseline code and DAST scans, owner mapping, and intake evidence on one engagement before launch. The application enters steady-state with a clean baseline. Start free.

No credit card required. Free plan available forever.