OWASP SAMM
a practitioner's view of the Software Assurance Maturity Model
OWASP SAMM (Software Assurance Maturity Model) is the open framework that measures software security maturity across five business functions and fifteen security practices, on a three-level scale. Run SAMM assessments as structured records, score each practice, build an improvement roadmap, and re-score over time so the maturity claim is a record rather than a one-off slide.
No credit card required. Free plan available forever.
OWASP SAMM: a maturity model for the software security programme, not a vulnerability list
OWASP SAMM (Software Assurance Maturity Model) is the open framework that measures software security maturity across five business functions and fifteen security practices, on a three-level scale. Where the OWASP Top 10 ranks the most critical risks and OWASP ASVS verifies what a secure web application looks like, SAMM measures the organisation that produces the application. SAMM is published by the Open Worldwide Application Security Project, is freely available under a Creative Commons licence, and is the model most product organisations reach for when they need a vendor-neutral way to describe and improve the maturity of their software security programme.
SAMM is most useful when an organisation needs to talk about software security maturity in the same way across engineering, leadership, audit, and the buyer side. A statement like “our programme is mature” is unfalsifiable. A statement like “we score Stream A at Level 2 and Stream B at Level 1 in Verification, with a target of 2 across both streams in twelve months” is operational. SAMM is the language that makes that operational statement possible. The OWASP framework reference covers the headline risk taxonomy; this page covers the maturity model that sits underneath the programme that addresses those risks.
The five business functions and the fifteen security practices
SAMM organises the software security programme into five business functions, each containing three security practices. The fifteen practices are the unit of assessment: each carries a maturity score, a target, an owner, and a roadmap. The business functions are the unit of communication: leadership cares about the function-level posture, engineering cares about the practice-level work.
Governance
Practices: Strategy and Metrics, Policy and Compliance, Education and Guidance
The strategic layer of the software security programme. Governance covers how the programme defines goals, ties them to policy and regulatory drivers, and equips the people who deliver software with the training and reference material that make secure delivery a default. Governance scores low when the programme exists on a slide deck and high when measurable goals, named owners, and budgeted training are visible to engineering.
Design
Practices: Threat Assessment, Security Requirements, Security Architecture
What is decided before code is written. Design covers threat modelling, the translation of risk and regulatory drivers into security requirements, and the architecture patterns and supplier expectations that prevent each new component from starting the conversation over. Design scores low when threat models do not exist outside compliance audits and high when reusable patterns and a vetted requirement set are part of every new feature.
Implementation
Practices: Secure Build, Secure Deployment, Defect Management
The act of producing and shipping software. Implementation covers build pipeline integrity, the path from artefact to production, and the workflow that intakes, triages, and resolves security defects with consistency. Implementation scores low when defects are tracked in spreadsheets that drift from reality and high when the same intake, triage, and closure record carries security defects from discovery to verified fix.
Verification
Practices: Architecture Assessment, Requirements-driven Testing, Security Testing
What is checked before and during release. Verification covers the audit of design decisions against threat assumptions, the turning of security requirements into executable tests, and the breadth of automated and manual testing (DAST, SAST, SCA, pentest) that exercises the live system. Verification scores low when testing is ad hoc and high when each requirement carries the test that exercises it and the evidence that the test passed.
Operations
Practices: Incident Management, Environment Management, Operational Management
What happens after release. Operations covers the detection, response, and learning loop for security events, the hardening and patch posture of the production estate, and the data lifecycle obligations that come with running the system. Operations scores low when incident detection and response are improvised and high when the runbooks, the on-call rota, and the post-incident review are part of the same workflow as the rest of the programme.
For programmes that run authenticated DAST, SAST, and SCA as part of Verification, the authenticated scanning and code scanning features run those tests against the same engagement record that carries the SAMM scorecard, so Verification scores cite live evidence rather than a screenshot of a dashboard.
The three maturity levels: pick the target on purpose
SAMM scores each practice on a three-level scale. The level is a deliberate target, not a race: programmes are explicitly allowed (and encouraged) to set mixed targets across practices. A consumer product team and a financial services platform team in the same company can legitimately target different maturity levels in the same practice, because the risk surface and the regulatory weight are different.
Level 1 (initial)
Initial understanding and ad hoc provision. The activity exists in some form, in some teams, sometimes. There is intent and there are pockets of capability, but the practice is not consistent across the engineering organisation. An honest Level 1 is worth more than an unassessed Level 0 or a score overstated beyond the evidence. The roadmap from Level 1 is mostly about consistency, ownership, and frequency.
Level 2 (defined)
Increased efficiency and effectiveness. The activity is consistent across the engineering organisation, has named owners, has measurable inputs, and is documented well enough that a new team can run it without one-on-one onboarding. Level 2 is the maturity bar most product organisations target for the practices that matter to them, because Level 3 is expensive and Level 1 is fragile.
Level 3 (optimised)
Comprehensive mastery at scale. The activity is measured, optimised, and integrated with the rest of the programme. Level 3 is appropriate where regulation, customer commitment, or business risk justifies the cost. Treat Level 3 as a deliberate decision, not a default; SAMM is explicitly compatible with mixed maturity targets across practices.
Scoring mechanics: the toolbox, the streams, the scorecard
SAMM ships with a structured toolbox: one interview question set per practice, mapped to the maturity levels and the activity streams. The score is derived from the toolbox answers, with evidence captured per answer so the score is defensible. Scoring is the durable artefact; the slide deck that summarises it is not.
- Each of the fifteen security practices is scored on the three-level scale (Level 1, Level 2, Level 3) using the SAMM toolbox question set: one set of structured interview questions per practice, with the answers driving the score
- SAMM 2.0 splits every practice into two activity streams (Stream A and Stream B) so the score reflects breadth and depth: a practice can score Level 2 on Stream A and Level 1 on Stream B when the breadth activity is strong but the depth activity is still ad hoc
- Scores roll up rather than collapse into a single number: the scorecard reports per-practice scores per stream, business-function averages, and the overall maturity claim, with the underlying answers retained for audit (a minimal roll-up sketch follows this list)
- Evidence is the difference between a defensible score and a self-reported one; capture the artefact, owner, and date for each toolbox answer rather than carrying the score forward without the source
- Target scores are recorded next to current scores per practice so the gap is explicit, the roadmap is clear, and the next assessment cycle starts from the target rather than the previous score
- Gap rationale is captured for each practice the assessment chose not to lift to the next level, so the absence of activity is a deliberate decision rather than an oversight; SAMM does not require all fifteen practices to move together
- Cadence is recorded on the engagement: an annual full reassessment with quarterly progress checks on the subset of practices that are actively being lifted is the usual rhythm, so the maturity claim degrades gracefully if a cycle is skipped
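As a worked illustration of the roll-up (a minimal sketch with illustrative scores and a simplified data model, not the official SAMM toolbox calculation): each stream carries a current and target score, a practice averages its streams, and a business function averages its practices.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class StreamScore:
    stream: str      # "A" or "B"
    current: float   # current maturity score for the stream, 0 to 3
    target: float    # agreed target for the next cycle


@dataclass
class PracticeScore:
    practice: str
    streams: list[StreamScore]

    @property
    def current(self) -> float:
        # A practice's score is the average of its activity streams.
        return mean(s.current for s in self.streams)

    @property
    def gap(self) -> float:
        # Average gap to target across streams, used to drive the roadmap.
        return mean(s.target - s.current for s in self.streams)


def function_score(practices: list[PracticeScore]) -> float:
    # A business function's score is the average of its three practices.
    return mean(p.current for p in practices)


# Illustrative Verification scores only; real values come from the toolbox answers.
verification = [
    PracticeScore("Architecture Assessment",
                  [StreamScore("A", 1, 2), StreamScore("B", 1, 1)]),
    PracticeScore("Requirements-driven Testing",
                  [StreamScore("A", 2, 2), StreamScore("B", 1, 2)]),
    PracticeScore("Security Testing",
                  [StreamScore("A", 2, 3), StreamScore("B", 2, 2)]),
]
print(f"Verification function score: {function_score(verification):.2f}")
```

Mixed targets fall straight out of this shape: the illustrative Security Testing entry targets Level 3 on Stream A while deliberately leaving Stream B at Level 2.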
For deeper context on how the underlying findings discipline supports SAMM Verification and Implementation evidence, see the findings management workflow and the security findings deduplication guide, which together cover the data hygiene that turns scanner output into defensible Verification evidence.
How SAMM sits next to ASVS, BSIMM, NIST SSDF, and ISO 27001
SAMM is rarely used in isolation. It is the programme-level maturity layer that other frameworks either depend on for context or wrap around for a different audience. The contrast below is a working view, not a buyer comparison: the practitioner question is which frameworks to pair SAMM with, not which to pick instead of it.
SAMM vs OWASP ASVS
OWASP ASVS is a verification standard that says, requirement by requirement, what a verified secure web application looks like at Level 1, Level 2, or Level 3. OWASP SAMM is a maturity model that says, practice by practice, how mature the software security programme is that produces those applications. ASVS verifies a thing; SAMM measures the organisation that builds the thing. Most mature programmes use both: ASVS as the application-level verification claim, SAMM as the programme-level maturity claim, and the two compose because Verification (one of the five SAMM business functions) literally cites the verification standard the programme tests against.
SAMM vs BSIMM
BSIMM (Building Security In Maturity Model) is a benchmark study that observes what mature programmes actually do and reports the activity prevalence per practice across a community of participating organisations. SAMM is a prescriptive maturity model that says what a programme should do at each level. BSIMM tells you what your peers do; SAMM tells you what good looks like. Some programmes run both: SAMM as the internal target operating model, BSIMM as the comparative benchmark when buyers, regulators, or boards ask how the programme compares to industry.
SAMM vs NIST SSDF
NIST SP 800-218 (the Secure Software Development Framework, SSDF) is a regulator-facing checklist of practices that secure software producers are expected to implement, particularly for federal procurement. SAMM is a vendor-neutral, regulator-agnostic maturity model that pre-dates SSDF and covers the same surface from a different angle. SSDF gives a binary practice-implemented statement; SAMM gives a per-practice maturity score. Programmes operating under federal procurement obligations typically use SSDF as the checkbox view and SAMM as the maturity view, because the SSDF practices map cleanly to SAMM Implementation, Verification, and Operations.
SAMM vs ISO 27001 Annex A
ISO 27001 Annex A is the controls catalogue of an information security management system, covering organisational, people, physical, and technological controls. SAMM is narrower (software security only) and deeper (per-practice maturity rather than control-implemented yes or no). The Annex A controls relevant to software (A.5 governance, A.6 people, A.8 technological including A.8.25 secure development lifecycle, A.8.26 application security requirements, A.8.27 secure system architecture and engineering principles, A.8.28 secure coding) map cleanly into SAMM business functions, so a SAMM assessment can be the technical evidence layer for the relevant Annex A controls in an ISMS audit.
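As a rough sketch of that mapping (an illustrative lookup, not an authoritative crosswalk; the practice assignments are a reasonable reading rather than an official correspondence), the software-relevant Annex A controls can be keyed to the SAMM practices whose assessment evidence typically supports them:

```python
# Illustrative crosswalk: software-relevant ISO 27001 Annex A controls keyed
# to the SAMM practices whose evidence most directly supports them.
ANNEX_A_TO_SAMM = {
    "A.8.25 Secure development life cycle": [
        "Governance: Policy and Compliance", "Implementation: Secure Build"],
    "A.8.26 Application security requirements": [
        "Design: Security Requirements"],
    "A.8.27 Secure system architecture and engineering principles": [
        "Design: Security Architecture"],
    "A.8.28 Secure coding": [
        "Implementation: Secure Build"],
}


def samm_evidence_for(control: str) -> list[str]:
    """Return the SAMM practices whose evidence backs a given Annex A control."""
    return ANNEX_A_TO_SAMM.get(control, [])
```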
Programmes operating under regulated procurement frameworks should pair SAMM with ISO 27001 Annex A.8 (technological controls covering secure development) where the ISMS is in scope, with SOC 2 Common Criteria where SaaS audit evidence is needed, and with PCI DSS Requirement 6 where payment data is in scope. The maturity claim travels across audits when the underlying evidence is the same; the auditor view is a filter on the same record, not a separate workspace.
SAMM for AppSec teams, DevSecOps programmes, and pentest firms
SAMM is read differently depending on which side of the engagement you sit on. AppSec teams use SAMM as the programme-level scorecard that ties the requirements they own (Design, Verification, parts of Implementation) into the wider engineering operating model. DevSecOps programmes use SAMM as the operating model that calibrates pipeline gates, environment hardening, and incident response, especially across the Implementation and Operations business functions. Pentest firms running maturity assessments for clients use SAMM as the structured scope of the assessment and as the language the deliverable speaks, so the buyer gets a defensible scorecard rather than a generic gap analysis.
The persona-specific entry points are SecPortal for application security teams, SecPortal for DevSecOps teams, and SecPortal for security consultants. Each anchors a different view of the same SAMM assessment record.
The roadmap: the deliverable that justifies the assessment
A SAMM assessment is not the deliverable. The scorecard names the current state. The deliverable is the gap between current and target maturity, expressed as a roadmap that names the practices to lift, the activities that lift them, the owner, and the timeline. The roadmap is what gets funded, what gets reviewed, and what carries over into the next cycle. The standard cadence is annual full reassessment, with quarterly progress checks on the subset of practices that are actively being lifted.
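As a sketch of how the roadmap falls out of the scorecard (the practices, scores, owners, and quarters below are placeholders, not a recommended plan): each entry pairs a current and target score with an owner and a timeline, and the review order follows the gap.

```python
from dataclasses import dataclass


@dataclass
class RoadmapItem:
    practice: str
    current: float   # score from the current assessment
    target: float    # agreed target maturity
    owner: str       # named owner funded to lift the practice
    quarter: str     # timeline the increment is reviewed against

    @property
    def gap(self) -> float:
        return self.target - self.current


# Placeholder entries; in practice these come straight from the SAMM scorecard.
roadmap = [
    RoadmapItem("Security Testing", 1.5, 2.0, "AppSec lead", "Q2"),
    RoadmapItem("Defect Management", 1.0, 2.0, "Engineering manager", "Q3"),
    RoadmapItem("Threat Assessment", 1.0, 1.5, "Security architect", "Q4"),
]

# Review order: largest gap first, so funding and attention follow the gap.
for item in sorted(roadmap, key=lambda i: i.gap, reverse=True):
    print(f"{item.practice}: {item.current} -> {item.target} "
          f"(gap {item.gap:.1f}), {item.owner}, {item.quarter}")
```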
For deeper context on translating findings into structured remediation work that supports SAMM Implementation (Defect Management) scores, see the remediation tracking workflow and the vulnerability prioritisation framework, which together cover the triage and closure mechanics that distinguish a Level 1 defect management practice from a Level 2 one. For the Design business function (Threat Assessment practice), the threat model template is the copy-ready STRIDE artefact that lifts the practice from Level 1 (ad hoc threat modelling on individual systems) to Level 2 (per-application threat assessment with named reviewers, mitigation decisions, and verification evidence) by giving every system the same eight-section structure and review cadence.
Where SecPortal fits in a SAMM assessment cycle
SecPortal is the operating layer for a SAMM assessment cycle. The platform handles scope, toolbox answers, evidence, scorecard, target setting, roadmap, and re-scoring on cadence, so the assessment runs as a single record rather than a long email thread with attached spreadsheets. For consultancies running SAMM assessments on behalf of multiple clients, the security consulting workspace bundles the SAMM record with branded client portals, so each client sees their own live scorecard rather than a frozen PDF.
- Engagement management captures the SAMM assessment as a structured record covering the in-scope business functions, the practices, the toolbox answers, the per-stream scores, and the target maturity, so the assessment is reproducible at the next cycle rather than reconstructed from a slide deck
- Findings management stores Verification practice evidence (DAST, SAST, SCA, manual review, pentest results) on the same workspace as the SAMM scorecard, with each finding carrying a CVSS 3.1 vector, owner, evidence, and OWASP Top 10 mapping where applicable, so Verification scores have evidence behind them
- Authenticated scanning runs DAST behind the login screen against the same engagement record, with credentials stored encrypted at rest under AES-256-GCM (cookie, bearer, basic, or form-login), supporting Verification (Security Testing) and Implementation (Defect Management) evidence on a recurring schedule
- Code scanning runs SAST and SCA against Git repositories connected via OAuth, attaching findings to the same engagement, so the Implementation (Secure Build) and Verification (Security Testing) practices score on live pipeline output rather than spreadsheet self-assessment
- AI-assisted reports compose the SAMM assessment summary, the per-practice analysis, and the improvement roadmap from the underlying engagement, scorecard, and findings, so the deliverable cites the toolbox answers and target gaps rather than starting from a blank template
- Compliance tracking lets one SAMM assessment feed framework mappings to ISO 27001 Annex A.8, SOC 2 Common Criteria, PCI DSS Requirement 6, and NIST CSF Identify and Protect functions, so the maturity record doubles as audit evidence without rebuilding the bundle for each auditor
- Continuous monitoring keeps Verification (Security Testing) and Operations (Environment Management) evidence current between full assessments, so the next SAMM cycle starts with current evidence rather than evidence harvested from email threads
Looking for the engagement workflow that supports the assessment record itself? The penetration testing use case and the DevSecOps scanning use case capture how SecPortal turns SAMM Verification and Implementation evidence into structured records covering scope, scanner output, findings, retests, and the deliverable.
For programme-level context on how maturity assessments fit into the wider security delivery operating model, see the security workflow orchestration research and the scaling security consultancy with automation guide, which together cover the operating-model thinking that turns SAMM scores into a sustained programme rather than a one-off slide deck.
Key control areas
SecPortal helps you track and manage compliance across these domains.
Governance: Strategy and Metrics, Policy and Compliance, Education and Guidance
The Governance business function sets the strategic posture for the software security programme. Strategy and Metrics defines the measurable goals and the data the programme reports on. Policy and Compliance aligns the programme to internal policy and external regulation. Education and Guidance equips engineering and security with the training, role guides, and reference material that make secure delivery a default rather than an exception. Capture practice scores, owners, evidence, and target maturity per practice on the engagement record so the assessment is reproducible across cycles.
Design: Threat Assessment, Security Requirements, Security Architecture
The Design business function covers what is decided before code is written. Threat Assessment identifies application threats, abusers, and risk drivers and ties them to the design. Security Requirements turns risk and compliance drivers into concrete requirements that engineering builds against. Security Architecture establishes patterns, reference designs, and supplier security expectations so each new component does not start the conversation over. Score each practice on the SAMM three-level scale and attach the threat models, requirement sets, and pattern catalogues as evidence on the same record.
Implementation: Secure Build, Secure Deployment, Defect Management
The Implementation business function covers the act of producing and shipping software. Secure Build covers build pipeline integrity, dependency management, and reproducibility. Secure Deployment covers the path from artefact to production, including configuration management, secret handling, and rollback safety. Defect Management covers the workflow that intakes, triages, and resolves security defects with consistency and traceability. Capture defect intake, triage, and closure on findings records so SAMM Implementation evidence is the live workflow rather than a screenshot.
Verification: Architecture Assessment, Requirements-driven Testing, Security Testing
The Verification business function covers what is checked before and during release. Architecture Assessment audits design decisions against threat assumptions. Requirements-driven Testing turns the security requirements set into tests that can be executed and re-executed. Security Testing covers the breadth of automated and manual testing (DAST, SAST, SCA, manual review, pentest) that exercises the live system. Run authenticated DAST, SAST, and SCA against engagement records, attach the OWASP ASVS verification claim where applicable, and score Verification practices on the underlying evidence rather than self-reported coverage.
Operations: Incident Management, Environment Management, Operational Management
The Operations business function covers what happens after release. Incident Management runs the detection, response, and learning loop for security events. Environment Management hardens the production estate, patches it, and tracks configuration drift. Operational Management covers data lifecycle, backup, and the legal and regulatory obligations that come with running the system. Capture incident records, environment evidence, and operational controls on the same workspace so the SAMM Operations score has audit trail behind it.
The three maturity levels and the SAMM scorecard
Each of the fifteen security practices is scored on a three-level scale: Level 1 (initial understanding and ad hoc provision), Level 2 (increased efficiency and effectiveness), Level 3 (comprehensive mastery at scale). Scores are derived from the SAMM toolbox interview answers (one set of questions per practice and stream) and rolled up into a scorecard per business function. SAMM 2.0 also splits each practice into two activity streams (A and B) so scores reflect breadth as well as depth. Capture the toolbox answers, evidence per question, score per stream, and target score per practice on the engagement record so the scorecard is reproducible at the next cycle.
Roadmap, target operating model, and the SAMM cadence
A SAMM assessment is not the deliverable. The deliverable is the gap between current and target maturity, expressed as a roadmap that names the practices to lift, the activities that lift them, the owner, and the timeline. The standard cadence is annual reassessment, with a quarterly check on a subset of practices that are actively being lifted. Track current scores, target scores, gap rationale, owner per practice, and evidence per increment on the engagement so the next cycle starts where this one left off, not from a blank slate.
Related features
Orchestrate every security engagement from start to finish
Vulnerability management software that tracks every finding
AI-powered reports in seconds, not days
Compliance tracking without a full GRC platform
Find vulnerabilities before they ship
Test web apps behind the login
Monitor continuously, catch regressions early
Run OWASP SAMM assessments as structured records
Score practices, capture evidence, build the roadmap, and re-score on cadence. Start free.
No credit card required. Free plan available forever.