Threat Model Template: one signed artefact for STRIDE enumeration, mitigation decisions, and verification evidence
A free, copy-ready threat model template structured for STRIDE per element, with optional LINDDUN columns for personal-data flows and PASTA stages for high-risk systems. Eight sections covering system scope, asset inventory, trust boundaries and data-flow diagram, STRIDE enumeration per element, mitigation decisions per threat, verification evidence per mitigation, compliance mapping across ISO 27001, SOC 2, PCI DSS, NIST SSDF, OWASP SAMM, and OWASP ASVS, and document control with sign-off and review cadence. Aligned with ISO/IEC 27001 Annex A 8.27, ISO/IEC 27034, SOC 2 CC1.4 and CC7.2, PCI DSS 4.0 Requirement 6.2.4, NIST SSDF practice PW.1, NIST SP 800-30, OWASP SAMM Threat Assessment, OWASP ASVS V1 Architecture Design and Threat Modelling, and the OWASP Application Threat Modeling guidance.
Carry the threat model on the engagement record, not in a static document folder
SecPortal pairs the signed threat model with the threats it declared as findings, the verification evidence per mitigation, the framework mapping for the audit, and the activity log for the per-revision trail. Free plan available.
No credit card required. Free plan available forever.
Eight sections that turn a whiteboard threat model into a defensible engineering artefact
A threat model is the design-time artefact that names what an internal AppSec team has to test for, what a code review has to enforce, and what a penetration test has to confirm. The eight sections below cover the durable shape of the artefact across STRIDE for technical threats, optional LINDDUN for privacy threats, ISO/IEC 27001 Annex A 8.27, the NIST Secure Software Development Framework practice PW.1, OWASP SAMM Threat Assessment, OWASP ASVS V1, PCI DSS 4.0 Requirement 6.2.4, and SOC 2 CC1.4 and CC7.2. Copy the section that fits your stage and paste the rest as you go.
The template is the artefact the team produces once per system and refreshes on a defined cadence. Pair it with the methodology walkthrough in the threat modelling guide for the underlying STRIDE and PASTA discipline, with the SDLC vulnerability handoff use case for how design-time threats become engineering-owned findings, with the control gap remediation workflow for closing a missing mitigation, with the per-finding scoring discipline in the CVSS calculator, with the formal acceptance form in the risk acceptance form template for residual-risk decisions, and with the aggregate ledger in the security exception register template for accepted residuals across the wider programme. The retention rule that names how long the threat model, the diagrams, and the verification evidence are kept under hold and disposition discipline lives in the audit evidence retention policy template.
Copy the full threat model template (all eight sections) as one block.
1. System scope, assumptions, and reviewers
Open the threat model with the durable identity of the system and the discrete reviewers who own it. ISO/IEC 27001 Annex A 8.27 expects documented threat-modelling activity tied to a named system; this section makes the artefact traceable to the wider engineering programme rather than a stand-alone whiteboard photo.
System name: {{SYSTEM_NAME}}
System version covered: {{SYSTEM_VERSION_OR_RELEASE_TRAIN}}
Document version: {{MODEL_VERSION}}
Effective date: {{EFFECTIVE_DATE}}
Last review date: {{LAST_REVIEW_DATE}}
Next review date: {{NEXT_REVIEW_DATE}}
Methodology in use:
- Primary lens: STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege)
- Optional privacy lens for personal-data flows: LINDDUN
- Optional business-impact lens for high-risk systems: PASTA stages I-VII as appendix
In scope (this model covers):
- Components: {{IN_SCOPE_COMPONENTS}}
- Data classes: {{IN_SCOPE_DATA_CLASSES}}
- Trust boundaries: {{IN_SCOPE_TRUST_BOUNDARIES}}
- Adversary classes the model assumes: external unauthenticated attacker, external authenticated low-privilege attacker, malicious insider with low-privilege access, compromised dependency, compromised CI/CD pipeline.
Out of scope (deliberately excluded; document the rationale):
- {{OUT_OF_SCOPE_COMPONENTS_AND_RATIONALE}}
- {{OUT_OF_SCOPE_INTEGRATIONS_AND_RATIONALE}}
Standing assumptions (the model relies on these holding; if they break, the model needs revision):
- Identity provider: {{IDENTITY_PROVIDER_AND_TRUST_LEVEL}}
- Transport layer: {{TLS_OR_MTLS_BASELINE}}
- Logging stack: {{LOGGING_AND_AUDIT_BASELINE}}
- Secrets management: {{SECRETS_MANAGEMENT_BASELINE}}
- Build pipeline integrity: {{BUILD_PIPELINE_BASELINE}}
- Hosting platform: {{HOSTING_PLATFORM_BASELINE}}
Reviewers and sign-off:
- System owner (engineering lead or product owner; signs off on mitigation decisions): {{SYSTEM_OWNER_NAME_AND_ROLE}}
- Security partner (AppSec engineer, security architect, or product security engineer; runs the methodology): {{SECURITY_PARTNER_NAME_AND_ROLE}}
- Implementation engineer (developer or platform engineer; validates buildability): {{IMPLEMENTATION_ENGINEER_NAME_AND_ROLE}}
- External challenger (optional, recommended for high-risk systems): {{EXTERNAL_CHALLENGER_NAME_AND_ROLE}}
Frameworks the model evidences:
- ISO/IEC 27001 Annex A 8.27 (Secure system architecture and engineering principles)
- ISO/IEC 27034 (Application security)
- SOC 2 CC1.4 (Commitment to competence) and CC7.2 (Monitoring system components)
- PCI DSS 4.0 Requirement 6.2.4 (Secure software lifecycle threat consideration)
- NIST SSDF (NIST SP 800-218) practice PW.1.1 (Design-time vulnerability identification)
- NIST SP 800-30 (Risk Assessment Guide)
- OWASP SAMM Threat Assessment maturity practice
- OWASP ASVS V1 (Architecture, Design and Threat Modelling)
- OWASP Application Threat Modeling guidance
2. Asset inventory and value classification
Threat modelling without an asset inventory describes adversary moves against an unnamed target. List the data, integrations, secrets, and components that have value to the business, to a customer, or to an adversary. Each asset has an owner so the conversation about residual risk is anchored to a person who can decide.
Asset inventory table:
| Asset | Asset type (data class, secret, integration, component) | Owner | Value to the business | Value to an adversary |
| ----- | ------------------------------------------------------- | ----- | --------------------- | --------------------- |
| {{ASSET_1_NAME}} | {{ASSET_1_TYPE}} | {{ASSET_1_OWNER}} | {{ASSET_1_BUSINESS_VALUE}} | {{ASSET_1_ADVERSARY_VALUE}} |
| {{ASSET_2_NAME}} | {{ASSET_2_TYPE}} | {{ASSET_2_OWNER}} | {{ASSET_2_BUSINESS_VALUE}} | {{ASSET_2_ADVERSARY_VALUE}} |
3. Trust boundaries and data-flow diagram
Trust boundaries are the durable structure of a threat model. STRIDE applies per element, but the categories that matter most often shift at a boundary: authentication and spoofing live where an external entity meets a process; tampering and information disclosure live where data crosses a network or a storage boundary; elevation of privilege lives where authorisation decisions get made. Capture the boundaries explicitly so the threat enumeration in Section 4 has somewhere to land.
Diagram reference (attach or link the data-flow diagram):
- Diagram artefact location: {{DFD_DIAGRAM_LOCATION}}
- Diagram format (Excalidraw, draw.io, Mermaid, OWASP Threat Dragon export, Microsoft Threat Modeling Tool .tm7, IriusRisk export, hand-drawn photo): {{DFD_FORMAT}}
- Diagram revision date: {{DFD_REVISION_DATE}}
External entities (rectangles in the diagram):
| External entity | Description | Trusted? | Authentication mechanism | Adversary class assumed |
| --------------- | ----------- | -------- | ------------------------ | ----------------------- |
| {{ENTITY_1_NAME}} | {{ENTITY_1_DESC}} | {{ENTITY_1_TRUSTED}} | {{ENTITY_1_AUTH}} | {{ENTITY_1_ADVERSARY}} |
| {{ENTITY_2_NAME}} | {{ENTITY_2_DESC}} | {{ENTITY_2_TRUSTED}} | {{ENTITY_2_AUTH}} | {{ENTITY_2_ADVERSARY}} |
Processes (circles in the diagram):
| Process | Function | Privilege level | Hosted where |
| ------- | -------- | --------------- | ------------ |
| {{PROCESS_1_NAME}} | {{PROCESS_1_FUNCTION}} | {{PROCESS_1_PRIVILEGE}} | {{PROCESS_1_HOSTING}} |
| {{PROCESS_2_NAME}} | {{PROCESS_2_FUNCTION}} | {{PROCESS_2_PRIVILEGE}} | {{PROCESS_2_HOSTING}} |
Data stores (parallel lines in the diagram):
| Data store | Engine (Postgres, S3, Redis, Kafka, etc.) | Data classes stored | Encryption at rest | Backup and retention |
| ---------- | ----------------------------------------- | ------------------- | ------------------ | -------------------- |
| {{STORE_1_NAME}} | {{STORE_1_ENGINE}} | {{STORE_1_DATA}} | {{STORE_1_ENCRYPTION}} | {{STORE_1_BACKUP}} |
| {{STORE_2_NAME}} | {{STORE_2_ENGINE}} | {{STORE_2_DATA}} | {{STORE_2_ENCRYPTION}} | {{STORE_2_BACKUP}} |
Data flows (arrows in the diagram):
| Flow | From | To | Protocol | Authentication | Encryption in transit | Data classes carried |
| ---- | ---- | -- | -------- | -------------- | --------------------- | -------------------- |
| {{FLOW_1_LABEL}} | {{FLOW_1_FROM}} | {{FLOW_1_TO}} | {{FLOW_1_PROTOCOL}} | {{FLOW_1_AUTH}} | {{FLOW_1_TLS}} | {{FLOW_1_DATA}} |
| {{FLOW_2_LABEL}} | {{FLOW_2_FROM}} | {{FLOW_2_TO}} | {{FLOW_2_PROTOCOL}} | {{FLOW_2_AUTH}} | {{FLOW_2_TLS}} | {{FLOW_2_DATA}} |
Trust boundaries (dotted lines crossing flows in the diagram):
| Boundary | Location (between which elements) | Who authenticates here | Who authorises here | Who validates input here |
| -------- | --------------------------------- | ---------------------- | ------------------- | ------------------------ |
| {{BOUNDARY_1_NAME}} | {{BOUNDARY_1_LOCATION}} | {{BOUNDARY_1_AUTHN}} | {{BOUNDARY_1_AUTHZ}} | {{BOUNDARY_1_VALIDATION}} |
| {{BOUNDARY_2_NAME}} | {{BOUNDARY_2_LOCATION}} | {{BOUNDARY_2_AUTHN}} | {{BOUNDARY_2_AUTHZ}} | {{BOUNDARY_2_VALIDATION}} |
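Once the flow and boundary tables are filled in, the hygiene checks can be mechanical. A minimal sketch in Python (the records, field names, and flow data are illustrative, not part of the template) that flags boundary-crossing flows missing authentication or encryption in transit, each of which is a candidate Tampering or Information disclosure threat for Section 4:

```python
# Illustrative flow records drawn from the data-flow table above.
flows = [
    {"label": "F1", "from": "Browser", "to": "API",
     "crosses_boundary": True, "auth": "OIDC bearer token", "tls": "TLS 1.3"},
    {"label": "F2", "from": "API", "to": "Redis",
     "crosses_boundary": True, "auth": None, "tls": None},
]

def flow_findings(flows):
    """Return (flow label, gap) pairs for flows that cross a trust boundary."""
    findings = []
    for f in flows:
        if not f["crosses_boundary"]:
            continue  # intra-boundary flows are out of scope for this lint
        if not f["auth"]:
            findings.append((f["label"], "no authentication at the boundary"))
        if not f["tls"]:
            findings.append((f["label"], "no encryption in transit"))
    return findings

for label, gap in flow_findings(flows):
    print(f"{label}: {gap}")
```

Each finding becomes a row in the Section 4 enumeration rather than a verbal observation that evaporates after the session.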
4. STRIDE threat enumeration per element
Apply STRIDE as a checklist against each element class. External entities map to Spoofing and Repudiation. Processes map to all six categories. Data stores map to Tampering, Repudiation, Information disclosure, and Denial of service. Data flows map to Tampering, Information disclosure, and Denial of service. Walk every element in Section 3 against the categories that apply; the enumeration is the durable backbone the rest of the model rests on.
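The per-element applicability above can be encoded so the walk is exhaustive rather than memory-driven. A sketch (element names are examples):

```python
# STRIDE-per-element applicability, exactly as described above: each
# element class is checked only against the categories that apply to it.
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}
APPLICABLE = {
    "external_entity": "SR",     # Spoofing, Repudiation
    "process": "STRIDE",         # all six categories
    "data_store": "TRID",        # Tampering through Denial of service
    "data_flow": "TID",          # Tampering, Info disclosure, DoS
}

def checklist(element_name, element_class):
    """Yield the STRIDE questions to walk for one diagram element."""
    for letter in APPLICABLE[element_class]:
        yield f"{element_name}: consider {STRIDE[letter]}"

for line in checklist("Checkout API", "process"):
    print(line)
```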
Threat ID format: T-{{SYSTEM_SHORT_CODE}}-{{NN}} (for example T-CHECKOUT-01). Severity uses a CVSS 3.1 base vector or a four-band qualitative scale (critical, high, medium, low) with the rationale captured.
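The ID format and the four-band scale are both mechanically checkable. A sketch using the CVSS 3.1 qualitative severity rating boundaries (critical 9.0+, high 7.0+, medium 4.0+, low below that):

```python
import re

# Threat ID convention from the template: T-{{SYSTEM_SHORT_CODE}}-{{NN}}.
ID_PATTERN = re.compile(r"^T-[A-Z0-9]+-\d{2}$")

def qualitative_band(cvss_base_score):
    """Map a CVSS 3.1 base score to the four-band qualitative scale."""
    if cvss_base_score >= 9.0:
        return "critical"
    if cvss_base_score >= 7.0:
        return "high"
    if cvss_base_score >= 4.0:
        return "medium"
    return "low"

assert ID_PATTERN.match("T-CHECKOUT-01")
print(qualitative_band(8.1))  # prints "high"
```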
Threats against external entities (Spoofing, Repudiation):
| Threat ID | Element | STRIDE category | Threat description | Adversary class | Pre-mitigation severity |
| --------- | ------- | --------------- | ------------------ | --------------- | ----------------------- |
| {{T_EXT_01_ID}} | {{T_EXT_01_ELEMENT}} | {{T_EXT_01_CATEGORY}} | {{T_EXT_01_DESCRIPTION}} | {{T_EXT_01_ADVERSARY}} | {{T_EXT_01_SEVERITY}} |
| {{T_EXT_02_ID}} | {{T_EXT_02_ELEMENT}} | {{T_EXT_02_CATEGORY}} | {{T_EXT_02_DESCRIPTION}} | {{T_EXT_02_ADVERSARY}} | {{T_EXT_02_SEVERITY}} |
Threats against processes (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege):
| Threat ID | Element | STRIDE category | Threat description | Adversary class | Pre-mitigation severity |
| --------- | ------- | --------------- | ------------------ | --------------- | ----------------------- |
| {{T_PROC_01_ID}} | {{T_PROC_01_ELEMENT}} | {{T_PROC_01_CATEGORY}} | {{T_PROC_01_DESCRIPTION}} | {{T_PROC_01_ADVERSARY}} | {{T_PROC_01_SEVERITY}} |
| {{T_PROC_02_ID}} | {{T_PROC_02_ELEMENT}} | {{T_PROC_02_CATEGORY}} | {{T_PROC_02_DESCRIPTION}} | {{T_PROC_02_ADVERSARY}} | {{T_PROC_02_SEVERITY}} |
| {{T_PROC_03_ID}} | {{T_PROC_03_ELEMENT}} | {{T_PROC_03_CATEGORY}} | {{T_PROC_03_DESCRIPTION}} | {{T_PROC_03_ADVERSARY}} | {{T_PROC_03_SEVERITY}} |
Threats against data stores (Tampering, Repudiation, Information disclosure, Denial of service):
| Threat ID | Element | STRIDE category | Threat description | Adversary class | Pre-mitigation severity |
| --------- | ------- | --------------- | ------------------ | --------------- | ----------------------- |
| {{T_STORE_01_ID}} | {{T_STORE_01_ELEMENT}} | {{T_STORE_01_CATEGORY}} | {{T_STORE_01_DESCRIPTION}} | {{T_STORE_01_ADVERSARY}} | {{T_STORE_01_SEVERITY}} |
| {{T_STORE_02_ID}} | {{T_STORE_02_ELEMENT}} | {{T_STORE_02_CATEGORY}} | {{T_STORE_02_DESCRIPTION}} | {{T_STORE_02_ADVERSARY}} | {{T_STORE_02_SEVERITY}} |
Threats against data flows (Tampering, Information disclosure, Denial of service):
| Threat ID | Element | STRIDE category | Threat description | Adversary class | Pre-mitigation severity |
| --------- | ------- | --------------- | ------------------ | --------------- | ----------------------- |
| {{T_FLOW_01_ID}} | {{T_FLOW_01_ELEMENT}} | {{T_FLOW_01_CATEGORY}} | {{T_FLOW_01_DESCRIPTION}} | {{T_FLOW_01_ADVERSARY}} | {{T_FLOW_01_SEVERITY}} |
| {{T_FLOW_02_ID}} | {{T_FLOW_02_ELEMENT}} | {{T_FLOW_02_CATEGORY}} | {{T_FLOW_02_DESCRIPTION}} | {{T_FLOW_02_ADVERSARY}} | {{T_FLOW_02_SEVERITY}} |
Optional privacy enumeration (LINDDUN; only required if the system processes personal data):
| Threat ID | Element | LINDDUN category | Threat description | Data subject class | Pre-mitigation severity |
| --------- | ------- | ---------------- | ------------------ | ------------------ | ----------------------- |
| {{T_PRIV_01_ID}} | {{T_PRIV_01_ELEMENT}} | {{T_PRIV_01_CATEGORY}} | {{T_PRIV_01_DESCRIPTION}} | {{T_PRIV_01_DATA_SUBJECT}} | {{T_PRIV_01_SEVERITY}} |
| {{T_PRIV_02_ID}} | {{T_PRIV_02_ELEMENT}} | {{T_PRIV_02_CATEGORY}} | {{T_PRIV_02_DESCRIPTION}} | {{T_PRIV_02_DATA_SUBJECT}} | {{T_PRIV_02_SEVERITY}} |
5. Mitigation decisions per threat
Every threat in Section 4 carries a mitigation decision that names the control, the owner, and the residual risk. Threats without a mitigation decision are aspirational; the team has to either implement, accept, or transfer each one. Capture the decision in writing because the audit will read for the named owner and the residual-risk acceptance per finding rather than a generic mitigation paragraph.
Decision categories:
- Mitigate: a named control is in place or planned that reduces the threat to an acceptable residual.
- Accept: the team has decided the residual risk is acceptable and the rationale is captured for the audit; record the accepting role and the acceptance review date.
- Transfer: the threat is delegated to a third party (insurance, the hosting provider security model, an upstream supplier); the transfer agreement and the residual obligation that remains with the team are captured.
- Eliminate: the team has redesigned the feature so the threat does not apply; record the design change and the verification that the redesign is complete.
Mitigation table (one row per threat in Section 4):
| Threat ID | Decision | Control description | Control owner | Verification mechanism | Post-mitigation severity | Notes |
| --------- | -------- | ------------------- | ------------- | ---------------------- | ------------------------ | ----- |
| {{M_01_THREAT_ID}} | {{M_01_DECISION}} | {{M_01_CONTROL}} | {{M_01_OWNER}} | {{M_01_VERIFICATION}} | {{M_01_RESIDUAL}} | {{M_01_NOTES}} |
| {{M_02_THREAT_ID}} | {{M_02_DECISION}} | {{M_02_CONTROL}} | {{M_02_OWNER}} | {{M_02_VERIFICATION}} | {{M_02_RESIDUAL}} | {{M_02_NOTES}} |
| {{M_03_THREAT_ID}} | {{M_03_DECISION}} | {{M_03_CONTROL}} | {{M_03_OWNER}} | {{M_03_VERIFICATION}} | {{M_03_RESIDUAL}} | {{M_03_NOTES}} |
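The one-row-per-threat discipline can be enforced with a simple set difference between the Section 4 IDs and the mitigation rows. A sketch with illustrative IDs:

```python
# Every threat enumerated in Section 4 must carry exactly one decision
# row in Section 5; this diffs the two ID sets. IDs are examples.
threat_ids = {"T-CHECKOUT-01", "T-CHECKOUT-02", "T-CHECKOUT-03"}
mitigation_rows = [
    {"threat_id": "T-CHECKOUT-01", "decision": "Mitigate"},
    {"threat_id": "T-CHECKOUT-02", "decision": "Accept"},
]

VALID_DECISIONS = {"Mitigate", "Accept", "Transfer", "Eliminate"}

def undecided(threat_ids, mitigation_rows):
    """Threats with no decision row, or with an unrecognised decision."""
    decided = {r["threat_id"] for r in mitigation_rows
               if r["decision"] in VALID_DECISIONS}
    return sorted(threat_ids - decided)

print(undecided(threat_ids, mitigation_rows))  # prints ['T-CHECKOUT-03']
```

Running the check before sign-off turns "a threat without a decision is aspirational" from a slogan into a gate.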
Accepted residual risks:
- For each accepted residual, link the formal risk-acceptance record so the auditor can read the named approver and the review date.
- Risk-acceptance review cadence: {{RISK_ACCEPTANCE_REVIEW_CADENCE}} (six-monthly is the recommended default).
Eliminated threats:
- For each eliminated threat, record the design change and the verification that the redesign is complete.
- Design change references: {{DESIGN_CHANGE_REFERENCES}}
Open mitigation work (threats that have a planned but not yet implemented control):
- The threat is recorded with the planned control, the planned owner, the target completion date, and the interim compensating control.
- Interim compensating-control owner: {{INTERIM_COMPENSATING_CONTROL_OWNER}}
6. Verification evidence per mitigation
A mitigation that has no verification evidence is a claim. The verification section pairs each named control with the test, the static-analysis rule, the code-review check, the integration test, or the pentest finding that demonstrates the control works. Without that pointer the threat model becomes aspirational rather than operational.
Verification table (one row per mitigation in Section 5 with a Mitigate decision):
| Threat ID | Verification mechanism | Evidence location | Last verified date | Verification owner | Outcome |
| --------- | ---------------------- | ----------------- | ------------------ | ------------------ | ------- |
| {{V_01_THREAT_ID}} | {{V_01_MECHANISM}} | {{V_01_EVIDENCE_LOCATION}} | {{V_01_DATE}} | {{V_01_OWNER}} | {{V_01_OUTCOME}} |
| {{V_02_THREAT_ID}} | {{V_02_MECHANISM}} | {{V_02_EVIDENCE_LOCATION}} | {{V_02_DATE}} | {{V_02_OWNER}} | {{V_02_OUTCOME}} |
| {{V_03_THREAT_ID}} | {{V_03_MECHANISM}} | {{V_03_EVIDENCE_LOCATION}} | {{V_03_DATE}} | {{V_03_OWNER}} | {{V_03_OUTCOME}} |
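The same gate applies one section later: every Mitigate decision needs an evidence pointer. A sketch with illustrative rows:

```python
# Section 6 discipline: every Mitigate decision in Section 5 needs a
# verification row with an evidence pointer. Rows are illustrative.
mitigations = [
    {"threat_id": "T-CHECKOUT-01", "decision": "Mitigate"},
    {"threat_id": "T-CHECKOUT-02", "decision": "Accept"},
    {"threat_id": "T-CHECKOUT-03", "decision": "Mitigate"},
]
verifications = [
    {"threat_id": "T-CHECKOUT-01", "evidence_location": "tests/test_auth.py"},
]

def verification_gaps(mitigations, verifications):
    """Mitigate decisions with no linked evidence: the Section 6 gap list."""
    verified = {v["threat_id"] for v in verifications if v["evidence_location"]}
    return [m["threat_id"] for m in mitigations
            if m["decision"] == "Mitigate" and m["threat_id"] not in verified]

print(verification_gaps(mitigations, verifications))  # prints ['T-CHECKOUT-03']
```

Anything the check returns belongs in the verification-gaps list below with a planned mechanism and a target date.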
Verification mechanism categories the row above can use:
- Unit test or property-based test in the repository (link to test file and test name).
- Integration or end-to-end test in the test suite (link to scenario).
- Static analysis rule in Semgrep, CodeQL, or a comparable engine (link to rule).
- Software composition analysis check on the dependency manifest (link to policy).
- Dynamic application security test against the running system (authenticated scan, manual probe).
- Code review check enforced through the pull-request template or the review checklist.
- Penetration test finding from a named engagement (link to engagement record).
- Bug bounty programme finding (link to report or recognition).
- Manual architectural review (link to the review record with reviewer name and date).
- Production runtime evidence (log query, monitoring rule, alerting rule).
Verification gaps (mitigations that lack verification today):
- {{VERIFICATION_GAP_THREAT_ID}}: {{VERIFICATION_GAP_REASON}} - planned verification mechanism: {{VERIFICATION_GAP_PLAN}} - target verification date: {{VERIFICATION_GAP_DATE}}
Cadence:
- Unit and integration test verification runs every build.
- Static analysis runs on every pull request that touches in-scope files.
- DAST and pentest verification cadence: {{DAST_AND_PENTEST_CADENCE}}.
- Code-review check cadence: per pull request that touches in-scope files.
- Manual architectural review cadence: at least annually plus on material design change.
7. Compliance mapping and audit-evidence cross-reference
The same threat model evidences several frameworks at the same time. Capture the cross-reference once so the auditor can read the artefact under each lens without you reformatting the data per framework. ISO 27001, SOC 2, PCI DSS, NIST SSDF, OWASP SAMM, and OWASP ASVS are the most-cited destinations; add sector-specific frameworks as they apply.
Framework cross-reference table:
| Framework | Control reference | What this threat model evidences |
| --------- | ----------------- | -------------------------------- |
| ISO/IEC 27001:2022 | A.8.27 Secure system architecture and engineering principles | Documented threat-modelling activity at design time, named system, named reviewers, asset inventory, threat enumeration, mitigation decisions. |
| ISO/IEC 27001:2022 | A.8.25 Secure development lifecycle | Threat consideration as part of the secure development lifecycle. |
| ISO/IEC 27001:2022 | A.5.8 Information security in project management | Security objectives integrated into project design. |
| ISO/IEC 27034 | Application security controls | Application-specific threat-modelling artefact. |
| SOC 2 (TSC 2017) | CC1.4 Demonstrates Commitment to Competence | Documented design-time risk assessment with named reviewers. |
| SOC 2 (TSC 2017) | CC7.2 Monitors System Components | Verification evidence per mitigation feeds the monitoring view. |
| PCI DSS 4.0 | 6.2.4 Software design that addresses common attack types and considers threats during the secure software lifecycle | STRIDE enumeration plus mitigation decisions plus verification evidence. |
| PCI DSS 4.0 | 6.2.1 Bespoke and custom software developed in accordance with industry standards and secure coding practices | Threat model is the durable artefact the secure-coding practice reads. |
| NIST SSDF (SP 800-218) | PW.1.1 Identify Vulnerabilities During Design | Threat enumeration per element. |
| NIST SSDF (SP 800-218) | PW.1.2 Take Action to Address Identified Vulnerabilities | Mitigation decisions per threat. |
| NIST SSDF (SP 800-218) | PW.1.3 Verify Mitigations | Verification evidence per mitigation. |
| NIST SP 800-30 | Risk Assessment | Asset inventory, threat enumeration, residual-risk acceptance. |
| OWASP SAMM v2 | Threat Assessment maturity practice | Per-application threat assessment with defined cadence. |
| OWASP ASVS v4 | V1 Architecture, Design and Threat Modelling | Threat-modelling activity, asset classification, trust boundary documentation. |
| OWASP Application Threat Modeling | All sections | Methodology alignment. |
| {{ADDITIONAL_FRAMEWORK_1}} | {{ADDITIONAL_FRAMEWORK_1_CONTROL}} | {{ADDITIONAL_FRAMEWORK_1_EVIDENCE}} |
| {{ADDITIONAL_FRAMEWORK_2}} | {{ADDITIONAL_FRAMEWORK_2_CONTROL}} | {{ADDITIONAL_FRAMEWORK_2_EVIDENCE}} |
Audit-evidence cross-reference (the artefact identifier the auditor receives for each lens):
- Document identifier: {{DOCUMENT_IDENTIFIER}}
- Document repository: {{DOCUMENT_REPOSITORY_LOCATION}}
- Linked finding records (per mitigation in Section 5 with a Mitigate decision): {{LINKED_FINDING_RECORDS}}
- Linked risk-acceptance records (per mitigation in Section 5 with an Accept decision): {{LINKED_RISK_ACCEPTANCE_RECORDS}}
8. Document control, sign-off, and review cadence
A signed threat model with version history is the artefact the auditor reads. The sign-off panel makes the document the engineering artefact the team operates against rather than a security-team-only document. The review cadence keeps the model current as the system changes.
Document classification: Internal (the threat model contains design context that is not appropriate for external publication; share with auditors under engagement scope).
Document trail:
- Owner: {{SYSTEM_OWNER_NAME_AND_ROLE}}
- Approver: {{GOVERNANCE_APPROVER_NAME_AND_ROLE}}
- Security partner of record: {{SECURITY_PARTNER_NAME_AND_ROLE}}
- Distribution: engineering team, security team, internal audit function, GRC function, external auditor under engagement.
- Related documents: {{RELATED_DOCUMENT_LIST}} (typically the architecture decision record, the secure development lifecycle policy, the risk-acceptance policy, the vulnerability remediation SLA policy, and the audit-evidence retention policy).
Version history:
| Version | Effective date | Approver | Summary of change |
| ------- | -------------- | -------- | ----------------- |
| {{V1_NUMBER}} | {{V1_DATE}} | {{V1_APPROVER}} | {{V1_SUMMARY}} |
| {{V2_NUMBER}} | {{V2_DATE}} | {{V2_APPROVER}} | {{V2_SUMMARY}} |
| {{V3_NUMBER}} | {{V3_DATE}} | {{V3_APPROVER}} | {{V3_SUMMARY}} |
Sign-off panel:
- System owner: {{SYSTEM_OWNER_SIGNATURE_BLOCK}}
- Security partner: {{SECURITY_PARTNER_SIGNATURE_BLOCK}}
- Implementation engineer: {{IMPLEMENTATION_ENGINEER_SIGNATURE_BLOCK}}
- Governance approver: {{GOVERNANCE_APPROVER_SIGNATURE_BLOCK}}
Triggered review (a review is triggered outside the calendar cadence on any of the following):
- New authentication or authorisation mechanism added to the system.
- New external integration that crosses a trust boundary.
- New data class entering the system or a data class moving from internal to regulated.
- New compliance scope (the system enters PCI scope, ISO scope, HIPAA scope, GDPR processor or controller scope, or sector-specific scope).
- Incident with a design-level root cause.
- Major refactor of in-scope code or a meaningful change to the data-flow diagram.
- Material change in the adversary environment that the model assumed away.
Periodic review:
- Annual review at minimum, even if no triggered review has occurred.
- Six-monthly review for high-risk systems handling regulated data or accessible from the public internet.
- Reviews that do not result in a change still record the date and the named reviewer.
Programme metrics that the audit reads:
- Coverage rate: in-scope systems with a current threat model, expressed as a fraction of all in-scope systems.
- Currency rate: threat models within the review cadence, expressed as a fraction of all threat models.
- Verification coverage: threat-mitigation pairs with linked verification evidence, expressed as a fraction of all Mitigate decisions.
- Open mitigation backlog: mitigations with a planned-but-not-implemented status, with the median age and the ninety-fifth percentile age.
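The four metrics above fall out of the same records the template already captures. A sketch computing them from illustrative data (system names, dates, and records are examples, not template fields):

```python
from datetime import date
from statistics import median

# Illustrative programme records.
systems = [
    {"name": "checkout", "has_current_model": True,  "within_cadence": True},
    {"name": "billing",  "has_current_model": True,  "within_cadence": False},
    {"name": "admin",    "has_current_model": False, "within_cadence": False},
]
mitigate_rows = [
    {"threat_id": "T-CHECKOUT-01", "verified": True},
    {"threat_id": "T-CHECKOUT-03", "verified": False},
]
open_mitigations = [  # planned-but-not-implemented, with opened dates
    {"threat_id": "T-BILLING-02", "opened": date(2025, 1, 10)},
    {"threat_id": "T-ADMIN-05",   "opened": date(2025, 3, 2)},
]

today = date(2025, 6, 1)
coverage = sum(s["has_current_model"] for s in systems) / len(systems)
currency = sum(s["within_cadence"] for s in systems) / len(systems)
verification_coverage = (sum(r["verified"] for r in mitigate_rows)
                         / len(mitigate_rows))
ages = sorted((today - m["opened"]).days for m in open_mitigations)
backlog = {"count": len(ages), "median_age_days": median(ages),
           "p95_age_days": ages[min(len(ages) - 1, int(0.95 * len(ages)))]}

print(f"coverage {coverage:.0%}, currency {currency:.0%}, "
      f"verification {verification_coverage:.0%}, backlog {backlog}")
```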
Six failure modes the threat model has to design against
Threat-modelling artefacts fail engineering and audit reviews in recognisable patterns. Each failure has a structural fix that the template above is designed to enforce. Read this list before you customise the template so the customisation does not weaken the discipline that makes the threat model defensible.
The diagram never gets drawn so the threat enumeration runs against an unnamed system
The team agrees threat modelling is valuable, schedules the session, but skips the data-flow diagram because a stakeholder cannot agree on the boundary. STRIDE enumeration then runs against an implicit shape of the system that each participant holds in their head, and the model captures threats against components nobody agrees exist. The fix is to draw the diagram before enumerating threats. A whiteboard photo, an Excalidraw export, or a Mermaid block in the document is acceptable. The diagram is the asset; the enumeration runs against it.
STRIDE is enumerated as a checklist with no link to mitigation decisions
The team walks STRIDE, lists three pages of threats, and ships the document to the audit folder without ever assigning a control or an owner to each threat. The auditor reads a long list of risks the team identified but did not act on. The fix is the mitigation table in Section 5 with one row per threat, the named decision (Mitigate, Accept, Transfer, Eliminate), the named owner, and the residual severity. A threat without a decision is aspirational; the discipline is to force a decision per row.
Mitigations are claimed but never paired with verification evidence
The threat model says the SQL injection threat is mitigated by parameterised queries; the verification section does not exist. The audit asks for the test that demonstrates the mitigation works and the team scrambles. The fix is the verification table in Section 6 with one row per Mitigate decision and an evidence pointer (test file, static-analysis rule, code-review check, pentest finding). Without the pointer the model is a claim; with the pointer it is operational.
The model is produced once at design time and never reviewed against the running system
The team produced an excellent threat model in 2024. The system has shipped four major versions since, two new external integrations, and a new compliance scope. The model still describes the original design. The fix is the explicit last-review-date and next-review-date pair plus the triggered-review list in Section 8 so material change forces an update outside the calendar cadence.
Trust boundaries are implied rather than drawn
The data-flow diagram exists but the trust boundaries that cross it are not marked. STRIDE enumeration cannot tell where authentication, authorisation, or input-validation responsibilities change hands, so threats that live on the boundary go unrecorded. The fix is the trust-boundary table in Section 3 with each boundary named, located between specific elements, and tied to who authenticates, who authorises, and who validates input at that crossing.
The model is the security team artefact rather than the engineering artefact
The security partner ran the session, drafted the document, and stored it in the security-team folder. The engineering team has not seen the artefact since. The fix is the sign-off panel in Section 8 with the system owner, the security partner, the implementation engineer, and the governance approver each signing the document, and the document distribution including the engineering repository so the team that ships the system carries the artefact alongside the architecture decision records.
Ten questions the threat-model review has to answer
Periodic review keeps the threat model current against the running system. Triggered review keeps it current against material design change. Both reviews answer the same ten questions; capture the answers in the document version history so the review trail is reproducible at any moment between audit cycles.
1. Is the system scope still accurate against the current release train, and have any in-scope components been added, removed, or materially refactored since the last review?
2. Has the asset inventory been refreshed against the current data-flow diagram, and have any new data classes, integrations, or secrets entered the system without a corresponding row?
3. Are the trust boundaries still located where the diagram shows them, or has a recent refactor moved authentication, authorisation, or input-validation responsibility to a different element?
4. For every Mitigate decision in Section 5, is the named control still in place, owned, and producing the verification evidence the row in Section 6 expects?
5. For every Accept decision in Section 5, has the linked risk-acceptance record been reviewed within the cadence, and is the named approver still the right approver for the residual?
6. For every Eliminate decision in Section 5, is the design change still in place, and does the verification confirm the elimination has not been undone by a subsequent refactor?
7. Has any framework, regulation, or sector-specific expectation changed in the period in a way that requires an additional row in the compliance-mapping table in Section 7?
8. How many threats added since the last review now have a Mitigate decision with verification, an Accept decision with a current risk-acceptance record, or an Eliminate decision with a verified design change?
9. How many in-scope systems are within the review cadence (currency rate), and which systems are overdue for review?
10. How many open mitigations are in the planned-but-not-implemented status, and is the median age trending up or down against the prior period?
How the threat model pairs with SecPortal
The template above is copy-ready as a standalone artefact. The threat-modelling exercise itself runs on whiteboards, in OWASP Threat Dragon, in Microsoft Threat Modeling Tool, in IriusRisk, or in a document editor. SecPortal does not run the methodology and does not replace the engineering judgement that drives a defensible threat model. What SecPortal carries is the operational layer the team builds around the artefact, so the document the team produced is auditable rather than reconstructed from email threads.
The document management feature carries the signed threat model, the version history, the data-flow diagram revisions, the asset-inventory updates, the trust-boundary changes, and the disposition record for retired versions. The findings management feature carries each Mitigate decision in Section 5 of the template as a finding on the engagement record with a CVSS 3.1 vector, an owner, and a per-finding status, so the threat enumeration is a query against the same record that carries the verification evidence and the closure history. The activity log captures the timestamped chain of state changes by user, so the dates the threat model was last reviewed, who signed off, and which mitigations moved from planned to implemented are reproducible at any moment between audit cycles.
The compliance tracking feature maps the artefact to ISO 27001 A.8.27, SOC 2 CC1.4 and CC7.2, PCI DSS 6.2.4, NIST SSDF PW.1, OWASP SAMM Threat Assessment, and OWASP ASVS V1 controls with CSV export, so the same threat-model document can be sliced by framework when an auditor asks. The team management feature carries the role-based access control that decides who can edit the threat model, who approves it, and who reviews verification evidence per mitigation. The AI report generation workflow drafts the leadership briefing or the audit-package narrative from the same engagement data so the engineering view, the GRC view, and the executive view of the threat-model programme are the same record.
The platform does not draw the data-flow diagram, classify threats automatically, compute residual risk, or replace specialist threat-modelling tools (OWASP Threat Dragon, the Microsoft Threat Modeling Tool, IriusRisk, ThreatModeler). The methodology and the judgement remain with the team. SecPortal carries the document, the threats as findings, the verification evidence, the framework mapping, and the audit trail so the artefact becomes the operational record rather than a static deliverable.