OSSTMM
a metrics-driven view of the Open Source Security Testing Methodology Manual
The Open Source Security Testing Methodology Manual (OSSTMM), maintained by ISECOM, defines a measurable security test across five channels and seventeen modules, with an explicit Risk Assessment Values (RAV) calculation that produces a numeric attack surface metric. Run OSSTMM-aligned engagements with structured rules of engagement, channel and module coverage, RAV inputs, and reporting tracked on one record.
No credit card required. Free plan available forever.
OSSTMM: a measurable security test, not a checklist
The Open Source Security Testing Methodology Manual (OSSTMM) is published by the Institute for Security and Open Methodologies (ISECOM). It defines a security test by what is measured rather than by which tools are used: five channels (Human, Physical, Wireless, Telecommunications, Data Networks), a structured set of modules per channel, and a Risk Assessment Values (RAV) calculation that produces a reproducible operational security number. Where most methodologies describe a sequence of activities, OSSTMM describes a result to be evidenced and a calculation to support it.
OSSTMM sits next to other methodologies rather than replacing them. The Penetration Testing Execution Standard (PTES) gives the engagement workflow. The OWASP Top 10 and OWASP Testing Guide give the deep technical reference for web application testing. The NIST SP 800-115 technical guide gives the federal reference. The MITRE ATT&CK framework gives the adversary technique vocabulary. OSSTMM brings the channel model and the metric.
The five operational channels
OSSTMM is unusual in dividing the engagement by interaction channel rather than by asset type. The choice is deliberate: a single asset (a building, a person, a service) is reachable through different channels with different controls, and a methodology that does not separate them tends to under-test the human and physical paths because they are harder to scope. Capturing channel scope explicitly on the engagement keeps the report honest about what was tested and what was deliberately excluded.
Human channel
Personnel, social engineering, and process controls. Covers awareness, response under pressure, escalation discipline, and the human paths into the organisation that a network test never touches. The channel is in scope when the buyer wants to evidence behavioural controls, not just technical ones.
Physical channel
Facility access, environmental controls, locks, badges, tailgating, and physical authentication paths. Most relevant for engagements on premises where the buyer needs evidence the physical perimeter is consistent with the digital one, including data centres, regulated workspaces, and shared occupancy estates.
Wireless channel
Spectrum, mobile, and short-range radio. Covers Wi-Fi, Bluetooth, RFID, and similar interfaces. OSSTMM keeps wireless as a channel in its own right because the attack surface, the test technique, and the regulatory frame for radio testing are distinct from data-network testing.
Telecommunications channel
Voice, fax, and traditional telecommunications interfaces, including modern unified communications and SIP. The channel persists in OSSTMM because legacy and unified-communications paths still produce findings in many enterprise estates, and the test technique differs from a typical IP services test.
Data Networks channel
The channel most engagements default to: enumeration, service identification, vulnerability validation, exploitation, and configuration review across IP-reachable systems. OSSTMM keeps the channel separate from the others so a single channel scope (Data Networks only) is documented explicitly rather than implied.
The RAV: a reproducible attack surface number
The Risk Assessment Values (RAV) calculation is what gives OSSTMM its compliance and benchmarking appeal. The RAV produces a single number for the operational security state of the test target, derived from porosity (the unprotected attack surface), the controls observed, and the limitations identified. The number is reproducible when inputs are captured consistently across engagements, which makes year-on-year comparison and cross-portfolio reporting tractable. The buyer gets a metric that survives the report cycle; the consultancy gets a deliverable that demonstrates value over time, not just at delivery.
Porosity (operational security baseline)
Visibility (what is detectable), access (what is reachable), and trust (what is allowed without authentication). Porosity describes the unprotected attack surface and is the starting point for the RAV. A higher porosity does not by itself indicate insecurity, but it is the surface against which controls are measured.
Controls (the ten operational controls)
Authentication, indemnification, resilience, subjugation, continuity, non-repudiation, confidentiality, privacy, integrity, and alarm. Each control reduces the operational risk that porosity introduces. OSSTMM captures controls as observed and verified during the test, not declared in policy.
Limitations (vulnerability, weakness, concern, exposure, anomaly)
Vulnerabilities are exploitable issues. Weaknesses degrade controls without being exploitable directly. Concerns relate to operational practice gaps. Exposures are unintended information disclosures. Anomalies are unexplained behaviours that warrant follow-up. The five-way classification produces a sharper finding ledger than a single severity rating alone.
The five-way limitation classification (vulnerability, weakness, concern, exposure, anomaly) is also useful outside an OSSTMM engagement. It produces a sharper finding ledger than a single severity rating because not every issue is a vulnerability: a process gap is a concern, a logging gap is a weakness, an unintended information disclosure is an exposure. The findings management feature captures this kind of structured metadata so the OSSTMM ledger feeds directly into the RAV calculation rather than living in a side spreadsheet.
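To make the moving parts concrete, here is a minimal sketch of how the three input families combine. It deliberately uses simple linear arithmetic with illustrative field names; the published OSSTMM v3 equation is logarithmic and weights missing controls against porosity, so treat this as the shape of the calculation, not the calculation itself.

```python
from dataclasses import dataclass, field

@dataclass
class RavInputs:
    """Illustrative RAV inputs for one channel (field names are ours, not ISECOM's)."""
    visibility: int  # targets detectable from the channel
    access: int      # interaction points reachable without credentials
    trust: int       # interactions allowed without authentication
    controls: dict[str, int] = field(default_factory=dict)     # control -> verified instances
    limitations: dict[str, int] = field(default_factory=dict)  # classification -> count

def porosity(i: RavInputs) -> int:
    # Porosity: the unprotected attack surface, the sum of
    # visibility, access, and trust counts for the channel.
    return i.visibility + i.access + i.trust

def simplified_score(i: RavInputs) -> float:
    # NOT the published RAV equation. This linear stand-in only shows
    # the direction of each input: porosity and limitations pull the
    # score down, verified controls (out of the ten) pull it back up.
    credit = sum(i.controls.values()) / 10  # ten operational controls
    debit = sum(i.limitations.values())
    return 100.0 - porosity(i) + credit - debit

inputs = RavInputs(
    visibility=12, access=7, trust=3,
    controls={"authentication": 10, "alarm": 4},
    limitations={"vulnerability": 2, "weakness": 3, "exposure": 1},
)
print(f"porosity={porosity(inputs)}, score={simplified_score(inputs):.1f}")
```

The point of capturing inputs this way is that the same finding ledger that feeds the report also feeds the score, which is what makes the year-on-year comparison reproducible.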
Engagement type: blind, double blind, gray box, double gray box, tandem, reversal
OSSTMM is explicit that an engagement is interpretable only when the test type is captured. A finding rate from a blind test means something different from a finding rate from a tandem test, and a buyer comparing two engagements without that context will draw the wrong conclusion. Capture the engagement type on the rules of engagement record so the report carries its own context.
Blind
The tester has no prior knowledge of the target. The target operator has full knowledge of the test. Used to evidence what an external attacker with no insider information can achieve under defensive monitoring.
Double blind
The tester has no prior knowledge of the target, and the target operator has no advance notice of the test. A defensive readiness test, not a controls verification test, used to evidence detection and response under realistic conditions.
Gray box
The tester has limited target knowledge (asset list, scoping context). The target operator has full test knowledge. The most common engagement type because it produces actionable findings without spending engagement hours on reconnaissance.
Double gray box
Both sides have partial information. Used where the buyer wants a controls-and-detection test with realistic noise and a tighter feedback loop than a fully blind test would allow.
Tandem
Both sides operate fully transparently. The tester and the target operator collaborate on the test plan. Suited to cooperative compliance reviews where the goal is evidence and remediation, not detection assessment.
Reversal
The tester engages with full knowledge of the target's processes and operational security, but the target has no prior knowledge of what, how, or when the test will occur. Used to evidence detection capability and response timing.
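The six types reduce to a two-axis knowledge matrix: what the tester knows about the target, and what the target knows about the test. The sketch below paraphrases the definitions above into that matrix; the enum labels are ours, not the standard's.

```python
from enum import Enum

class Knowledge(Enum):
    NONE = "none"
    PARTIAL = "partial"
    FULL = "full"

# (tester's knowledge of the target, target's knowledge of the test)
# for each OSSTMM test type, paraphrasing the definitions above.
TEST_TYPES: dict[str, tuple[Knowledge, Knowledge]] = {
    "blind":           (Knowledge.NONE,    Knowledge.FULL),
    "double blind":    (Knowledge.NONE,    Knowledge.NONE),
    "gray box":        (Knowledge.PARTIAL, Knowledge.FULL),
    "double gray box": (Knowledge.PARTIAL, Knowledge.PARTIAL),
    "tandem":          (Knowledge.FULL,    Knowledge.FULL),
    "reversal":        (Knowledge.FULL,    Knowledge.NONE),
}

def describe(test_type: str) -> str:
    tester, target = TEST_TYPES[test_type]
    return (f"{test_type}: tester knowledge of target = {tester.value}, "
            f"target knowledge of test = {target.value}")

print(describe("reversal"))
# reversal: tester knowledge of target = full, target knowledge of test = none
```

Reading the matrix makes the interpretability point mechanical: two engagements are comparable only when their rows match.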
The module set per channel
OSSTMM tests each channel through a structured set of seventeen modules. The list below summarises the module categories that appear across the standard, and a sketch of a coverage ledger follows it. Tracking module coverage per channel on the engagement record evidences breadth rather than just the modules the assigned tester prefers, and the audit trail makes the RAV inputs verifiable.
- Posture review (regulatory, contractual, and policy posture for the test)
- Logistics (channel-specific test plan, equipment, and authorisation)
- Active detection verification (detection capability under known stimuli)
- Visibility audit (what is enumerable from the channel)
- Access verification (what can be accessed without authentication)
- Trust verification (interaction paths between assets and external entities)
- Controls verification (authentication, integrity, alarm, and the seven other operational controls)
- Process verification (operational practice consistency under stress)
- Configuration verification (technical configuration against agreed baselines)
- Property validation (data and asset ownership, including intellectual property exposure)
- Segregation review (separation of duties, network segmentation, identity boundaries)
- Exposure verification (information disclosure beyond authorised channels)
- Competitive intelligence scouting (information leakage useful to a market adversary)
- Quarantine verification (isolation of suspect activity and quarantined assets)
- Privileges audit (privilege escalation paths and standing entitlement review)
- Survivability validation (resilience under attack and recovery posture)
- Alert and log review (logging quality and alert traceability)
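Tracked operationally, that coverage is just a channel-by-module ledger with an evidence pointer per cell. A minimal sketch, assuming nothing about any particular platform's schema:

```python
CHANNELS = ["human", "physical", "wireless", "telecommunications", "data networks"]

MODULES = [
    "posture review", "logistics", "active detection verification",
    "visibility audit", "access verification", "trust verification",
    "controls verification", "process verification",
    "configuration verification", "property validation",
    "segregation review", "exposure verification",
    "competitive intelligence scouting", "quarantine verification",
    "privileges audit", "survivability validation", "alert and log review",
]

# Coverage ledger: channel -> module -> evidence reference (None = not yet executed).
coverage: dict[str, dict[str, str | None]] = {
    ch: {m: None for m in MODULES} for ch in CHANNELS
}

def record(channel: str, module: str, evidence_ref: str) -> None:
    """Mark a module as executed on a channel, pointing at its evidence."""
    coverage[channel][module] = evidence_ref

def gaps(channel: str) -> list[str]:
    """Modules not yet evidenced on a channel: the breadth check."""
    return [m for m, ref in coverage[channel].items() if ref is None]

record("data networks", "visibility audit", "scan-2024-07-01")
print(len(gaps("data networks")))  # 16 modules still unevidenced
```

The gap list is the breadth argument in the report: every unexecuted module is either evidenced later or explicitly excluded in the rules of engagement.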
Rules of engagement: signed, channel-aware, before any active testing
OSSTMM expects a signed rules of engagement document covering scope, test type, channel coverage, allowed techniques, escalation, and reporting expectations before any active testing. Channel awareness matters: the rules of engagement for a Wireless channel test carry different authorisations than a Data Networks test, and the rules of engagement for a Human channel test carry different stop-test conditions than a Physical channel test. The free rules of engagement template provides the operational ROE document that an OSSTMM engagement expects to be signed before any reconnaissance starts.
- Scope: in-scope assets per channel, with explicit out-of-scope exclusions per channel and an authorised business owner sign-off
- Test type: blind, double blind, gray box, double gray box, tandem, or reversal, captured in writing so the report is interpretable
- Engagement window: testing windows per channel (Data Networks is rarely 24x7; Human is often constrained to working hours)
- Allowed techniques per channel, including a clear stance on social engineering, denial-of-service, exploitation depth, and data handling
- Communication plan: primary and secondary technical contacts, escalation paths, stop-test conditions, and incident protocol
- Channel-specific authorisation: cloud provider acceptable use, building access permissions, radio licensing exemptions, and telecoms operator notifications where required
- Evidence handling: how raw test output is captured, retained, and transmitted to the buyer (especially for Wireless and Telecommunications)
- Reporting expectations: the Security Test Audit Report (STAR) structure, the technical report structure, and the RAV calculation inputs
- Legal documents: master agreement, statement of work, non-disclosure, and authorisation letter, signed before any active testing starts
Reporting: the STAR and the technical body
OSSTMM defines a Security Test Audit Report (STAR) format that summarises the engagement on a single page or two: scope, channels tested, test type, RAV, and an attestation. The STAR is the buyer-facing executive artefact. The full technical report covers per-finding evidence, reproduction, and remediation. Both belong on the engagement record so the deliverable is reproducible and the audit trail is intact. The pentest report writing guide and the penetration testing report template cover the structural questions in more depth, and the AI report generation feature composes the deliverable from the underlying engagement, channel coverage, RAV inputs, and finding evidence rather than from a blank page.
How OSSTMM compares to PTES, OWASP, NIST 800-115, and ATT&CK
OSSTMM is rarely run in isolation. The strongest pentest programmes pair it with at least one workflow methodology and one technique catalogue. The contrast below is a working operator's view: the practitioner question is which standard to combine with OSSTMM, not which to pick instead of it.
OSSTMM vs PTES
PTES is workflow-shaped: seven sections from pre-engagement through reporting that describe how to run a test. OSSTMM is metrics-shaped: five channels and a numeric RAV that describe how to measure the result. They compose well: use PTES as the engagement scaffold and OSSTMM as the measurement and reporting layer when the buyer wants reproducible numbers across engagements.
OSSTMM vs OWASP Testing Guide
The OWASP Testing Guide is a deep web application test case catalogue. OSSTMM is a multi-channel methodology with explicit measurement. The OWASP guide fits inside the Data Networks channel of OSSTMM as the technical reference for the web application phase. They do not compete; they fill different layers of the same engagement.
OSSTMM vs NIST SP 800-115
NIST SP 800-115 is a US government technical guide covering review, target identification, validation, and reporting. OSSTMM is more prescriptive about measurement and channel coverage and lighter on the federal artefact set. NIST SP 800-115 is often required by reference in compliance contexts; OSSTMM is reached for when the buyer wants a numeric attack surface measurement on top.
OSSTMM vs MITRE ATT&CK
ATT&CK is a knowledge base of adversary tactics and techniques, not a methodology. OSSTMM describes how to run and measure an engagement; ATT&CK describes what attackers do during one. The strongest pentest programmes use OSSTMM (or PTES) to structure the engagement and ATT&CK to tag findings by the techniques they evidence.
Where SecPortal fits in an OSSTMM-aligned engagement
SecPortal is the operating layer for an OSSTMM-aligned engagement. The platform handles scope, channel coverage, module execution evidence, RAV inputs, and the STAR delivery so the engagement runs as a single workflow rather than a long email thread with attachments. For consultancies running OSSTMM engagements on behalf of multiple clients, the security consultants workspace bundles that with branded client portals and findings deduplication across engagements.
- Engagement management captures rules of engagement, channel scope, test type, and authorisation as a structured record on the engagement, so OSSTMM Section 1 logistics become a single source of truth rather than a contract attachment
- Findings management with CVSS 3.1 scoring, 300+ templates, and Nessus or Burp Suite imports lets vulnerabilities, weaknesses, concerns, exposures, and anomalies be captured with consistent metadata for the RAV calculation
- Attack surface management produces the visibility and access inputs OSSTMM porosity scoring depends on: subdomain enumeration, fingerprinting, exposed services, and cloud exposure are tracked with the engagement so the recon record survives the engagement
- External and authenticated scanning produce the raw data Data Networks channel modules depend on; output is retained per scan window and linked to the finding it supports
- AI-generated reports compose the OSSTMM deliverable from the underlying engagement, channel coverage, RAV inputs, and finding evidence, producing executive summary, technical body, STAR section, and remediation roadmap rather than a thin export
- Compliance tracking lets a single OSSTMM engagement satisfy ISO 27001, SOC 2, NIST SP 800-53, and PCI DSS evidence requirements without rebuilding the bundle for each framework audit
Looking for the engagement workflow itself, end-to-end? The penetration testing use case captures how SecPortal turns an OSSTMM-shaped engagement into a structured record covering scope, channels, modules, findings, retests, and the deliverable. For penetration testing firms running OSSTMM consistently across a client portfolio, the same engagement record carries the RAV inputs forward through retests so the year-on-year comparison the methodology promises is operationally tractable.
Need to position OSSTMM alongside a broader methodology comparison? The penetration testing methodology guide covers OSSTMM, PTES, OWASP, and NIST SP 800-115 side by side, including where each one is the strongest fit and how they tend to be layered in practice.
Key control areas
SecPortal helps you track and manage compliance across these domains.
The five channels (Human, Physical, Wireless, Telecommunications, Data Networks)
OSSTMM divides the operational security test into channels rather than asset types. The Human channel covers personnel, social engineering, and process controls. The Physical channel covers facility access, environmental controls, and physical security testing. The Wireless and Telecommunications channels cover spectrum, mobile, and voice infrastructure. The Data Networks channel covers the systems most engagements default to. Capture which channels are in scope on the engagement record so the report reflects the agreed test surface, not a generic vulnerability list.
The seventeen modules per channel (posture review, logistics, active detection verification, visibility audit, access verification, trust verification, controls verification, process verification, configuration verification, property validation, segregation review, exposure verification, competitive intelligence scouting, quarantine verification, privileges audit, survivability validation, alert and log review)
Each channel is tested through a structured set of seventeen modules: posture review, logistics, active detection verification, visibility audit, access verification, trust verification, controls verification, process verification, configuration verification, property validation, segregation review, exposure verification, competitive intelligence scouting, quarantine verification, privileges audit, survivability validation, and alert and log review. Track module coverage per channel so the engagement evidences breadth, not just depth on the modules a tester prefers.
Operational security testing types (blind, double blind, gray box, double gray box, tandem, reversal)
OSSTMM explicitly classifies engagements by what the tester knows and what the target knows. In a blind test the tester has no knowledge of the target while the target operator has full knowledge of the test; a double blind test withholds knowledge from both sides. Gray box and double gray box tests share partial information. Tandem tests are fully transparent on both sides, and reversal tests give the tester full knowledge while the target has none. Capture the engagement type on the rules of engagement record so the report is interpretable: a finding rate is meaningless without the test type that produced it.
Risk Assessment Values (RAV) and operational security metrics
OSSTMM is unusual in producing a numeric attack surface metric. The RAV calculation combines visibility, access, and trust as the operational security baseline, then applies controls and limitations to produce an actual security number. The RAV is reproducible across engagements when inputs are captured consistently, which is what gives OSSTMM its compliance and benchmarking appeal. Capture porosity (visibility, access, trust), controls (authentication, indemnification, resilience, subjugation, continuity, non-repudiation, confidentiality, privacy, integrity, alarm), and limitations (vulnerability, weakness, concern, exposure, anomaly) on the engagement so the RAV can be derived from the underlying findings rather than calculated separately.
Rules of engagement and Security Test Audit Report (STAR)
OSSTMM requires a signed rules of engagement document covering scope, test type, channels in scope, allowed techniques, escalation, and reporting expectations before any active testing. The standard also defines a Security Test Audit Report (STAR) format that summarises the engagement: scope, channels tested, RAV, attestation. Pair the STAR with the full technical report so the buyer has both a one-page operational summary and a reproducible technical body. Store the rules of engagement, the channel coverage, the modules executed, and the STAR alongside the engagement record so the audit trail is intact.
Trust metrics and the OSSTMM trust analysis
OSSTMM treats trust as a measurable property, not a label. The trust analysis covers ten trust properties (size, symmetry, transparency, consistency, integrity, offsets, value, components, porosity, subjugation) so a tester can describe why a relationship is trusted, what depends on the trust, and what fails if the trust is broken. The trust analysis is what allows OSSTMM engagements to evidence interaction risk in supply chain, third-party, and cloud-dependency contexts where the asset model on its own does not surface the issue.
Related features
Orchestrate every security engagement from start to finish
Vulnerability management software that tracks every finding
AI-powered reports in seconds, not days
Vulnerability scanning tools that map your attack surface
Test web apps behind the login
Map your attack surface before attackers do
Compliance tracking without a full GRC platform
Run OSSTMM engagements with metric-driven evidence
Track channel coverage, module execution, RAV inputs, and STAR delivery on one engagement record. Start free.
No credit card required. Free plan available forever.