Framework

NIST AI Risk Management Framework
GOVERN, MAP, MEASURE, MANAGE on one operating record

The NIST AI Risk Management Framework (AI RMF 1.0, NIST AI 100-1) is the voluntary, sector-agnostic framework US federal agencies, regulated buyers, and enterprise AI programmes read against when they need to evidence trustworthy AI. This page covers the four core functions (GOVERN, MAP, MEASURE, MANAGE), the seven characteristics of trustworthy AI, the Generative AI Profile (NIST AI 600-1), the Playbook companion, the Profile model, and the audit evidence a workspace-driven AI risk programme is expected to produce.

No credit card required. Free plan available forever.

NIST AI Risk Management Framework explained

The NIST AI Risk Management Framework (AI RMF 1.0) is the voluntary, sector-agnostic framework the National Institute of Standards and Technology published in January 2023 as NIST AI 100-1. It was directed by the National Artificial Intelligence Initiative Act of 2020 and developed through an open consultation with AI researchers, deployers, regulated industry buyers, civil society, and federal agencies. The framework is rights-preserving and use-case-agnostic; it does not mandate specific controls, but organises how an organisation thinks about, identifies, measures, and manages risk in the design, development, deployment, and evaluation of AI systems. The companion Generative AI Profile (NIST AI 600-1, July 2024) extends the framework with the twelve risks unique to or exacerbated by generative AI and the actions an organisation can take across the four core functions to address each.

For internal AppSec teams, product security teams, vulnerability management functions, GRC owners, security engineering teams, and CISOs deploying AI in production or building AI features, AI RMF is the universal vocabulary that lets a single AI risk record be read against ISO/IEC 42001 certification, OWASP LLM Top 10, MITRE ATLAS adversarial tactics, the EU AI Act technical documentation, federal procurement requirements, and customer audit requests. Programmes that already operate ISO 27001 or SOC 2 do not need to migrate; AI RMF sits alongside the existing information-security stack and carries the AI-specific risk-management vocabulary the wider standards do not.

The four core functions of AI RMF 1.0

AI RMF organises AI risk management into four core functions. GOVERN is cross-cutting and feeds the other three; MAP, MEASURE, and MANAGE are operational and run as a cycle per AI system. The function names are the highest-level vocabulary the framework uses; the categories beneath each function (GOVERN 1 through GOVERN 6, MAP 1 through MAP 5, MEASURE 1 through MEASURE 4, and MANAGE 1 through MANAGE 4 in the version 1.0 numbering) and the numbered subcategories beneath them are what the work is actually planned and reported against.
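
As a rough sketch of how that numbering is carried in an operating record, the structure below keys planning entries to the version 1.0 category identifiers. The identifiers are the framework's; the data-structure shape and the helper function are illustrative, not anything NIST prescribes.

```python
# AI RMF 1.0 core functions and the categories beneath them, keyed the
# way a programme might tag its planning and reporting records.
# The identifiers mirror the version 1.0 numbering; the shape is ours.
AI_RMF_CATEGORIES: dict[str, list[str]] = {
    "GOVERN": [f"GOVERN {n}" for n in range(1, 7)],   # cross-cutting
    "MAP": [f"MAP {n}" for n in range(1, 6)],
    "MEASURE": [f"MEASURE {n}" for n in range(1, 5)],
    "MANAGE": [f"MANAGE {n}" for n in range(1, 5)],
}

def categories_for(function: str) -> list[str]:
    """Category identifiers beneath one core function, e.g. "GOVERN"."""
    return AI_RMF_CATEGORIES[function]
```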

GOVERN (cross-cutting)

GOVERN cultivates and embeds the risk management culture, policies, processes, and structures the other three functions operate inside. It carries leadership accountability, AI risk strategy and tolerance, the named roles for AI development and oversight, the policy hierarchy AI work runs against, supplier and third-party AI risk management, the workforce competence requirements, and the mechanism by which AI risk decisions are reviewed by leadership. GOVERN is cross-cutting rather than sequential: it is the input the rest of the framework reads against.

MAP

MAP establishes the context an AI system is being designed, developed, deployed, or evaluated in and identifies the risks that context produces. The function names the intended purpose, the categorisation of the system, the deployment context, the affected stakeholders, the legal and regulatory expectations, the third-party data and model sources, the metrics that signal risk, and the impacts the system can produce on individuals, communities, and the organisation. MAP is the foundation; if it is shallow, MEASURE and MANAGE inherit the shallowness.

MEASURE

MEASURE applies tools, techniques, and methodologies to analyse, assess, benchmark, and monitor the AI risks identified in MAP. The function covers selecting the appropriate methods, evaluating the trustworthy characteristics, tracking metrics, validating results, and gathering feedback from affected communities. MEASURE produces the evidence the rest of the programme acts on; without it, MANAGE is asserting risk reduction rather than demonstrating it.
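
As one hedged example of a MEASURE artefact in code, the sketch below computes a population stability index (PSI) between a baseline and a production score distribution. PSI is one common drift signal, not a framework requirement; the bucket count, the epsilon guard, and the threshold comment are illustrative choices.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               buckets: int = 10) -> float:
    """Compare the production score distribution against a baseline.
    Assumes continuous scores; bucket edges come from baseline quantiles."""
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1)[1:-1])
    base_pct = np.bincount(np.searchsorted(edges, baseline),
                           minlength=buckets) / len(baseline)
    prod_pct = np.bincount(np.searchsorted(edges, production),
                           minlength=buckets) / len(production)
    eps = 1e-6  # guard against empty buckets before taking the log
    base_pct = np.clip(base_pct, eps, None)
    prod_pct = np.clip(prod_pct, eps, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Illustrative reading: PSI above roughly 0.25 is often treated as
# material drift and would feed the MANAGE disposition for the system.
```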

MANAGE

MANAGE allocates risk resources to mapped and measured risks on a regular basis. The function decides which risks the programme accepts, mitigates, transfers, or escalates; it documents incidents and adverse events, monitors third-party AI risks, plans for incident response, and feeds lessons learned back to GOVERN. MANAGE is where the operating decisions land and where the audit trail accumulates: the disposition per risk, the named owner, the review cadence, and the evidence that closure was real rather than asserted.
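
A minimal sketch of what one MANAGE register entry can carry, assuming a programme records disposition, owner, cadence, and evidence per risk. The field names and the record shape are illustrative; the four disposition values are the framework's vocabulary.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Disposition(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    ESCALATE = "escalate"

@dataclass
class RiskTreatment:
    """One MANAGE register entry: disposition, named owner, review
    cadence, and the evidence that closure was real rather than asserted."""
    risk_id: str
    category: str                  # e.g. "MANAGE 1" in the v1.0 numbering
    disposition: Disposition
    owner: str
    review_cadence_days: int
    next_review: date
    evidence_refs: list[str] = field(default_factory=list)
```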

The seven characteristics of trustworthy AI

Trustworthy AI is the criterion the framework uses to evaluate an AI system. The seven characteristics are not independent: tradeoffs across them are explicit in the framework, and the documented disposition is part of the audit evidence. The MEASURE function evaluates the AI system against each characteristic; the MAP function uses them as the design principles risk identification reads against; and the MANAGE function carries the operating expectation that the characteristics are maintained. A minimal evaluation-record sketch follows the list below.

  • Valid and reliable: the AI system performs as intended for its specified purpose, with measurable accuracy, robustness, and reliability across the deployment context. The validity claim is a measurement claim, not an assertion, and the evidence references read against the test set, the deployment monitoring, and the drift detection.
  • Safe: the AI system does not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered. Safe is a context-dependent characteristic; the safety claim has to read against the use case, the deployment surface, and the failure modes the MAP function identified.
  • Secure and resilient: the AI system can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorised access and use, and it can return to normal function after an unexpected adverse event. This is the characteristic that pulls in classical security testing (authenticated DAST, SAST, SCA, threat modelling) alongside AI-specific testing (prompt injection, model extraction, training data poisoning, adversarial input).
  • Accountable and transparent: the AI system has clear roles, responsibilities, and accountability for outcomes. Information about the system (purpose, capabilities, limitations, training data sources, evaluation methods) is disclosed at a level appropriate for the stakeholder. Transparency is the mechanism the principle uses; accountability is the consequence.
  • Explainable and interpretable: the operation of the system can be understood at a level appropriate for the audience. Explainability covers the reasoning the system used for a specific output; interpretability covers the reasoning humans can derive about how the system makes decisions in general. The two are related but not identical.
  • Privacy-enhanced: the AI system protects the autonomy, identity, and dignity of individuals by safeguarding values such as anonymity, confidentiality, and control. Privacy-enhanced AI reads against data minimisation, purpose limitation, individual control, training data lineage, and the privacy properties of model outputs.
  • Fair with harmful bias managed: the AI system promotes equity by addressing harmful bias and discrimination. The framework explicitly notes that fairness is contested and context-specific; the operational expectation is that the programme identifies the relevant harms, measures the disparate impact, and documents the mitigations and the tradeoffs.
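
The evaluation-record sketch mentioned above, assuming MEASURE logs one entry per characteristic per AI system with evidence references and a tradeoff note. The status scale and the field names are illustrative; the seven characteristic names are the framework's.

```python
from dataclasses import dataclass, field

CHARACTERISTICS = (
    "valid and reliable", "safe", "secure and resilient",
    "accountable and transparent", "explainable and interpretable",
    "privacy-enhanced", "fair with harmful bias managed",
)

@dataclass
class CharacteristicEvaluation:
    characteristic: str      # one of CHARACTERISTICS
    status: str              # e.g. "met" / "partial" / "gap" (illustrative scale)
    evidence_refs: list[str] = field(default_factory=list)
    tradeoff_note: str = ""  # documented disposition where characteristics conflict

def evaluation_gaps(records: list[CharacteristicEvaluation]) -> list[str]:
    """Characteristics with no evaluation on record: the audit gap."""
    covered = {r.characteristic for r in records}
    return [c for c in CHARACTERISTICS if c not in covered]
```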

The GOVERN function in operating practice

GOVERN is the function most programmes underestimate. The framework expects the operating record (policy, named roles, training, supplier oversight, stakeholder engagement, leadership review cadence) rather than the signed policy document. The list below is the practical content of the function in the vocabulary AI RMF 1.0 uses and what each category looks like in a workspace-driven programme; a minimal roles-register sketch follows the list.

  • GOVERN 1: AI risk management policies, processes, procedures, and practices are established and transparent across the organisation. The policy hierarchy is documented, dated, and assigned an owner; the review cadence is recorded; the relationship to the wider information security policy stack is explicit so AI risk does not run parallel to the rest of the programme.
  • GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks. Named roles cover AI risk owner, AI safety lead, AI security lead, ML engineering lead, AI ethics or fairness lead, and the executive sponsor; training requirements per role are recorded.
  • GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritised in the mapping, measuring, and managing of AI risks. The framework explicitly expects this as a structural commitment rather than an aspirational statement; the evidence is the participation record across MAP, MEASURE, and MANAGE activity.
  • GOVERN 4: Organisational teams are committed to a culture that considers and communicates AI risk. The communication record covers internal stakeholders, external stakeholders, and the affected communities; the framework names the absence of a communication record as a documented failure mode.
  • GOVERN 5: Processes are in place for robust engagement with relevant AI actors. AI actors include developers, deployers, end users, and impacted parties; engagement is structured rather than informal, with the cadence and the escalation path recorded.
  • GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues. Third-party AI risk reads against the model supplier, the training data supplier, the inference platform, the embedding service, the agent toolchain, and the downstream API integrations the AI system depends on.
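
The roles-register sketch referenced above, covering the GOVERN 2 expectation that named roles carry recorded training requirements. The role names follow the list; the record shape and the gap check are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class NamedRole:
    """One GOVERN 2 accountability entry."""
    role: str                      # e.g. "AI risk owner", "AI security lead"
    holder: str
    training_required: list[str] = field(default_factory=list)
    training_completed: list[str] = field(default_factory=list)

def training_gaps(register: list[NamedRole]) -> dict[str, list[str]]:
    """Per role, the required training not yet evidenced on the record."""
    return {
        r.role: [t for t in r.training_required if t not in r.training_completed]
        for r in register
    }
```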

The Generative AI Profile (NIST AI 600-1)

Programmes deploying or building on generative AI read AI RMF 1.0 alongside the GenAI Profile rather than as a substitute. NIST AI 600-1, published July 2024, is the cross-sectoral Profile for managing the risks of generative AI. It identifies twelve risks unique to or exacerbated by generative AI and proposes actions an organisation can take across GOVERN, MAP, MEASURE, and MANAGE to address each. The twelve risks are CBRN information; confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; harmful bias and homogenisation; human-AI configuration; information integrity; information security; intellectual property; obscene, degrading, and abusive content; and value chain and component integration risks. The Profile is voluntary and non-prescriptive; programmes pick the actions that fit their context, their generative AI system, and their risk tolerance. The companion OWASP LLM Top 10 explainer covers the threat catalogue most LLM application security programmes use alongside the GenAI Profile.
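
As a small illustration of how a programme can seed its register from the Profile, the sketch below opens one entry per AI 600-1 risk for a generative AI system. The twelve risk names follow the list above; the register shape, the status field, and the empty actions list are illustrative.

```python
# The twelve AI 600-1 risk names, as listed above.
GENAI_RISKS = (
    "CBRN information", "confabulation",
    "dangerous, violent, or hateful content", "data privacy",
    "environmental impacts", "harmful bias and homogenisation",
    "human-AI configuration", "information integrity",
    "information security", "intellectual property",
    "obscene, degrading, and abusive content",
    "value chain and component integration",
)

def seed_genai_register(system_id: str) -> list[dict]:
    """Open one register entry per risk for a generative AI system; the
    Profile's per-function actions attach as MAP/MEASURE/MANAGE work."""
    return [
        {"system": system_id, "risk": risk, "status": "open", "actions": []}
        for risk in GENAI_RISKS
    ]
```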

How an AI RMF programme operates across the cycle

An AI RMF programme runs as a structured engagement rather than an annual report. The cycle below is the practical ordering most teams follow when AI RMF is treated as the operating framework rather than a reporting wrapper. The cycle compounds: each re-baseline starts from the prior Profile, the Profile reads against the next cycle, and the evidence pack accumulates rather than being rebuilt; a short re-baseline sketch in code follows the list.

  1. Establish the GOVERN baseline. Name the executive owner, document the AI risk management strategy, record the AI risk tolerance, publish the AI policy hierarchy, capture the supplier oversight model for AI components, and record the workforce competence and training expectations per role. The GOVERN baseline is the input every other function reads against.
  2. Inventory the in-scope AI systems. Walk the deployed AI systems, the AI systems in build, and the AI systems under evaluation. Capture the intended purpose, the deployment context, the categorisation, the third-party data and model sources, the affected stakeholders, and the legal and regulatory expectations per system. The inventory is the input MAP works against; without it, MAP is generic.
  3. Build a Use-Case Profile per AI system. Translate the AI RMF subcategories into the specific operating story for the system in question. The Profile names the in-scope subcategories, the implementation actions, the evidence references, and the metrics MEASURE will produce. Borrow from a published Profile (such as the GenAI Profile, NIST AI 600-1) where one applies.
  4. Run the MEASURE cycle. Evaluate the trustworthy characteristics with measurable evidence: validity and reliability test results, safety failure-mode reviews, security testing (classical and AI-specific), accountability and transparency disclosures, explainability and interpretability evaluations, privacy reviews, fairness diagnostics. The MEASURE output updates the Profile and feeds MANAGE.
  5. Operate MANAGE on a cadence. Treat each mapped and measured risk through accept, mitigate, transfer, or escalate, with named owner, deadline, and review cadence per disposition. Capture AI incidents and adverse events as they occur, with the post-event improvements feeding back into GOVERN. The MANAGE record is what the audit and the leadership review read against.
  6. Re-baseline. Roll the per-subcategory progress into the leadership report, refresh the per-system Use-Case Profile to reflect the new state, and start the next cycle from the new baseline rather than from a blank page. The re-baseline is what makes the framework durable across model version changes, deployment expansion, and leadership turnover.
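
The re-baseline sketch referenced above: a minimal per-system Profile record carried across cycles, assuming evidence references accumulate rather than being rebuilt. Everything here (field names, the example system id, the evidence labels) is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Per-system Use-Case Profile carried across cycles (steps 3 and 6)."""
    system_id: str
    cycle: int = 0
    in_scope: list[str] = field(default_factory=list)      # category ids
    evidence_refs: list[str] = field(default_factory=list)

def rebaseline(profile: Profile, new_evidence: list[str]) -> Profile:
    """Step 6: the next cycle starts from the prior Profile, with the
    MEASURE and MANAGE evidence accumulated rather than rebuilt."""
    return Profile(
        system_id=profile.system_id,
        cycle=profile.cycle + 1,
        in_scope=list(profile.in_scope),
        evidence_refs=profile.evidence_refs + new_evidence,
    )

# Cycle 2 starts from cycle 1's record, not from a blank page.
p1 = Profile("chatbot-prod", cycle=1, in_scope=["MAP 1", "MEASURE 2"])
p2 = rebaseline(p1, ["drift-report-q1", "pentest-q1"])
```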

Failure modes the framework is designed to surface

AI RMF is forgiving on the choice of controls, the choice of measurement methods, and the choice of risk tolerance. It is unforgiving about a small number of patterns that make the framework cosmetic rather than operational. The patterns below are the ones that recur across adoptions and that erode the year-over-year continuity the framework relies on.

  • Treating AI RMF as an annual audit deliverable rather than an operating posture. The framework is voluntary and outcome-based, but the value comes from running GOVERN, MAP, MEASURE, and MANAGE on a continuous cadence rather than reconstructing the four functions from a year-end notebook. Programmes that build the AI RMF artefacts only when an auditor asks lose the year-over-year continuity the Profile model relies on.
  • Conflating GOVERN with the policy document. GOVERN is cross-cutting and operational. A signed policy that does not produce decisions, evidence, named owners, training records, supplier reviews, and leadership oversight minutes is not GOVERN; it is a document. The framework is unforgiving about this conflation because every other function reads against the operating record GOVERN produces.
  • Treating MAP as a one-off categorisation. MAP refreshes when the AI system changes context (a new use case, a new stakeholder group, a regulatory addition, a new third-party model, a new retrieval source, a tooling change). Programmes that map once at deployment and never revisit MAP inherit a categorisation that no longer matches the system in production.
  • Replacing MEASURE evidence with vendor claims. The Playbook explicitly expects measurable evidence: test results, drift metrics, fairness diagnostics, security test outcomes, privacy reviews. Programmes that substitute supplier marketing for measurement carry an evidence pack that fails on any close audit read, particularly under the GenAI Profile where the measurement expectations are sharper.
  • Skipping the Profile. The Profile is the planning instrument that translates the framework into the AI system in question. Programmes that try to operate AI RMF without a Use-Case Profile end up with generic disposition language and lose the targeted prioritisation the framework is designed for.
  • Running AI risk management parallel to information security. The trustworthy characteristics include "secure and resilient", and the framework explicitly cross-references to security risk management. Programmes that stand up an AI risk register that does not feed and read from the wider security programme duplicate evidence, miss findings, and produce contradictory dispositions across the two registers.

Evidence the framework expects to see, organised against the functions

The AI RMF evidence pack reads well when it is built as a side effect of the operating work rather than reconstructed at year-end. The minimum set below maps to the subcategories examiners and customer auditors most often ask against, and the same artefacts feed parallel reads under ISO/IEC 42001, OWASP LLM Top 10, MITRE ATLAS, ISO 27001, and SOC 2 when the underlying record is structured. A minimal completeness check over this set appears after the list.

  • AI risk management policy with named owner, review cadence, and explicit relationship to the wider information security policy stack (GOVERN 1)
  • Named roles register covering AI risk owner, AI safety lead, AI security lead, ML engineering lead, fairness or ethics lead, and the executive sponsor, with training requirements per role documented (GOVERN 2)
  • AI system inventory naming each in-scope AI system, the intended purpose, the deployment context, the categorisation, the third-party data and model sources, and the affected stakeholders (MAP 1, MAP 2, MAP 3)
  • Risk identification record per AI system covering benefits, costs, environmental impacts, and the harms the system can produce on individuals, communities, and the organisation (MAP 5)
  • Trustworthy characteristics evaluation record per AI system covering validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with harmful bias managed (MEASURE 2)
  • Test results, monitoring metrics, and drift detection evidence per AI system, refreshed on a documented cadence (MEASURE 4)
  • Risk treatment register per AI system naming the disposition (accept, mitigate, transfer, escalate), the owner, the review cadence, and the closure or re-evaluation evidence (MANAGE 1, MANAGE 2)
  • AI incident and adverse event log covering each event from declaration through to closure, with named decisions and the post-incident improvements logged back to GOVERN (MANAGE 4)
  • Third-party and supply chain AI risk record covering the model supplier, the training data supplier, the inference platform, the embedding service, the agent toolchain, and the downstream integration points (GOVERN 6, MANAGE 3)
  • Stakeholder engagement and feedback record covering internal stakeholders, external stakeholders, and impacted communities, with the cadence and the resulting changes documented (GOVERN 5)
  • Profile artefact (Use-Case Profile or sector Profile) that translates the AI RMF subcategories into the specific operating story for the AI system in question, with evidence references per subcategory
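
The completeness check mentioned above, assuming the evidence pack is held as a keyed artefact store. The artefact keys and the category mapping mirror the list; the names themselves are illustrative.

```python
# Artefact keys mapped to the categories they evidence, mirroring the list.
REQUIRED_ARTEFACTS = {
    "ai_risk_policy": ["GOVERN 1"],
    "named_roles_register": ["GOVERN 2"],
    "ai_system_inventory": ["MAP 1", "MAP 2", "MAP 3"],
    "risk_identification_record": ["MAP 5"],
    "characteristics_evaluation": ["MEASURE 2"],
    "metrics_and_drift_evidence": ["MEASURE 4"],
    "risk_treatment_register": ["MANAGE 1", "MANAGE 2"],
    "incident_log": ["MANAGE 4"],
    "third_party_risk_record": ["GOVERN 6", "MANAGE 3"],
    "stakeholder_engagement_record": ["GOVERN 5"],
}

def missing_artefacts(pack: dict[str, str]) -> dict[str, list[str]]:
    """Artefacts absent from the pack, with the categories left uncovered."""
    return {name: cats for name, cats in REQUIRED_ARTEFACTS.items()
            if name not in pack}
```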

How AI RMF relates to ISO/IEC 42001, OWASP LLM Top 10, MITRE ATLAS, and the wider regime

AI RMF is an outcome and risk-management framework, not a control catalogue, certification scheme, or regulation. It composes with the AI-specific and information-security frameworks programmes already operate against. The relationships below are the ones programmes most often need to read together.

NIST AI RMF vs ISO/IEC 42001

NIST AI RMF is a voluntary risk-management framework that produces an operating posture and an evidence pack. ISO/IEC 42001 is a certifiable AI management system standard. The two compose well: ISO/IEC 42001 is the management-system anchor a GRC function audits and certifies against; NIST AI RMF is the risk-management vocabulary the day-to-day work reads against. Programmes that operate both use AI RMF as the operating layer and ISO/IEC 42001 as the management-system layer.

NIST AI RMF vs OWASP LLM Top 10

OWASP LLM Top 10 is a community-maintained risk catalogue for LLM applications. NIST AI RMF is the risk management framework that organises the response. The LLM Top 10 maps cleanly into the AI RMF MAP and MEASURE functions: each LLM Top 10 entry surfaces in MAP as a categorised risk and in MEASURE as a security-and-resilience characteristic test outcome. Programmes use the LLM Top 10 as the threat catalogue and AI RMF as the framework that organises the dispositions.
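
As a hedged sketch of that mapping, the crosswalk below places one catalogue entry in MAP (where the risk is identified) and MEASURE (where the test outcome lands). The entry name is from the OWASP list; the category choices and the mapping shape are illustrative, not a published crosswalk.

```python
# One row per catalogue entry: where the risk is identified (MAP) and
# where the test outcome lands (MEASURE). Category choices illustrative.
LLM_TOP10_CROSSWALK = {
    "LLM01: Prompt Injection": {"map": "MAP 5", "measure": "MEASURE 2"},
    # ... one row per remaining Top 10 entry, same shape
}

def rmf_placement(entry: str) -> tuple[str, str]:
    """Where a catalogue entry is identified and where it is tested."""
    row = LLM_TOP10_CROSSWALK[entry]
    return row["map"], row["measure"]
```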

NIST AI RMF vs MITRE ATLAS

MITRE ATLAS catalogues adversarial tactics and techniques against ML and AI systems, modelled on the MITRE ATT&CK pattern. AI RMF references ATLAS as a source for adversarial threat modelling. Programmes use ATLAS as the adversarial tactics catalogue inside MAP (threat identification) and MEASURE (security and resilience evaluation), with AI RMF as the framework that places the ATLAS-driven evidence into the broader risk-management narrative.

NIST AI RMF vs Executive Order 14110 and OMB M-24-10

US Executive Order 14110 (October 2023) and OMB Memorandum M-24-10 (March 2024) reference NIST AI RMF as the framework federal agencies use to manage AI risk. Federal contractors and federally regulated programmes already use AI RMF as the operating reference. Executive Order 14110 was rescinded in January 2025, changing the executive-order landscape, but AI RMF remains the technical reference NIST publishes and the framework most enterprise procurement and audit cycles read against.

NIST AI RMF vs the EU AI Act

The EU AI Act is a regulation with binding requirements on providers, deployers, importers, and distributors of AI systems placed on the EU market. NIST AI RMF is a voluntary framework. The two compose: the AI Act requires risk management, data governance, technical documentation, transparency, human oversight, accuracy and robustness, and cybersecurity, all of which read cleanly against NIST AI RMF subcategories. Programmes operating against the AI Act commonly use AI RMF as the operating framework that produces the evidence the AI Act articles demand.

NIST AI RMF vs ISO 27001 and SOC 2

ISO 27001 and SOC 2 cover information security management at the organisational level. AI RMF covers AI-specific risk management. The trustworthy "secure and resilient" characteristic in AI RMF reads directly against ISO 27001 Annex A and SOC 2 CC controls; programmes already operating ISO 27001 or SOC 2 hold material evidence that satisfies the security-and-resilience side of MEASURE without rebuilding the bundle.

Where SecPortal fits in an AI RMF programme

SecPortal is the operating layer for the AI RMF cycle, not a replacement for the NIST framework, the GenAI Profile, the Playbook, or the underlying control catalogues the programme reads against. The platform handles the GOVERN baseline, the per-system MAP, the MEASURE evidence, the MANAGE disposition record, and the leadership reporting so the cycle runs as a structured workflow rather than a slide deck refreshed quarterly. The same workspace that hosts the Profile work hosts the SAST, SCA, authenticated DAST, external scanning, and pentest evidence the secure-and-resilient characteristic consumes, so the line from artefact to outcome stays traceable.

  • Engagement management dedicated to the AI RMF cycle, with phases (GOVERN baseline, per-system MAP, MEASURE evidence collection, MANAGE disposition, re-baseline) tracked as workstreams rather than as one document stitched together at the end
  • Findings management with CVSS 3.1 scoring, structured fields, and tags so AI-specific findings (prompt injection, jailbreak, training data exposure, model extraction, RAG poisoning, agent privilege escalation, output handling regression, unbounded consumption) sit alongside classical AppSec findings on the same defensible record
  • Compliance tracking that maps the same evidence pack across NIST AI RMF subcategories, the GenAI Profile actions, ISO/IEC 42001 management-system clauses, OWASP LLM Top 10 categories, MITRE ATLAS techniques, ISO 27001 Annex A, and SOC 2 trust services criteria, so the cross-framework footprint reads from a single source rather than a manually reconciled spreadsheet stack
  • Document management for the AI risk policy, the AI system inventory, the per-system Use-Case Profile, the trustworthy-characteristics evaluation record, the risk treatment register, the AI incident log, the third-party AI risk record, and the stakeholder engagement evidence
  • AI report generation that turns the engagement evidence and the Profile gap-analysis output into a structured leadership report and a board-ready summary covering the GOVERN posture, the MAP coverage, the MEASURE evidence, and the MANAGE disposition rate, without manual rewriting
  • Activity log that captures every state change to a finding, a Profile entry, an AI risk register item, or a trustworthy-characteristics evaluation, with timestamp and named user, so the trail is reproducible at audit time without a multi-team excavation
  • Code scanning (SAST and SCA) against connected repositories so the secure-and-resilient characteristic carries a coverage and finding record across the AI codebase, the model-serving layer, and the agent tooling, rather than asserted from a deployment narrative
  • Team management with role-based access for the AI risk owner, the AI safety lead, the AI security lead, the ML engineering lead, the fairness or ethics lead, and the executive sponsor named under GOVERN 2, so the engagement record carries the accountability structure the framework expects

The MEASURE and MANAGE functions are where most of the day-to-day AI security and risk work lives. AI-specific findings raised under the secure-and-resilient characteristic (prompt injection, jailbreak, training data exposure, model extraction, RAG poisoning, agent privilege escalation, output handling regression, unbounded consumption) need owners, deadlines, and verification evidence that walks back to the AI RMF subcategory they affect. The SDLC vulnerability handoff workflow keeps the line from design-time threat enumeration to in-production AI finding auditable. The control-gap remediation workflow records the disposition per gap that MANAGE 1 expects across the AI RMF subcategories. The vulnerability acceptance and exception management workflow records the documented exceptions GOVERN 1 and MANAGE 1 require. The control-mapping cross-framework crosswalk workflow carries the AI RMF to ISO/IEC 42001 to OWASP LLM Top 10 to ISO 27001 reconciliation a single AI risk record needs to feed.

For internal teams running the programme, the internal security teams workspace bundles the platform with the engagement structure AI RMF expects across the cycle. For the application security function that owns the per-feature security review and the LLM-aware testing programme, the AppSec teams workspace covers the same mechanics from the engineering-adjacent angle. For the product security function that owns the per-release posture across LLM and non-LLM features, the product security teams workspace carries the integrated finding lifecycle. For the GRC function that carries the cross-framework evidence pack and the GOVERN 6 supplier risk record, the GRC and compliance teams workspace covers the audit-side discipline that turns the artefacts into a portable evidence pack.

For security leaders carrying the GOVERN 2 executive sponsorship and the GOVERN 4 organisational commitment the framework expects, the CISOs and security leaders workspace covers the program-level reporting model that sits on top of the AI RMF operating record. For the prompt-injection class of finding most LLM applications carry through MEASURE 2, the prompt injection vulnerability page covers the detection, validation, and remediation reference. For the underlying compliance engine that maps the AI RMF subcategories to the wider framework footprint, the compliance tracking capability produces the cross-framework view a single AI risk record feeds. For the leadership reporting cadence GOVERN 4 carries into the operational rhythm, the continuous control monitoring cadence research covers the operating discipline that pairs with AI RMF MEASURE on a cadence rather than at annual reset.

Key control areas

SecPortal helps you track and manage compliance across these domains.

GOVERN: the cross-cutting function

GOVERN cultivates and embeds the risk management culture, policies, processes, and structures that the other three functions operate inside. It carries leadership accountability, AI risk strategy and tolerance, the named roles for AI development and oversight, the policy hierarchy that AI work runs against, supplier and third-party AI risk management, the workforce competence requirements, and the mechanism by which AI risk decisions are reviewed by leadership. GOVERN is cross-cutting rather than sequential: it is the input the rest of the framework reads against, and it is the function that turns AI risk management from a project into an operating posture.

MAP: context and risk identification

MAP establishes the context in which an AI system is being designed, developed, deployed, or evaluated and identifies the risks that context produces. The function names the intended purpose, the categorisation of the AI system, the deployment context, the affected stakeholders, the legal and regulatory expectations, the third-party data and model sources, the metrics that signal risk, and the impacts the system can produce on individuals, communities, and the organisation. MAP is the foundation; if MAP is shallow, MEASURE and MANAGE inherit the shallowness because they read against the categorisation MAP produced.

MEASURE: analysis and quantification

MEASURE applies tools, techniques, and methodologies to analyse, assess, benchmark, and monitor the AI risks identified in MAP. The function covers selecting the appropriate methods, evaluating trustworthy characteristics (validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, fairness with harmful bias managed), tracking metrics, validating results, and gathering feedback from affected communities. MEASURE produces the evidence the rest of the programme acts on; without it, MANAGE is asserting risk reduction rather than demonstrating it.

MANAGE: risk treatment and operational response

MANAGE allocates risk resources to mapped and measured risks on a regular basis. The function decides which risks the programme accepts, mitigates, transfers, or escalates; it documents incidents and adverse events, monitors third-party AI risks, plans for incident response, and feeds lessons learned back to GOVERN. MANAGE is where the operating decisions land and where the audit trail accumulates: the disposition per risk, the named owner, the review cadence, and the evidence that closure was real rather than asserted.

Trustworthy AI: the seven characteristics

AI RMF 1.0 defines trustworthy AI through seven characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. These characteristics are the criteria MEASURE evaluates against, the design principles MAP uses to identify risk, and the operating expectations MANAGE has to maintain. The characteristics are not independent; tradeoffs across them are explicit in the framework and the documented disposition is part of the audit evidence.

The Generative AI Profile (NIST AI 600-1)

NIST AI 600-1, published July 2024, is the cross-sectoral Profile for managing the risks of generative AI. It identifies twelve risks unique to or exacerbated by generative AI (CBRN information; confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; harmful bias and homogenisation; human-AI configuration; information integrity; information security; intellectual property; obscene, degrading, and abusive content; and value chain and component integration risks) and proposes actions an organisation can take across GOVERN, MAP, MEASURE, and MANAGE to address each. Programmes deploying or building on generative AI read AI RMF 1.0 alongside NIST AI 600-1 rather than as a substitute.

The Profile model

A Profile is a use-case-specific or sector-specific application of the framework. A Use-Case Profile articulates how a specific AI use case (a customer-service chatbot, a medical imaging classifier, an internal coding assistant, a fraud detection model, a generative AI assistant) reads the AI RMF functions and characteristics for the context. NIST publishes the GenAI Profile (NIST AI 600-1) and supports the development of cross-sector and sector-specific Profiles. A Profile is the planning instrument that turns the framework from a generic structure into a specific operating story for the AI system in question.
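
A minimal Use-Case Profile sketch, assuming the Profile is held as a per-system record of in-scope categories, actions, and evidence references. The system name, the actions, and the evidence labels are all illustrative.

```python
usecase_profile = {
    "system": "customer-service-chatbot",   # illustrative system
    "base_profile": "NIST AI 600-1",        # borrowed where it applies
    "entries": [
        {"category": "MAP 1",
         "action": "document intended purpose and deployment context",
         "evidence": "inventory#chatbot"},
        {"category": "MEASURE 2",
         "action": "prompt injection and output-handling test suite",
         "evidence": "pentest-q1#llm"},
        {"category": "MANAGE 1",
         "action": "disposition per finding with named owner and cadence",
         "evidence": "risk-register#chatbot"},
    ],
}
```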

The AI RMF Playbook

The Playbook is the companion implementation resource maintained by NIST that provides suggested actions, recommendations, references, and example documentation against each subcategory of the four core functions. The Playbook is voluntary and non-prescriptive; programmes pick the actions that fit their context, their AI system, and their risk tolerance. The Playbook is what most teams use when they need to translate a subcategory into a concrete piece of work.

Run a defensible AI RMF programme on one operating record

Hold the GOVERN baseline, the per-system MAP, MEASURE evidence, and MANAGE dispositions on one workspace, then carry the same record into ISO 27001, SOC 2, OWASP LLM Top 10, and ISO/IEC 42001 reads. Start free.

No credit card required. Free plan available forever.