NIST AI Risk Management Framework
Voluntary framework published by the US National Institute of Standards and Technology in January 2023 (AI RMF 1.0) to help organizations design, develop, deploy, and use AI systems in a trustworthy manner. Structured around four functions: Govern, Map, Measure, Manage. Complemented by the AI RMF Generative AI Profile (July 2024) addressing generative AI risks. Increasingly required by US federal procurement, state regulators, and major enterprises as a de facto standard.
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary, sector-agnostic framework published by the US National Institute of Standards and Technology in January 2023. It provides organizations with a common vocabulary and structured approach to identify, assess, and manage risks specific to AI systems throughout their lifecycle.
The framework is structured around four core functions: Govern (cultivate a culture of risk management, define policies, accountabilities, oversight), Map (establish context, categorize the AI system, identify risks and benefits), Measure (analyze, assess, benchmark risks using quantitative and qualitative methods), and Manage (allocate resources, prioritize, respond to risks, monitor over time). Each function decomposes into categories and subcategories with associated practices.
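The four functions and their decomposition into categories and artifacts can be modeled as a simple tracking structure. This is an illustrative sketch only: the focus descriptions and example artifacts below are assumptions for demonstration, not official NIST category or subcategory identifiers.

```python
# Illustrative sketch (not official NIST identifiers): the four AI RMF core
# functions as a structure an organization might use to track which
# practices and deliverables each function produces.
AI_RMF_FUNCTIONS = {
    "Govern": {
        "focus": "risk culture, policies, accountabilities, oversight",
        "example_artifacts": ["AI policy", "roles and responsibilities matrix"],
    },
    "Map": {
        "focus": "context, system categorization, risks and benefits",
        "example_artifacts": ["use-case inventory", "risk register entry"],
    },
    "Measure": {
        "focus": "quantitative and qualitative risk assessment",
        "example_artifacts": ["test plan", "benchmark results"],
    },
    "Manage": {
        "focus": "resource allocation, risk response, monitoring",
        "example_artifacts": ["mitigation plan", "monitoring dashboard"],
    },
}

def artifacts_for(function_name: str) -> list[str]:
    """Return the example artifacts tracked under a given function."""
    return AI_RMF_FUNCTIONS[function_name]["example_artifacts"]

print(artifacts_for("Measure"))  # ['test plan', 'benchmark results']
```

In practice, each entry would link to the framework's actual category and subcategory IDs and to the evidence collected for them.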
The original AI RMF 1.0 is supplemented by the AI RMF Generative AI Profile (NIST AI 600-1), published in July 2024, which addresses risks specific to generative AI systems: confabulation, data leakage, intellectual property exposure, environmental impact, value chain integrity, and information security risks. It is among the first authoritative guidance documents on managing GenAI risks at production scale.
Although technically voluntary, AI RMF has become a de facto standard in the US: it underpins federal procurement guidance under OMB Memo M-24-10 (March 2024) implementing Executive Order 14110, is referenced by state AI laws (Colorado AI Act explicitly), and is increasingly required by major enterprise procurement. International alignment with ISO/IEC 42001:2023 is strong.
The development of AI RMF was authorized by the National AI Initiative Act of 2020, which directed NIST to develop a voluntary risk management framework for trustworthy AI. NIST conducted an extensive multi-stakeholder process from July 2021 through January 2023, with multiple public drafts, workshops, and feedback periods involving industry, academia, civil society, and federal agencies.
AI RMF 1.0 was released on 26 January 2023, alongside a Playbook (practical implementation guidance) and a Roadmap. The framework was rapidly adopted by federal agencies after President Biden's Executive Order 14110 (October 2023) on safe, secure, and trustworthy AI directed federal use of NIST guidance. Although EO 14110 was revoked by Executive Order 14179 (January 2025), the AI RMF itself remains in use and continues to be developed.
The Generative AI Profile (NIST AI 600-1) was developed in 2023-2024 in response to the rapid mainstreaming of generative AI. NIST is also developing additional profiles for high-stakes domains (healthcare, autonomous systems) and for foundation model evaluation.
For a US-based or US-selling enterprise, AI RMF is the baseline AI governance reference. Organizations adopting it gain three benefits: structured language to discuss AI risk with boards and regulators, alignment with federal procurement requirements, and a path that maps cleanly to international frameworks (EU AI Act, ISO/IEC 42001).
The cost of adoption is moderate compared to a regulatory requirement: AI RMF is voluntary and prescribes outcomes more than specific controls. Organizations can phase implementation by AI use case, prioritizing high-stakes systems first. Tooling support has improved significantly with the GenAI Profile, lifecycle assurance vendors (Credo AI, Holistic AI, Fairly AI), and large consulting firm methodologies.
The strategic risk of not adopting AI RMF is increasingly real: large enterprise customers (financial services, healthcare, utilities) now ask vendors to demonstrate alignment in due diligence; federal contractors must evidence AI RMF practices; and AI insurance underwriting often references AI RMF as a baseline.
For our US-targeted AI engagements, we use AI RMF as the governance scaffolding and combine it with the EU AI Act, ISO/IEC 42001, and sector-specific frameworks (HIPAA for health, FFIEC for financial services). The four functions (Govern, Map, Measure, Manage) define the rhythm of our compliance work; each function delivers identifiable artifacts.
We pay particular attention to the Measure function, often the weakest link: what do "fair", "robust", and "explainable" mean for this specific system, measured against which test datasets, and with what acceptable thresholds? This is where AI projects either build defensible governance or accumulate hidden risk.
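Turning "fair" into a measurable criterion might look like the following sketch. The metric chosen (demographic parity difference), the 0.10 threshold, and the test data are illustrative assumptions, not NIST-mandated values; the point is that metric, dataset, and threshold are fixed explicitly before testing.

```python
# Hedged sketch: one concrete instantiation of the Measure function.
# Metric and threshold are illustrative choices, agreed with stakeholders
# up front, not values prescribed by the AI RMF.

def demographic_parity_difference(outcomes: dict[str, list[int]]) -> float:
    """Max gap in positive-outcome rates across groups (0/1 decisions)."""
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)

# Hypothetical per-group binary decisions from the system under test.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 positive
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3/8 positive
}

THRESHOLD = 0.10  # acceptable gap, documented in the test plan
gap = demographic_parity_difference(decisions)
print(f"gap={gap:.3f}, within threshold: {gap <= THRESHOLD}")
# gap=0.250, within threshold: False
```

A failing result like this one feeds the Manage function: the gap is logged, a mitigation is chosen, and the test is re-run before deployment.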
For generative AI engagements, we apply the Generative AI Profile from day one, with explicit handling of confabulation (RAG with source citation), data leakage (retention controls, output filters), intellectual property (training data audit, output watermarking where appropriate), and information security (prompt injection defenses, output sanitization).
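Two of these controls can be sketched in a few lines: an input-side heuristic for prompt-injection phrasing and an output-side redaction filter. The pattern lists are illustrative placeholders; real deployments layer many more defenses than a regex check.

```python
# Hedged sketch of two GenAI Profile controls. Pattern lists are
# illustrative assumptions; production systems need layered defenses,
# not a single regex pass.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]
SENSITIVE_MARKERS = ["INTERNAL-ONLY", "api_key="]

def flag_injection(user_input: str) -> bool:
    """Heuristic check for common prompt-injection phrasing in user input."""
    return any(re.search(p, user_input, re.IGNORECASE)
               for p in INJECTION_PATTERNS)

def sanitize_output(text: str) -> str:
    """Redact known-sensitive markers before a response leaves the system."""
    for marker in SENSITIVE_MARKERS:
        text = text.replace(marker, "[REDACTED]")
    return text

print(flag_injection("Please ignore previous instructions"))  # True
print(sanitize_output("config: api_key=abc123"))
```

Flagged inputs and redaction events are logged as Measure/Manage evidence, so the controls themselves stay auditable.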