🇺🇸 United States · Voluntary Framework

NIST AI RMF

NIST AI Risk Management Framework 1.0

Published January 2023
Chapter 01

Overview & Purpose

Understanding the AI Risk Management Framework

What is the NIST AI RMF?

The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023 as NIST AI 100-1, is a voluntary set of guidelines designed to help organizations manage risks associated with AI systems. Unlike prescriptive regulations, the AI RMF provides a flexible, technology-agnostic, and sector-neutral approach that any organization — regardless of size or industry — can adopt to build trustworthy AI systems.

The framework was developed through extensive multi-stakeholder collaboration, drawing input from the private sector, academia, civil society, and government. It is designed to be used alongside existing risk management processes and can be integrated into an organization's broader enterprise risk management strategy.

Scope and Applicability

The AI RMF applies to all stages of the AI lifecycle — from design and development through deployment, use, and eventual decommissioning. It is intended for use by AI system designers, developers, deployers, evaluators, and governance bodies. The framework is deliberately broad: it does not prescribe specific technical solutions but instead encourages organizations to establish robust processes for identifying, assessing, and mitigating AI-related risks.

Importantly, the AI RMF is not a compliance checklist. It is a living document meant to evolve alongside AI technology and the understanding of AI risks. Organizations are encouraged to adapt the framework to their specific context, risk tolerance, and operational needs.

Seven Characteristics of Trustworthy AI

The AI RMF identifies seven key characteristics that define trustworthy AI systems. These characteristics are interconnected and should be considered holistically (a small code representation follows the list):

**1. Valid and Reliable** — AI systems should perform as intended and produce consistent, accurate results under expected conditions. Validation and reliability testing should occur throughout the AI lifecycle.
**2. Safe** — AI systems should not endanger human life, health, property, or the environment. Safety considerations should be integrated from the earliest stages of design.
**3. Secure and Resilient** — AI systems should withstand adversarial attacks, unexpected inputs, and environmental changes. Security measures should protect against data poisoning, model theft, and adversarial manipulation.
**4. Accountable and Transparent** — Organizations should be able to explain how AI systems work, who is responsible for their outcomes, and how decisions are made. Transparency enables accountability.
**5. Explainable and Interpretable** — AI system outputs should be understandable to relevant stakeholders. The level of explainability should be appropriate to the context and the potential impact of the AI system's decisions.
**6. Privacy-Enhanced** — AI systems should protect individual privacy throughout the data lifecycle. Privacy-by-design principles should guide data collection, use, and retention.
**7. Fair with Harmful Bias Managed** — AI systems should be designed and deployed to minimize harmful biases. Fairness considerations should address both statistical bias and systemic discrimination.
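
Purely as an illustration, the seven characteristics can be captured as a small enumeration that internal assessment tooling might tag findings with. This is a sketch in Python; the class and member names are our own, not framework terminology.

```python
from enum import Enum

class TrustworthyCharacteristic(Enum):
    """The seven trustworthy AI characteristics from NIST AI RMF 1.0."""
    VALID_AND_RELIABLE = "valid_and_reliable"
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure_and_resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "accountable_and_transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable_and_interpretable"
    PRIVACY_ENHANCED = "privacy_enhanced"
    FAIR_WITH_BIAS_MANAGED = "fair_with_harmful_bias_managed"
```

Tagging each audit finding or test result with one of these values makes it easy to report coverage across all seven characteristics rather than only the ones a team habitually tests.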
Chapter 02

GOVERN

Cultivating a Culture of AI Risk Management

The Govern Function

The GOVERN function is the cross-cutting foundation of the AI RMF. It establishes the organizational culture, policies, and structures necessary for effective AI risk management. Unlike the other three functions (Map, Measure, Manage), GOVERN is not sequential — it permeates and supports all other activities.

Effective governance ensures that AI risk management is not an afterthought but is embedded into the organization's DNA. This includes establishing clear accountability structures, defining risk tolerances, ensuring workforce diversity, and maintaining robust oversight of third-party AI systems.

Govern 1: Policies and Procedures

Organizations should establish comprehensive policies, processes, procedures, and practices for AI risk management. This includes:

**Legal and Regulatory Awareness (1.1)** — Understanding and documenting all applicable legal and regulatory requirements related to AI systems.
**Trustworthy AI Integration (1.2)** — Incorporating the seven characteristics of trustworthy AI into organizational policies and decision-making processes.
**Risk Tolerance (1.3)** — Establishing clear processes for determining and communicating the organization's AI risk tolerance levels.
**Transparency (1.4)** — Creating transparent risk management policies that stakeholders can understand and evaluate.
**Monitoring and Review (1.5)** — Planning for ongoing monitoring and periodic review of AI systems and risk management processes.
**AI System Inventory (1.6)** — Maintaining mechanisms to inventory and track all AI systems within the organization (see the record sketch after this list).
**Decommissioning (1.7)** — Establishing processes for the responsible decommissioning of AI systems when they are no longer needed or appropriate.
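
As a sketch of the inventory mechanism in Govern 1.6, assuming Python 3.10+ and a dataclass-based record: every field name here is illustrative rather than prescribed by the framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organizational AI system inventory (Govern 1.6)."""
    system_id: str                     # unique internal identifier
    name: str
    owner: str                         # accountable person or team (Govern 2.1)
    purpose: str                       # intended purpose and deployment context
    lifecycle_stage: str               # e.g. "design", "deployed", "decommissioned"
    third_party_components: list[str] = field(default_factory=list)  # Govern 6
    last_reviewed: date | None = None  # supports periodic review (Govern 1.5)

record = AISystemRecord(
    system_id="SYS-042",
    name="Resume screening assistant",
    owner="Talent Acquisition Platform Team",
    purpose="Rank inbound applications for recruiter review",
    lifecycle_stage="deployed",
    third_party_components=["hosted-llm-api"],
)
```

Keeping the lifecycle stage and last-review date on every record is what lets the inventory drive Govern 1.5 (monitoring and review) and Govern 1.7 (decommissioning) rather than being a static list.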

Govern 2: Accountability Structures

Clear accountability is essential for effective AI risk management:

**Roles and Responsibilities (2.1)** — Documenting who is responsible for what aspects of AI risk management, from development through deployment and monitoring.
**Training (2.2)** — Ensuring that all personnel involved in AI risk management receive appropriate training on their responsibilities and the organization's AI policies.
**Executive Leadership (2.3)** — Establishing executive-level responsibility for AI risk management, ensuring that senior leadership is engaged and accountable.

Govern 3–6: Diversity, Culture, Engagement, and Third Parties

**Workforce Diversity (Govern 3)** — Organizations should ensure diverse decision-making teams that reflect the populations affected by AI systems. Accessibility policies should be in place to ensure equitable participation.
**Organizational Culture (Govern 4)** — Practices should be monitored for consistency with AI risk management goals. Teams should be committed to a culture of safety, and organizational procedures should cover the full AI lifecycle.
**Stakeholder Engagement (Govern 5)** — Organizations should maintain ongoing engagement with relevant AI actors and establish mechanisms for feedback from external stakeholders, including affected communities.
**Third-Party Oversight (Govern 6)** — Policies should address oversight of third-party AI entities, including contingency processes for third-party failures or changes in service.
Chapter 03

MAP

Identifying and Contextualizing AI Risks

The Map Function

The MAP function ensures that the context in which an AI system operates is thoroughly understood and that risks related to that context are identified. Mapping is a foundational step: without a clear understanding of what an AI system is intended to do, who it affects, and what could go wrong, effective risk measurement and management are impossible.

Mapping should be an ongoing process, not a one-time exercise. As AI systems evolve, are deployed in new contexts, or encounter new data, the risk landscape changes and must be re-evaluated.

Map 1: Establishing Context

The first step in mapping is establishing a thorough understanding of the AI system's context:

**Intended Purposes (1.1)** — Document the AI system's intended purposes, potentially beneficial uses, and the specific context in which it will be deployed.
**Interdisciplinary Teams (1.2)** — Identify and engage interdisciplinary AI actors who can provide diverse perspectives on the system's risks and benefits.
**Cost-Benefit Assessment (1.3)** — Conduct a thorough assessment of the AI system's benefits versus its costs and risks, including non-financial impacts.
**System Requirements (1.4)** — Analyze the technical, operational, and organizational requirements for the AI system.
**Risk Tolerances (1.5)** — Determine organizational risk tolerances specific to the AI system and its deployment context.
**Known Limitations (1.6)** — Document system requirements and known limitations, including conditions under which the system may not perform as intended.

Map 2–5: Categorization, Benefits, Components, and Likelihood

**AI System Categorization (Map 2)** — Characterize the AI system's intended tasks, methods, and data. Map information about third-party entities involved. Consider scientific integrity and testing, evaluation, verification, and validation (TEVV) requirements.
**Capabilities and Benefits (Map 3)** — Document AI capabilities, targeted usage, goals, and expected benefits. Assess potential costs and risks of both the AI system and non-AI alternatives. Document the targeted application scope.
**Component-Level Risk Mapping (Map 4)** — Map risks and benefits for all system components. Identify internal risk controls and approaches for managing technology-specific risks.
**Likelihood and Magnitude (Map 5)** — Assess the likelihood and magnitude of each identified risk. Establish practices for identifying and tracking risks over time, including emerging risks that may not have been anticipated during initial design (a simple scoring helper is sketched below).
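
To make the Map 5 assessment concrete, here is a minimal scoring helper that combines likelihood and magnitude ratings. The 1-to-5 scales and the multiplicative scheme are a common risk-matrix convention, not something the AI RMF mandates.

```python
def risk_score(likelihood: int, magnitude: int) -> int:
    """Combine likelihood and magnitude (each rated 1-5) into one score.

    A simple multiplicative scheme: 1 (rare, negligible impact) up to
    25 (almost certain, severe impact). Organizations should calibrate
    the scales to their own risk tolerances (Map 1.5).
    """
    if not (1 <= likelihood <= 5 and 1 <= magnitude <= 5):
        raise ValueError("likelihood and magnitude must be rated 1-5")
    return likelihood * magnitude

# Example: a moderate-likelihood (3), high-magnitude (4) risk scores 12.
print(risk_score(3, 4))  # 12
```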
Chapter 04

MEASURE

Assessing and Analyzing AI Risks

The Measure Function

The MEASURE function focuses on the quantitative and qualitative assessment of identified AI risks. It ensures that organizations have appropriate methods, metrics, and processes for evaluating the trustworthiness of their AI systems. Measurement should be continuous, not just a pre-deployment activity.

Effective measurement requires both technical expertise and domain knowledge. It should involve internal and external experts, use validated methodologies, and produce results that are meaningful and actionable for decision-makers.

Measure 1: Methods and Metrics

**Measurement Approaches (1.1)** — Enumerate appropriate methods and metrics for measuring AI risks, including both quantitative metrics (accuracy, precision, recall) and qualitative assessments (stakeholder feedback, expert review); a sketch computing the quantitative examples follows this list.
**Metric Evaluation (1.2)** — Evaluate the appropriateness and effectiveness of chosen metrics. Ensure that metrics actually capture the risks they are intended to measure.
**Expert Consultation (1.3)** — Consult both internal and external experts to validate measurement approaches and interpret results.
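
To ground the quantitative metrics named in Measurement Approaches (1.1), a self-contained sketch that computes accuracy, precision, and recall from binary confusion-matrix counts. The definitions are standard; the function name is ours.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Standard metrics from binary confusion-matrix counts.

    tp/fp/tn/fn = true/false positives and true/false negatives.
    """
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

# Example: 80 true positives, 10 false positives,
# 95 true negatives, 15 false negatives.
print(classification_metrics(80, 10, 95, 15))
# {'accuracy': 0.875, 'precision': 0.888..., 'recall': 0.842...}
```

Note that these metrics answer different questions, which is exactly why Metric Evaluation (1.2) matters: a system can score high accuracy while its recall on a rare but high-impact class is poor.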

Measure 2: Trustworthiness Evaluation

AI systems should be evaluated against all seven trustworthy AI characteristics:

**Testing Methodologies (2.1)** — Develop and apply comprehensive test sets, metrics, and testing methodologies.
**Human Subject Evaluations (2.2)** — Conduct evaluations involving human subjects where appropriate, particularly for systems that interact with or affect people.
**Performance Criteria (2.3)** — Measure AI system performance against established assurance criteria.
**Deployment Monitoring (2.4)** — Monitor the functionality and behavior of deployed AI systems in real-world conditions.
**Safety Assessment (2.5)** — Assess AI systems for safety risks, including potential for physical harm, psychological harm, or environmental damage.
**Fairness and Bias (2.6)** — Evaluate AI systems for fairness and bias across relevant demographic groups and use cases.
**Security and Resilience (2.7)** — Test AI systems for security vulnerabilities and resilience to adversarial attacks.
**Robustness (2.8)** — Evaluate robustness and predictability under various conditions, including edge cases.
**Supply Chain Testing (2.9)** — Assess the sufficiency of value chain partner testing and evaluation.
**Privacy (2.10)** — Evaluate the privacy risk of AI systems, including data collection, retention, and sharing practices.
**Environmental Impact (2.12)** — Document the environmental impact of AI systems, including energy consumption and carbon footprint.
**Risk Management Effectiveness (2.13)** — Document the overall effectiveness of the organization's AI risk management processes.

Measure 3–4: Tracking and Results

**Risk Tracking (Measure 3)** — Establish mechanisms for tracking identified AI risks over time. Define approaches, personnel, and frequency for ongoing monitoring. Connect risk tracking to deployment decisions and establish feedback processes for end users (a register sketch follows this list).
**Measurement Results (Measure 4)** — Apply measurement approaches consistently and document results, including confidence levels and limitations. Ensure that measurement results are communicated to relevant stakeholders and inform decision-making.
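
One possible shape for the tracking mechanism Measure 3 describes: an append-only register that records dated scores per risk, so trends over time can inform deployment decisions. The structure below is a sketch, not a framework requirement.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RiskEntry:
    """A dated observation of one tracked risk (Measure 3)."""
    risk_id: str
    score: int   # e.g. the likelihood * magnitude score from Map 5
    note: str
    observed_at: datetime = field(default_factory=datetime.now)

class RiskRegister:
    """Append-only history per risk; the latest score drives decisions."""
    def __init__(self) -> None:
        self._history: dict[str, list[RiskEntry]] = {}

    def record(self, entry: RiskEntry) -> None:
        self._history.setdefault(entry.risk_id, []).append(entry)

    def latest_score(self, risk_id: str) -> int:
        return self._history[risk_id][-1].score

register = RiskRegister()
register.record(RiskEntry("R-001", score=12, note="bias gap on holdout slice"))
print(register.latest_score("R-001"))  # 12
```

Because entries are never overwritten, the register doubles as documentation of confidence levels and limitations over time, which supports the communication obligations in Measure 4.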
Chapter 05

MANAGE

Prioritizing and Acting on AI Risks

The Manage Function

The MANAGE function ensures that identified and measured risks are prioritized and acted upon based on their projected impact. It covers the full spectrum of risk response — from acceptance and mitigation to transfer and avoidance. Effective management requires adequate resources, clear decision-making processes, and robust incident response capabilities.

Manage 1: Risk Prioritization and Response

**Purpose Assessment (1.1)** — Determine whether the AI system achieves its intended purpose and whether the benefits outweigh the risks.
**Risk Prioritization (1.2)** — Prioritize treatment of documented AI risks based on their likelihood, magnitude, and potential impact (see the sketch after this list).
**Response Development (1.3)** — Develop specific responses to identified and measured AI risks, including mitigation strategies, contingency plans, and escalation procedures.
**Documentation (1.4)** — Document all risk treatments, including response and recovery procedures, and ensure they are accessible to relevant personnel.
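
A minimal sketch of the prioritization step in Risk Prioritization (1.2), reusing the likelihood-by-magnitude idea from Map 5. The tie-breaking rule and field names are illustrative assumptions.

```python
risks = [
    {"id": "R-001", "likelihood": 3, "magnitude": 4},  # score 12
    {"id": "R-002", "likelihood": 5, "magnitude": 5},  # score 25
    {"id": "R-003", "likelihood": 2, "magnitude": 2},  # score 4
]

# Highest combined score first; ties break on magnitude alone so that
# severe-impact risks are never deferred behind frequent trivia.
prioritized = sorted(
    risks,
    key=lambda r: (r["likelihood"] * r["magnitude"], r["magnitude"]),
    reverse=True,
)
print([r["id"] for r in prioritized])  # ['R-002', 'R-001', 'R-003']
```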

Manage 2–4: Strategies, Third Parties, and Monitoring

**Resource Allocation (Manage 2)** — Ensure that adequate resources are allocated to manage AI risks. Establish mechanisms for deployment decisions, including go/no-go criteria (a gate sketch follows this list). Develop procedures for responding to and recovering from AI incidents. Monitor third-party AI risks.
**Third-Party Risk Management (Manage 3)** — Regularly monitor AI risks and benefits from third-party resources, including pre-trained models, APIs, and data sources. Establish contractual and technical safeguards for third-party AI dependencies.
**Continuous Improvement (Manage 4)** — Implement post-deployment monitoring plans for AI systems. Establish measurable activities for continual improvement of AI risk management processes. Regularly review and update risk management strategies based on new information, incidents, and evolving best practices.
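
To illustrate the go/no-go mechanism described under Resource Allocation (Manage 2), a hedged sketch in which deployment proceeds only if every gating criterion passes. The criteria shown are examples, not a framework-defined list.

```python
def deployment_gate(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Go/no-go decision: all criteria must pass; failures are reported."""
    failures = [name for name, passed in checks.items() if not passed]
    return (not failures, failures)

go, failures = deployment_gate({
    "residual_risk_within_tolerance": True,   # Map 1.5 / Manage 1.2
    "fairness_evaluation_signed_off": False,  # e.g. Measure fairness results
    "incident_response_plan_in_place": True,  # Manage 2 recovery procedures
})
print(go, failures)  # False ['fairness_evaluation_signed_off']
```

Returning the list of failing criteria, rather than a bare boolean, gives the documentation trail that Manage 1.4 asks for when a release is held back.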
Chapter 06

Implementation Guide

Practical Steps for Adopting the AI RMF

Getting Started with the AI RMF

Implementing the AI RMF does not require starting from scratch. Organizations should begin by assessing their current AI risk management practices and identifying gaps relative to the framework's four core functions. Key steps include:

**1. Conduct an AI Inventory** — Identify all AI systems currently in use or under development within the organization.
**2. Assess Current Practices** — Evaluate existing risk management processes against the GOVERN, MAP, MEASURE, and MANAGE functions.
**3. Identify Gaps** — Determine where current practices fall short of the framework's recommendations (a gap-scoring sketch follows these steps).
**4. Prioritize Actions** — Focus on the highest-risk AI systems and the most critical gaps first.
**5. Establish Governance** — Create or strengthen AI governance structures, including roles, responsibilities, and accountability mechanisms.
**6. Iterate and Improve** — The AI RMF is designed for continuous improvement. Regularly revisit and update your risk management practices as your AI portfolio and understanding of risks evolve.
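
A simple way to operationalize steps 2 through 4, sketched under the assumption that the organization self-rates its maturity per core function on a 0-to-3 scale; the scale and target are illustrative.

```python
# Self-assessed maturity per core function: 0 = absent, 3 = mature.
maturity = {"GOVERN": 2, "MAP": 1, "MEASURE": 0, "MANAGE": 1}

TARGET = 3  # aspire to full coverage of each function

# Largest gap first: these become the priorities for step 4.
gaps = sorted(maturity.items(), key=lambda kv: TARGET - kv[1], reverse=True)
for function, level in gaps:
    print(f"{function}: gap of {TARGET - level}")
# MEASURE: gap of 3, then MAP and MANAGE (2 each), then GOVERN (1)
```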

NIST AI RMF Playbook

NIST has published a companion AI RMF Playbook that provides more detailed, actionable guidance for each subcategory of the framework. The Playbook includes suggested actions, transparency notes, and references to related standards and resources.

The Playbook is available at the NIST AI Resource Center (airc.nist.gov) and is regularly updated with community input. Organizations are encouraged to use the Playbook alongside the framework to develop implementation plans tailored to their specific needs and context.

Relationship to Other Frameworks

The NIST AI RMF is designed to be compatible with and complementary to other AI governance frameworks:

**EU AI Act** — The AI RMF's risk-based approach aligns well with the EU AI Act's risk classification system. Organizations subject to the EU AI Act can use the AI RMF to help meet its requirements, particularly around risk assessment, documentation, and monitoring.
**ISO/IEC 42001** — While ISO 42001 is a certifiable management system standard and the AI RMF is voluntary guidance, they share many common elements. Organizations pursuing ISO 42001 certification can use the AI RMF to strengthen their risk management practices.
**OECD AI Principles** — The AI RMF's seven trustworthy AI characteristics map closely to the OECD's five AI principles, providing a more detailed implementation pathway for organizations committed to the OECD framework.