NIST AI Risk Management Framework 1.0
Understanding the AI Risk Management Framework
The framework was developed through extensive multi-stakeholder collaboration, drawing input from the private sector, academia, civil society, and government. It is designed to be used alongside existing risk management processes and can be integrated into an organization's broader enterprise risk management strategy.
Importantly, the AI RMF is not a compliance checklist. It is a living document meant to evolve alongside AI technology and the understanding of AI risks. Organizations are encouraged to adapt the framework to their specific context, risk tolerance, and operational needs.
The AI RMF identifies seven key characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. These characteristics are interconnected and should be considered holistically rather than in isolation.
Cultivating a Culture of AI Risk Management
The GOVERN function ensures that AI risk management is not an afterthought but is embedded throughout the organization's culture, policies, and day-to-day operations. This includes establishing clear accountability structures, defining risk tolerances, fostering workforce diversity, and maintaining robust oversight of third-party AI systems.
Organizations should establish comprehensive policies, processes, procedures, and practices for AI risk management, and keep them transparent, documented, and applied consistently across the AI lifecycle.
Clear accountability is essential for effective AI risk management: roles, responsibilities, and lines of authority for AI risks should be explicitly defined and communicated.
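As a minimal sketch of what this can look like in practice, the example below records accountability and risk-tolerance information in a simple AI system inventory. The field names and values are hypothetical assumptions for illustration, not a structure prescribed by the framework.

    # Hypothetical sketch: a minimal AI system inventory entry supporting the
    # GOVERN function. Field names and values are illustrative only.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AISystemRecord:
        name: str                      # system identifier
        accountable_owner: str         # person or role accountable for the system's risks
        intended_purpose: str          # documented intended use
        risk_tolerance: str            # e.g. "low", "moderate", "high"
        third_party_components: list[str] = field(default_factory=list)
        last_review: date | None = None

    # Example entry; all values are made up.
    inventory = [
        AISystemRecord(
            name="resume-screening-assistant",
            accountable_owner="Director of Talent Acquisition",
            intended_purpose="Rank inbound resumes for recruiter review",
            risk_tolerance="low",
            third_party_components=["vendor-embedding-model"],
            last_review=date(2024, 1, 15),
        )
    ]

An inventory like this gives oversight of third-party components a concrete home and makes it obvious when a system has no named, accountable owner.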
Identifying and Contextualizing AI Risks
The MAP function ensures that the context in which an AI system operates is thoroughly understood and that risks related to that context are identified. Mapping is a foundational step: without a clear understanding of what an AI system is intended to do, who it affects, and what could go wrong, effective risk measurement and management are impossible.
Mapping should be an ongoing process, not a one-time exercise. As AI systems evolve, are deployed in new contexts, or encounter new data, the risk landscape changes and must be re-evaluated.
The first step in mapping is establishing a thorough understanding of the AI system's context, including its intended purpose, the settings in which it will be deployed, the individuals and groups it may affect, and its known limitations.
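To make this concrete, the hypothetical sketch below captures a system's context as structured data that can be revisited whenever the deployment context or data changes. The fields and the re-mapping check are illustrative assumptions, not a NIST-prescribed schema.

    # Hypothetical sketch: recording AI system context for the MAP function.
    # Field names are illustrative, not a NIST-defined schema.
    from dataclasses import dataclass

    @dataclass
    class SystemContext:
        intended_purpose: str
        deployment_setting: str                  # where and how the system is used
        affected_stakeholders: list[str]         # who is impacted by its outputs
        known_limitations: list[str]             # documented assumptions and limits
        potential_negative_impacts: list[str]    # risks identified in this context

    def needs_remapping(old: SystemContext, new: SystemContext) -> bool:
        """Flag when the context has changed enough to warrant re-mapping risks."""
        return (
            old.intended_purpose != new.intended_purpose
            or old.deployment_setting != new.deployment_setting
            or set(old.affected_stakeholders) != set(new.affected_stakeholders)
        )

Keeping context as data rather than prose makes the "ongoing, not one-time" nature of mapping easier to operationalize: a change to any field can trigger a fresh risk review.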
Assessing and Analyzing AI Risks
The MEASURE function focuses on the quantitative and qualitative assessment of identified AI risks. It ensures that organizations have appropriate methods, metrics, and processes for evaluating the trustworthiness of their AI systems. Measurement should be continuous, not just a pre-deployment activity.
Effective measurement requires both technical expertise and domain knowledge. It should involve internal and external experts, use validated methodologies, and produce results that are meaningful and actionable for decision-makers.
AI systems should be evaluated against all seven trustworthy AI characteristics outlined above, using metrics and methods appropriate to each characteristic and to the system's context of use.
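As a hedged illustration of what continuous measurement can look like, the sketch below computes two example metrics, overall accuracy and a simple demographic parity gap as a fairness proxy, and checks them against organization-defined thresholds. The metric choices, function names, and threshold values are assumptions for illustration; real measurement plans should use validated methods selected for the specific system and context.

    # Hypothetical sketch: periodic measurement of example trustworthiness metrics.
    # Metric choices and thresholds are illustrative assumptions only.
    from collections import defaultdict

    def accuracy(y_true, y_pred):
        return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    def demographic_parity_gap(y_pred, groups):
        """Largest difference in positive-prediction rates across groups."""
        rates = defaultdict(list)
        for pred, group in zip(y_pred, groups):
            rates[group].append(pred)
        positive_rates = [sum(v) / len(v) for v in rates.values()]
        return max(positive_rates) - min(positive_rates)

    # Organization-defined thresholds (made-up values for illustration).
    THRESHOLDS = {"accuracy_min": 0.90, "parity_gap_max": 0.10}

    def evaluate(y_true, y_pred, groups):
        results = {
            "accuracy": accuracy(y_true, y_pred),
            "parity_gap": demographic_parity_gap(y_pred, groups),
        }
        results["within_tolerance"] = (
            results["accuracy"] >= THRESHOLDS["accuracy_min"]
            and results["parity_gap"] <= THRESHOLDS["parity_gap_max"]
        )
        return results

    # Toy example; in practice this would run on a schedule against fresh
    # production samples, with results reviewed by decision-makers.
    print(evaluate(y_true=[1, 0, 1, 1], y_pred=[1, 0, 0, 1], groups=["a", "a", "b", "b"]))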
Prioritizing and Acting on AI Risks
The MANAGE function entails allocating resources to the risks that have been mapped and measured, prioritizing them based on their assessed impact and likelihood, and acting on them on a regular basis. This includes deciding whether to mitigate, transfer, avoid, or accept individual risks in line with the risk tolerances established under governance, as well as implementing plans to respond to, recover from, and communicate about incidents.
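As a hypothetical sketch only, the snippet below ranks documented risks by a simple severity-times-likelihood score and flags those that exceed a risk tolerance so that treatment decisions can be prioritized. The scoring formula and tolerance value are assumptions for illustration; the framework does not prescribe a particular scheme.

    # Hypothetical sketch: prioritizing documented AI risks for treatment.
    # The scoring formula and tolerance value are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        severity: int      # 1 (negligible) .. 5 (severe), informed by measurement
        likelihood: int    # 1 (rare) .. 5 (frequent)
        treatment: str = "undecided"   # e.g. mitigate, transfer, avoid, accept

        @property
        def score(self) -> int:
            return self.severity * self.likelihood

    RISK_TOLERANCE = 6   # made-up threshold set under the GOVERN function

    def prioritize(risks: list[Risk]) -> list[Risk]:
        """Return risks ordered from highest to lowest score for review."""
        return sorted(risks, key=lambda r: r.score, reverse=True)

    risks = [
        Risk("Model drift degrades accuracy for new document types", 3, 4),
        Risk("Training data contains unconsented personal information", 5, 2),
        Risk("Minor latency in the explanations panel", 2, 2),
    ]

    for risk in prioritize(risks):
        flag = "ABOVE tolerance" if risk.score > RISK_TOLERANCE else "within tolerance"
        print(f"{risk.score:>2}  {flag:<16}  {risk.description}")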
Practical Steps for Adopting the AI RMF
Implementing the AI RMF does not require starting from scratch. Organizations can begin by assessing their current AI risk management practices, identifying gaps relative to the framework's four core functions (GOVERN, MAP, MEASURE, and MANAGE), and then prioritizing which gaps to close based on their risk tolerance and available resources.
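A simple starting point is a self-assessment that records, for each core function, whether current practices exist and where the gaps are. The sketch below is a hypothetical illustration; the status values and notes are assumptions, not the framework's official subcategories.

    # Hypothetical sketch: a lightweight gap assessment against the four core
    # functions. Status values and notes are illustrative assumptions.
    CORE_FUNCTIONS = ["GOVERN", "MAP", "MEASURE", "MANAGE"]

    # Example self-assessment; status might be "in place", "partial", or "missing".
    assessment = {
        "GOVERN": {"status": "partial", "notes": "Policies exist; no named AI risk owners"},
        "MAP": {"status": "missing", "notes": "No documented context or impact analysis"},
        "MEASURE": {"status": "partial", "notes": "Accuracy tracked; no fairness metrics"},
        "MANAGE": {"status": "missing", "notes": "No incident response plan for AI systems"},
    }

    def gaps(assessment: dict) -> list[str]:
        """Return the functions that still need work, in framework order."""
        return [f for f in CORE_FUNCTIONS if assessment.get(f, {}).get("status") != "in place"]

    print("Functions with gaps:", ", ".join(gaps(assessment)))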
NIST has also published a companion Playbook to the AI RMF (which itself is published as NIST AI 100-1) that provides more detailed, actionable guidance for each subcategory of the framework. The Playbook includes suggested actions, transparency and documentation guidance, and references to related standards and resources.
The Playbook is available at the NIST AI Resource Center (airc.nist.gov) and is regularly updated with community input. Organizations are encouraged to use the Playbook alongside the framework to develop implementation plans tailored to their specific needs and context.
The NIST AI RMF is designed to be compatible with and complementary to other AI governance frameworks and risk management standards, and NIST publishes crosswalks mapping the framework to related resources so organizations can align their AI RMF implementation with other obligations they already face.