OECD AI Principles
The Foundation of Global AI Governance
Adopted in 2019, the OECD AI Principles establish a foundation for responsible AI that has influenced virtually every major AI regulation and framework that followed, including the EU AI Act, the NIST AI RMF, and national AI strategies worldwide. They were updated in 2023 and 2024 to address generative AI and systems that continue to evolve after deployment.
Core Values for Responsible AI
AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being. Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet: augmenting human capabilities and enhancing creativity, advancing the inclusion of underrepresented populations, reducing economic, social, gender, and other inequalities, and protecting natural environments.
AI actors should respect the rule of law, human rights, and democratic values throughout the AI system lifecycle. These include freedom, dignity, autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.
AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context and consistent with the state of the art, to foster a general understanding of AI systems, to make stakeholders aware of their interactions with AI systems, to enable those affected by an AI system to understand the outcome, and to enable those adversely affected to challenge that outcome based on plain and easy-to-understand information.
AI systems should be robust, secure, and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk. AI actors should ensure traceability, including in relation to datasets, processes, and decisions made during the AI system lifecycle, to enable analysis of the AI system's outcomes and responses to inquiry.
AI actors should be accountable for the proper functioning of AI systems and for respect of the above principles, based on their roles, the context, and consistent with the state of the art. This accountability applies to the organizations and individuals developing, deploying, or operating AI systems alike.
Guidance for Governments
The OECD also provides five recommendations for governments to promote and implement the AI principles: