๐Ÿ›๏ธInternational (OECD + G20) ยท Principles

OECD AI Principles

Published 2019 (updated 2024)
Chapter 01

Overview & History

The Foundation of Global AI Governance

The OECD AI Principles

In May 2019, the Organisation for Economic Co-operation and Development (OECD) adopted the first intergovernmental standard on AI: the OECD Recommendation on Artificial Intelligence. These principles were subsequently endorsed by the G20, making them the most widely adopted international AI governance framework.

The OECD AI Principles establish a foundation for responsible AI that has influenced virtually every major AI regulation and framework that followed, including the EU AI Act, the NIST AI RMF, and national AI strategies worldwide. They were updated in 2023 and 2024 to address generative AI and systems that continue to evolve after deployment.

Chapter 02

The Five Principles

Core Values for Responsible AI

Principle 1: Inclusive Growth, Sustainable Development, and Well-Being

AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being. Stakeholders should proactively engage in the responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet. These outcomes include augmenting human capabilities and enhancing creativity, advancing the inclusion of underrepresented populations, reducing economic, social, gender, and other inequalities, and protecting natural environments.

Principle 2: Human Rights, Democracy, and Rule of Law

AI actors should respect the rule of law, human rights, and democratic values throughout the AI system lifecycle. These include freedom, dignity, autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.

Principle 3: Transparency and Explainability

AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context and consistent with the state of the art, to foster a general understanding of AI systems, make stakeholders aware of their interactions with AI systems, enable those affected by an AI system to understand the outcome, and enable those adversely affected to challenge the outcome on the basis of plain and easy-to-understand information.

Principle 4: Robustness, Security, and Safety

AI systems should be robust, secure, and safe throughout their entire lifecycle so that, under conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose an unreasonable safety risk. AI actors should ensure traceability, including in relation to datasets, processes, and decisions made during the AI system lifecycle, to enable analysis of the AI system's outcomes and responses to inquiries.

Principle 5: Accountability

AI actors should be accountable for the proper functioning of AI systems and for respecting the above principles, based on their roles, the context, and the state of the art. This accountability applies to all organizations and individuals that develop, deploy, or operate AI systems.

Chapter 03

Policy Recommendations

Guidance for Governments

Recommendations for National AI Policies

The OECD also makes five recommendations to governments for promoting and implementing the AI principles:

1. **Investing in AI R&D:** Governments should consider long-term public investment in AI research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI.
2. **Fostering a Digital Ecosystem:** Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI, including data, computing infrastructure, and mechanisms for sharing AI knowledge.
3. **Shaping an Enabling Policy Environment:** Governments should promote a policy environment that supports an agile transition to AI, including through experimentation and regulatory sandboxes.
4. **Building Human Capacity:** Governments should empower people with the skills for AI and support workers through a fair transition, including through social dialogue.
5. **International Cooperation:** Governments should actively cooperate to advance responsible AI, share best practices, and work toward interoperability of AI governance frameworks.