Quick Read

ISO/IEC 42001:2023 is the first international management system standard designed specifically for AI governance, covering the full AI lifecycle from planning through decommissioning and addressing risk management, data quality, transparency, and accountability through a certifiable framework. The whitepaper surveys five major AI governance frameworks and argues that ISO 42001 provides the only approach combining systemic rigour, third-party verifiability, and compatibility with other international guidance, making it the natural starting point for organisations seeking to demonstrate responsible AI management. As AI deployment accelerates across sectors, the fragmented regulatory landscape has created confusion for leaders tasked with governance; ISO 42001 offers a unified, auditable standard that bridges this gap.

Executive Summary

Organisations deploying artificial intelligence face a fragmented governance landscape. Multiple overlapping frameworks — ISO/IEC 42001:2023, the NIST AI Risk Management Framework, the EU AI Act, OECD Principles on AI, and the WEF Board Oversight Toolkit — each address aspects of responsible AI, but none provides a complete, independently verifiable management system on its own. This whitepaper maps the major frameworks, explains how they relate to each other, and makes the case for ISO 42001 as the management system backbone that allows organisations to satisfy multiple frameworks efficiently and demonstrate trustworthy AI through independent certification.

Introduction: A Governance Problem of Our Time

Artificial intelligence is being deployed across every sector of the global economy. Systems that make decisions about credit, healthcare, employment, infrastructure and national security are increasingly powered by machine learning models that are opaque, adaptive, and capable of behaving in ways their designers did not anticipate. The societal stakes have never been higher.

Yet the governance response has been fragmented. Regulators, standards bodies, industry groups and intergovernmental organisations have each responded with their own frameworks, principles, laws and guidelines. As a result, a Chief Information Officer or General Counsel tasked with governing AI responsibly in 2025 faces a genuinely bewildering array of overlapping guidance. The question is no longer whether AI needs governance — that debate is settled — but which governance approach actually works, and how organisations can demonstrate that their AI systems are responsibly managed.

This whitepaper surveys the most significant AI governance frameworks currently in play and explains how they relate to each other. It makes the case that ISO/IEC 42001:2023 — the international standard for AI management systems — provides the only framework that combines systemic rigour, third-party verifiability, and compatibility with the full range of other international guidance. For organisations serious about AI governance, it is the natural starting point.

The Major Frameworks: What They Are and What They Do

Five frameworks shape the global conversation on AI governance. Understanding each — its origin, audience, scope, and limitations — is essential before organisations can make informed decisions about which to adopt.

ISO/IEC 42001:2023 — The AI Management System Standard

Published in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), ISO 42001 is the first international management system standard specifically designed for artificial intelligence. Developed by Joint Technical Committee ISO/IEC JTC 1, Subcommittee SC 42, it applies the harmonised structure common to management system standards such as ISO 9001 (quality) and ISO 27001 (information security).

ISO 42001 requires organisations to establish, implement, maintain and continually improve an AI Management System (AIMS). It covers the full AI lifecycle — from initial planning and design through deployment, monitoring and decommissioning — and addresses concerns including risk management, data quality, transparency, accountability, human oversight, and fairness. Critically, it is certifiable: organisations can engage an accredited third-party certification body to conduct an independent audit and issue formal certification.

The standard applies to any organisation that uses, develops, monitors or provides products and services that involve AI systems, regardless of size, sector or geography. Its structure — clauses 4 through 10 — covers context, leadership, planning, support, operation, performance evaluation, and improvement. Annex A provides normative controls, Annex B offers implementation guidance, Annex C addresses AI-specific risk sources, and Annex D covers sector application.

NIST AI Risk Management Framework (AI RMF 1.0)

Published by the US National Institute of Standards and Technology in January 2023, the NIST AI RMF is a voluntary, use-case-agnostic framework designed to help organisations manage the risks of AI systems. It is structured around four core functions: GOVERN, MAP, MEASURE, and MANAGE.

GOVERN establishes the organisational structures, policies and culture for AI risk management. MAP identifies the context in which AI systems will be used and the risks associated with them. MEASURE develops methods and metrics to assess those risks. MANAGE implements responses to identified risks and monitors outcomes. These four functions are designed to be applied iteratively and at multiple points in the AI lifecycle.

The AI RMF is explicitly voluntary and non-sector-specific. It is intended to be flexible and adaptable, and has been widely adopted by US federal agencies and private organisations. It does not, however, provide for third-party certification. Organisations can self-assess against the framework but cannot obtain independent verification of their AI risk management practices under the RMF alone.

The EU AI Act

The EU AI Act, formally adopted in 2024, is a binding legal regulation that classifies AI systems by risk level and imposes corresponding obligations on providers and deployers. Systems are categorised as unacceptable risk (prohibited), high risk (subject to stringent pre-market conformity assessment), limited risk (transparency obligations), or minimal risk (voluntary codes of conduct).

High-risk AI systems — those used in critical infrastructure, employment, essential private and public services, education, law enforcement, migration, and justice — must meet specific requirements before they can be placed on the EU market. These include risk management systems, data governance measures, technical documentation, transparency obligations, human oversight provisions, accuracy and robustness requirements, and cybersecurity measures.

The EU AI Act is the most prescriptive of the major frameworks. It imposes real legal obligations with significant penalties for non-compliance — up to 35 million euros or 7% of global annual turnover. However, it is geographically scoped (applying to systems placed on the EU market), risk-level scoped (focusing on high-risk systems), and is not a management system standard. It tells organisations what they must not do and what specific systems must achieve, but does not provide a comprehensive framework for managing AI across an organisation.

OECD Principles on Artificial Intelligence

Adopted in 2019 and updated in 2024, the OECD Principles on AI represent a consensus among 46 governments on the values that AI systems should embody. The five principles are: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.

The OECD Principles are high-level, principled guidance rather than operational requirements. They have been influential in shaping national AI strategies, regulations and voluntary frameworks worldwide — the EU AI Act and NIST AI RMF both draw on them. But they do not provide the procedural specificity needed for implementation, and they are not certifiable.

WEF Empowering AI Leadership: Board Oversight Toolkit

Developed by the World Economic Forum in collaboration with Accenture, BBVA, IBM, Saudi Aramco and others, the WEF Board Oversight Toolkit is designed for boards of directors rather than operational teams. It provides 13 modules aligned to traditional board committee structures, covering strategy oversight (brand, competition, customers, operations, people and culture, technology, sustainable development, cybersecurity) and control oversight (ethics, governance, risk, audit, board responsibilities).

The WEF toolkit is a governance resource, not a management system standard. It equips directors to ask the right questions and understand their oversight responsibilities with respect to AI, but it does not specify how the underlying AI management system should be designed, implemented or monitored. It is best understood as complementary to ISO 42001 rather than an alternative to it.

Mapping the Landscape: How the Frameworks Compare

The table below summarises how the five major frameworks compare across key dimensions relevant to organisational AI governance.

| Framework | Type | Scope | Certifiable? | Primary Focus |
|---|---|---|---|---|
| ISO 42001 | Intl. Standard | Global, all sectors | ✔ Yes | Full AI management system lifecycle |
| NIST AI RMF | Voluntary Framework | US-led, global uptake | ✘ No | Risk identification & mitigation |
| EU AI Act | Binding Regulation | EU market, high-risk AI | Partial (conformity) | Legal compliance by risk category |
| OECD Principles | Principles / Policy | 46 governments | ✘ No | Values alignment & policy guidance |
| WEF Toolkit | Governance Resource | Board level | ✘ No | Board oversight & accountability |

Why One Framework Is Not Enough

Organisations that adopt only a single framework inevitably encounter gaps. The NIST AI RMF provides excellent risk management guidance but lacks a mechanism for independent verification. The EU AI Act creates legal obligations but only for a subset of AI systems and only in specific geographies. The OECD Principles are inspirational but not operational. The WEF toolkit equips boards but does not reach into the management and operational layers of an organisation.

More importantly, different stakeholders demand assurance from different frameworks. A regulator in the EU will expect conformity with the EU AI Act. An international procurement team may require OECD alignment. A board of directors will want to understand its AI oversight responsibilities in WEF terms. Customers and investors increasingly expect credible, third-party verified evidence of responsible AI practice. No single framework satisfies all of these audiences — which is precisely why a management system backbone is needed.

The Convergence Problem

A 2024 survey of GRC professionals found that organisations were simultaneously referencing an average of 3.2 AI governance frameworks. In the absence of a unifying management system, this creates duplicated effort, inconsistent implementation, and governance gaps where frameworks overlap ambiguously. ISO 42001 provides the structural skeleton onto which other frameworks can be mapped, eliminating duplication and ensuring nothing falls through the cracks.

Global Regulatory Divergence: Three Different Bets

Organisations operating internationally face a governance challenge that goes beyond framework proliferation: the major regulatory powers have taken genuinely different approaches to AI governance, and those differences are not converging. Understanding the three dominant regulatory postures is essential for any organisation seeking a governance approach that holds up across geographies.

The United States: Voluntary, Sector-Led

The United States has pursued a largely voluntary, sector-specific approach to AI governance. The NIST AI RMF is advisory; there is no comprehensive federal AI regulation equivalent to the EU AI Act. Instead, sector regulators — the FDA, FTC, OCC, and others — are incorporating AI considerations into existing supervisory frameworks on a sector-by-sector basis. The Biden administration’s Executive Order on AI (October 2023) directed federal agencies to develop sector-specific guidance, and the Trump administration’s subsequent executive orders have maintained a pro-innovation posture focused on voluntary standards and international competitiveness. The practical implication is that US-headquartered organisations face a patchwork of guidance rather than a single unified framework, with enforcement actions more likely to come through existing consumer protection, civil rights, and sector-specific channels than through AI-specific legislation.

The European Union: Risk-Tiered, Mandatory

The EU AI Act, which entered into force in 2024 with obligations phasing in over the following years, represents the world’s most comprehensive binding AI regulation. Its risk-tiered approach — prohibiting certain AI applications outright, imposing stringent pre-market conformity requirements on high-risk systems, and applying lighter transparency obligations to limited-risk systems — creates real legal obligations for any organisation placing AI systems on the EU market. The high-risk category is broad, covering AI used in employment, education, credit, essential services, law enforcement, migration and justice. Penalties are material: up to 35 million euros or 7% of global annual turnover. The EU approach favours precaution and rights protection over speed to market.

China: Algorithm Registration, Content Governance

China has pursued a different model focused on algorithmic accountability and content governance. The Provisions on the Management of Algorithmic Recommendations (2022), the Interim Measures for the Management of Generative AI Services (2023), and a series of sector-specific regulations create obligations for algorithm providers to register their systems, conduct security assessments, and ensure that AI-generated content does not contradict official positions or spread misinformation. The emphasis is on social stability and state oversight rather than on individual rights or market-level risk classification. Organisations with AI services targeting Chinese users or operating through Chinese platforms must navigate this distinct regulatory environment independently from their EU or US compliance programmes.

The Multi-Jurisdictional Challenge

An organisation with operations in the US, EU, and China effectively faces three different governance regimes simultaneously: a voluntary but sector-regulated US environment, a rights-based mandatory EU regime, and a content-and-stability-focused Chinese regime. ISO 42001’s management system structure — which documents policies, controls, and evidence systematically — provides the only practical foundation for demonstrating governance competence across all three without tripling the compliance workload.

The Commercial Case: Certification as Competitive Advantage

AI governance is often framed as a risk management discipline — something organisations do to avoid harm, satisfy regulators, and prevent reputational damage. This framing is accurate but incomplete. For organisations that get AI governance right, certification is increasingly a source of competitive advantage, not just a compliance obligation.

Enterprise Procurement

Large enterprises, financial institutions, healthcare systems, and government bodies are increasingly including AI governance requirements in their procurement processes. Vendors bidding for AI-related contracts face questions about their AI risk management practices, ethical AI policies, and third-party assurance mechanisms. ISO 42001 certification provides a credible, standardised response to these requirements — much as ISO 27001 became a de facto prerequisite for enterprise information security procurement over the past decade. Certified organisations can close enterprise deals that non-certified competitors cannot, and can reduce the procurement friction that slows sales cycles in risk-conscious sectors.

Talent and Culture

AI professionals — data scientists, machine learning engineers, ethicists and governance specialists — increasingly want to work for organisations with credible AI ethics and governance practices. The internal workforce trust gap is itself a governance risk: when employees believe that their organisation’s AI systems are being deployed irresponsibly, they disengage, leave, or fail to raise concerns through internal channels. Certification signals to prospective and current employees that the organisation takes AI governance seriously, that there are documented processes for raising and resolving ethical concerns, and that governance is embedded in how work gets done rather than bolted on as a compliance exercise.

Capital and Insurance Markets

ESG-focused investors and institutional asset managers are beginning to ask AI-specific governance questions as part of their due diligence processes. Insurers underwriting AI-related liability are developing underwriting frameworks that consider the quality of an organisation’s AI governance. In both cases, ISO 42001 certification — as an independently verified, internationally recognised management system standard — provides a credibility signal that self-assessment or policy documents alone cannot. As these markets develop their AI governance assessment frameworks, certified organisations will have a material advantage.

Beyond Compliance

Organisations that treat AI governance purely as a compliance cost miss the strategic opportunity. ISO 42001 certification is increasingly being recognised by procurement teams, investment analysts, and sector regulators as a meaningful differentiator — a signal that AI governance is systematic, embedded, and independently verified rather than aspirational. The organisations that move earliest to certification in their sector will capture these advantages first.

ISO 42001 as the Governance Backbone

ISO 42001 is uniquely positioned to serve as the backbone of an organisation’s AI governance approach for three reasons: its management system structure, its compatibility with other frameworks, and its certifiability.

Management System Structure

Unlike principles-based or risk-category approaches, ISO 42001 requires organisations to establish a complete management system for AI. This means defining the scope of AI activities, establishing an AI policy, assigning roles and responsibilities, conducting risk assessments, implementing controls, monitoring performance, and driving continual improvement. This structure ensures that AI governance is systematic, documented, and embedded in organisational processes rather than treated as a one-time compliance exercise.

Cross-Framework Compatibility

ISO 42001 applies the ISO harmonised structure, which means it uses identical clause numbers, titles, and core definitions to other management system standards. Organisations already certified to ISO 27001 or ISO 9001 will find significant structural overlap, enabling an integrated management system approach. Moreover, ISO 42001’s Annex A controls address many of the specific requirements of the NIST AI RMF, EU AI Act, and OECD Principles, meaning that organisations implementing a robust AIMS are simultaneously building evidence of alignment with multiple frameworks. Rather than starting from scratch for each framework, organisations can work from ISO 42001 and map outward.

Third-Party Verification and Certification

Perhaps most importantly, ISO 42001 is certifiable. Organisations that implement a conforming AI management system can be audited by an accredited certification body and receive formal third-party certification. This provides something that no other AI governance framework currently offers: an independently verified, internationally recognised credential that organisations can use to demonstrate responsible AI to customers, regulators, investors, and partners.

In an environment where AI governance claims are increasingly scrutinised, certification provides a credibility signal that self-declaration or framework alignment alone cannot match. It gives boards, executives, and external stakeholders confidence that the governance system is real, systematic, and independently assessed — not merely aspirational.

Practical Implications: A Path Forward

For organisations seeking to navigate the AI governance landscape, a pragmatic approach has three elements. First, adopt ISO 42001 as the structural framework for AI governance. This provides the management system backbone, creates the documentation and processes needed for certification, and ensures systematic coverage of AI risks and impacts across the organisation.

Second, map the other frameworks onto ISO 42001. The NIST AI RMF’s GOVERN-MAP-MEASURE-MANAGE functions map naturally onto ISO 42001’s planning, operation, and performance evaluation clauses. The EU AI Act’s high-risk requirements can be addressed through ISO 42001’s risk assessment and control implementation processes. The OECD Principles provide value guidance that should be reflected in the organisation’s AI policy. The WEF toolkit informs the board-level governance and oversight elements of Clause 5 (Leadership).
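The crosswalk described above can be sketched as a simple data structure. The clause titles follow ISO 42001's harmonised structure and the function names follow the NIST AI RMF, but the specific pairings below are illustrative assumptions drawn from this paper's narrative, not an official or exhaustive mapping.

```python
# Illustrative (hypothetical) crosswalk from external frameworks to ISO 42001
# clauses. Pairings follow this whitepaper's narrative; they are not an
# official or exhaustive mapping.
CROSSWALK = {
    "NIST AI RMF": {
        "GOVERN": ["Clause 5 (Leadership)", "Clause 6 (Planning)"],
        "MAP": ["Clause 4 (Context)", "Clause 6 (Planning)"],
        "MEASURE": ["Clause 9 (Performance evaluation)"],
        "MANAGE": ["Clause 8 (Operation)", "Clause 10 (Improvement)"],
    },
    "EU AI Act (high-risk)": ["Clause 6 (Planning)", "Clause 8 (Operation)"],
    "OECD Principles": ["Clause 5 (Leadership)"],  # reflected in the AI policy
    "WEF Toolkit": ["Clause 5 (Leadership)"],
}

def iso_clauses_for(framework: str) -> list[str]:
    """Return the ISO 42001 clauses a framework maps onto, de-duplicated."""
    entry = CROSSWALK[framework]
    if isinstance(entry, dict):
        # NIST-style entries nest by function; merge while preserving order.
        seen: list[str] = []
        for clauses in entry.values():
            for clause in clauses:
                if clause not in seen:
                    seen.append(clause)
        return seen
    return list(entry)
```

Even a minimal crosswalk like this makes the efficiency argument concrete: evidence gathered once for an ISO 42001 clause can be reused for every framework requirement that maps onto it.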

Third, pursue certification. Engaging an accredited certification body to conduct a Stage 1 (documentation review) and Stage 2 (implementation audit) assessment provides the independent verification that the AI management system is not only designed but actually operating effectively. Certification should be understood not as a destination but as an ongoing process — surveillance audits and recertification cycles drive the continual improvement that a maturing AI governance program requires.

The Role of Speeki

Speeki is an ISO certification body with expertise in AI management systems certification under ISO/IEC 42001:2023. We work with organisations across all sectors to help them understand the standard, build conforming AI management systems, and progress through the certification process with confidence.

As an accredited certification body, Speeki provides independent, third-party audits that give organisations — and their stakeholders — credible assurance that their AI governance approach is systematic, effective and continuously improving. Our team combines deep expertise in ISO management system standards with practical understanding of AI technologies and the broader regulatory environment.

This whitepaper is the first in an eight-part series from Speeki on AI governance. Subsequent papers cover ISO 42001 in depth, AI risk management in practice, the board’s role in AI oversight, the certification journey, the Speeki AI Governance Maturity Model, governing generative and large language model AI, and AI governance disclosure and reporting. Each paper is designed to be read independently, but together they provide a comprehensive guide to building and certifying an AI management system.

Conclusion

The AI governance landscape is complex, fragmented, and rapidly evolving. No single framework is sufficient to address the full range of governance requirements facing organisations deploying AI in 2025 and beyond. But ISO/IEC 42001:2023 provides something unique: a comprehensive, certifiable management system standard that serves as a backbone onto which other frameworks can be mapped.

Organisations that build their AI governance around ISO 42001 are not choosing one framework over others. They are choosing a structural approach that makes every other framework easier to implement, more systematic in application, and verifiable by an independent third party. In an era of intense scrutiny on AI, that structural rigour and independent credibility are increasingly the price of trust.

Speeki

Speeki is an ISO certification body specialising in AI management systems certification under ISO/IEC 42001:2023. We help organisations design, implement and certify AI governance programs that meet international standards and build stakeholder trust.

Visit speeki.com to learn more about ISO 42001 certification, or contact our team to discuss your AI governance journey.