Quick Read
ISO 42001:2023 is a management system standard that specifies how organisations should govern their AI activities—through policies, risk assessments, and controls—rather than prescribing how to build AI systems technically. The standard applies across all AI lifecycle roles (developers, deployers, service providers, procurers) and uses the same harmonised structure as ISO 9001, 14001, and 27001, with requirements scaled to organisational context. Top management commitment and clear scope definition are foundational, as the standard requires organisations to understand their internal and external AI context and identify stakeholder needs.
Executive Summary
ISO/IEC 42001:2023 is the world’s first international management system standard for artificial intelligence. Published in December 2023, it provides a systematic, certifiable framework for any organisation that uses, develops, monitors or provides AI-powered products and services. This whitepaper unpacks the standard clause by clause, explains what a conforming AI Management System (AIMS) looks like in practice, describes the normative controls in Annex A, and identifies the most common implementation gaps. For organisations beginning the journey toward ISO 42001 certification, this paper provides the essential orientation.
What ISO 42001 Is — and Is Not
ISO/IEC 42001:2023 is a management system standard, not a technical standard. This distinction matters enormously in practice. A technical standard specifies how an AI system should be built — what algorithms to use, what accuracy thresholds to meet, what data formats to employ. A management system standard specifies how an organisation should manage its AI activities: what policies to set, what risks to assess, what controls to implement, how to monitor performance, and how to improve over time.
This means that ISO 42001 is technology-neutral and use-case agnostic. It does not matter whether an organisation is using large language models, computer vision systems, predictive analytics or robotic process automation. The management system requirements are the same. What changes is the specific content of the risk assessments, the specific controls selected from Annex A, and the specific objectives set for performance evaluation.
ISO 42001 applies to any organisation that plays a role in the AI lifecycle, whether as a developer of AI systems, a deployer of AI systems, a provider of AI-powered products or services, or an organisation that monitors or procures AI from third parties. It is scalable — a multinational corporation and a ten-person technology company can both certify, with the management system sized appropriately to the organisation’s context and the AI systems within scope.
| A Critical Clarification |
|---|
| ISO 42001 certification applies to an organisation’s management system, not to individual AI systems. Certification means that the organisation has established a systematic, documented, and independently audited process for managing AI responsibly. It does not mean that every individual AI system has been approved or validated. This distinction is important when communicating with boards, regulators, and customers. |
The Structure of the Standard: Clauses 4 to 10
ISO 42001 uses the ISO harmonised structure (Annex SL), shared by management system standards including ISO 9001 (quality), ISO 14001 (environment), ISO 27001 (information security) and ISO 45001 (occupational health and safety). Clauses 1 to 3 cover scope, normative references and definitions. The substantive requirements are in clauses 4 to 10, with controls in normative Annex A.
| Clause | Title | Core Requirement |
|---|---|---|
| 4 | Context of the Organisation | Define the internal and external context of AI use, identify interested parties, determine the scope of the AIMS, and establish the management system boundary. |
| 5 | Leadership | Top management must demonstrate commitment, establish an AI policy, and assign roles and responsibilities for the AIMS. |
| 6 | Planning | Identify AI-related risks and opportunities, conduct AI risk assessments and AI system impact assessments, set objectives, and plan how to achieve them. |
| 7 | Support | Provide necessary resources, ensure competence of personnel, raise awareness, establish communication processes, and maintain documented information. |
| 8 | Operation | Implement risk assessment and treatment plans, conduct operational AI risk assessments and impact assessments, and manage AI system activities. |
| 9 | Performance Evaluation | Monitor, measure, analyse and evaluate AIMS performance; conduct internal audits; carry out management reviews. |
| 10 | Improvement | Address nonconformities with corrective action; drive continual improvement of the AIMS. |
Clause 4: Context of the Organisation
Clause 4 is where the AIMS begins. Organisations must understand their internal context (culture, governance, strategies, objectives, capabilities) and external context (legal, regulatory, technological, social and market environment) as they relate to AI. They must identify interested parties — those who have a stake in the organisation’s AI activities, including employees, customers, regulators, suppliers and communities — and understand their needs and expectations.
Determining the scope of the AIMS is one of the most consequential decisions an organisation makes. The scope defines which AI systems, processes, business units and geographies are covered by the management system. A narrow scope may be appropriate when beginning the certification journey; it can be expanded over time as the AIMS matures.
Clause 5: Leadership
ISO 42001 is explicit that AI governance cannot be delegated entirely to a technical team. Top management — defined as the person or group that directs and controls an organisation at the highest level — must demonstrate leadership and commitment to the AIMS. This means ensuring the AI policy is established and communicated, ensuring the AIMS is integrated into business processes, providing resources, and directing and supporting personnel.
The AI policy required by Clause 5.2 is a high-level statement of the organisation’s approach to responsible AI. It should express commitment to the development and use of trustworthy AI, compliance with applicable legal and regulatory requirements, and continual improvement of the AIMS. It provides the principles that guide more specific policies, objectives and controls throughout the system.
Clause 6: Planning — The Heart of the AIMS
Clause 6 is arguably the most substantive in the standard. It requires organisations to identify risks and opportunities related to AI, conduct formal AI risk assessments, develop AI risk treatment plans, conduct AI system impact assessments, and set specific, measurable AI objectives.
The AI risk assessment (Clause 6.1.2) requires organisations to define criteria for evaluating and accepting AI risk, identify sources of risk relevant to their AI systems, analyse the likelihood and consequence of those risks, and evaluate which risks require treatment. Risks to consider include technical risks (model failure, adversarial attacks, data quality issues), operational risks (misuse, inadequate human oversight), organisational risks (skills gaps, third-party AI dependencies) and societal risks (bias, discrimination, harm to individuals or communities).
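The likelihood-and-consequence analysis described above can be sketched as a simple scoring model. The ordinal scales, the acceptance threshold, and the field names below are illustrative assumptions for this paper; ISO 42001 does not prescribe any particular scale, only that the organisation defines and applies its own criteria.

```python
from dataclasses import dataclass

# Illustrative 1-5 ordinal scales; Clause 6.1.2 requires the organisation
# to define its own risk criteria, not these specific values.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

# Hypothetical acceptance criterion: scores above this require treatment.
RISK_ACCEPTANCE_THRESHOLD = 9

@dataclass
class AIRisk:
    source: str        # e.g. "training data bias", "model drift"
    likelihood: str    # key into LIKELIHOOD
    consequence: str   # key into CONSEQUENCE

    def score(self) -> int:
        # Simple multiplicative risk score.
        return LIKELIHOOD[self.likelihood] * CONSEQUENCE[self.consequence]

    def requires_treatment(self) -> bool:
        # Risks above the acceptance threshold go onto the treatment plan.
        return self.score() > RISK_ACCEPTANCE_THRESHOLD

risk = AIRisk("model drift in production scoring model", "likely", "moderate")
print(risk.score(), risk.requires_treatment())  # → 12 True
```

In practice the evaluation step also records the risk owner, the selected treatment, and the residual risk after treatment, so that the register feeds directly into the Clause 6.1.3 treatment plan.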
The AI system impact assessment (Clause 6.1.4) is distinct from the risk assessment. It requires organisations to assess the potential impacts of specific AI systems — on individuals, groups, society and the environment — before those systems are deployed or significantly modified. This is analogous to a data protection impact assessment (DPIA) in privacy terms, but broader in scope.
Clause 7: Support
A management system is only as good as the human and material resources that support it. Clause 7 requires organisations to determine and provide the resources needed for the AIMS — budget, tools, personnel and infrastructure. It requires organisations to ensure that personnel whose work affects AI performance are competent in terms of education, training and experience, and to document that competence.
Awareness is particularly important in AI governance. Personnel across the organisation need to understand the AI policy, their contribution to AIMS effectiveness, and the implications of non-conformity. This goes beyond the technical team — it extends to business managers, product owners, legal and compliance teams, and senior leadership.
Clause 8: Operation
Clause 8 is where the AIMS moves from planning to doing. Organisations must implement their operational plans and controls, conduct AI risk assessments and impact assessments in practice (not just as planning exercises), and manage the operational activities associated with their AI systems. This includes procurement and supplier management — where AI systems are obtained from third parties, the organisation must ensure that its AIMS requirements flow appropriately through the supply chain.
Clause 9: Performance Evaluation
Clause 9 provides the “check” in the plan-do-check-act cycle. Organisations must determine what needs to be monitored and measured regarding AIMS performance and AI system behaviour, establish methods for monitoring and measurement, and evaluate results. Internal audits must be conducted at planned intervals to determine whether the AIMS conforms to requirements and is effectively implemented. Management reviews must take place at the highest level to assess AIMS performance and drive continual improvement.
Clause 10: Improvement
When the AIMS does not perform as expected — when there is a nonconformity, an AI incident, or an audit finding — Clause 10 requires a structured response. Organisations must react to the nonconformity, determine its cause, evaluate whether similar nonconformities exist elsewhere, and implement corrective action to prevent recurrence. Beyond reactive correction, Clause 10 also requires proactive, continual improvement of AIMS suitability, adequacy and effectiveness.
Annex A: The Normative Controls
Annex A is normative — it is not optional guidance but an integral part of the standard. It provides 38 controls across nine control categories that address AI-specific risks. Organisations must determine which controls are applicable to their context and implement them, or document justified exclusions.
The nine control categories in Annex A are: Policies related to AI (A.2), Internal organisation (A.3), Resources for AI systems (A.4), Assessing impacts of AI systems (A.5), AI system life cycle (A.6), Data for AI systems (A.7), Information for interested parties of AI systems (A.8), Use of AI systems (A.9), and Third-party and customer relationships (A.10).
| Key Annex A Controls in Practice |
|---|
| Among the most commonly invoked Annex A controls are: A.5.4 (assessing AI system impact on individuals and groups), A.6 (AI system life cycle controls, from requirements and design through verification, deployment and monitoring), A.7.4 (quality of data for AI systems), A.8.2 (system documentation and information for users), A.9.2 (processes for the responsible use of AI systems, including human oversight), and A.10.3 (supplier controls for third-party AI). These controls form the operational heart of a well-implemented AIMS. |
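Organisations typically record which Annex A controls apply, and why, in a statement of applicability, a practice carried over from ISO 27001. A minimal sketch follows; the record structure and the sample entries are illustrative assumptions, not a format mandated by the standard.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control: str        # Annex A reference, e.g. "A.5.4"
    applicable: bool
    justification: str  # required for exclusions as well as inclusions

# Illustrative entries only; a complete statement covers all 38 Annex A controls.
soa = [
    SoAEntry("A.5.4", True, "Customer-facing AI systems assessed for individual impact"),
    SoAEntry("A.10.3", False, "No third-party AI suppliers within current AIMS scope"),
]

# Exclusions must carry a documented justification for the certification audit.
exclusions = [(e.control, e.justification) for e in soa if not e.applicable]
print(exclusions)
```

Auditors will test both directions: that applicable controls are actually implemented, and that excluded controls have a defensible written justification.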
Common Implementation Gaps
Organisations attempting to implement ISO 42001 for the first time typically encounter a predictable set of gaps. Understanding these in advance can save significant time and effort in the certification journey.
Scope gaps. The AI inventory is incomplete.
Many organisations underestimate the number of AI systems in use, particularly where AI is embedded in commercial software, SaaS platforms, or vendor-supplied analytics tools. A thorough AI inventory — covering all systems that generate outputs influencing decisions — is essential before scope can be properly defined.
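A usable AI inventory can start as a simple structured record per system. The fields below are an illustrative sketch of what such a record might capture, not a schema defined by ISO 42001.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable business owner
    source: str                 # "in-house", "vendor", or "embedded in SaaS"
    purpose: str
    influences_decisions: bool  # do outputs feed decisions about people or operations?
    in_aims_scope: bool = False # set once the scoping decision is made

inventory = [
    AISystemRecord("CV screening model", "HR", "vendor", "candidate ranking", True),
    AISystemRecord("log anomaly detector", "IT Ops", "embedded in SaaS", "alerting", False),
]

# Systems whose outputs influence decisions are strong candidates for AIMS scope.
candidates = [s.name for s in inventory if s.influences_decisions]
print(candidates)  # → ['CV screening model']
```

Even this minimal structure surfaces the embedded and vendor-supplied systems that organisations most often miss when scoping by hand.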
Boilerplate risk assessments. Risk assessments are generic rather than AI-specific.
Organisations often apply standard IT risk assessment templates to AI systems, missing the distinctive risks of machine learning: distributional shift, model drift, training data bias, adversarial vulnerability and lack of explainability. ISO 42001 requires AI-specific risk assessment criteria.
Impact assessment confusion. Impact assessments are confused with risk assessments.
The AI system impact assessment (ASIA) required by Clause 6.1.4 is distinct from the AI risk assessment. It focuses on potential impacts on people, society and the environment, not on the risks to the organisation. Many first-time implementers conflate the two.
Oversight gaps. Human oversight is poorly defined.
Annex A’s responsible-use controls (notably A.9.2, processes for the responsible use of AI systems) expect organisations to define the level and nature of human oversight for each AI system. Many organisations deploy AI systems without clear policies on when humans must review, override or escalate AI outputs. This is a frequently cited gap in both internal audits and certification assessments.
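One way to make oversight expectations concrete is to key the required oversight mode to each system's assessed risk tier. The tiers, mode names, and fail-closed default below are illustrative assumptions for a policy sketch, not requirements of the standard.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human approves each output before use"
    HUMAN_ON_THE_LOOP = "human monitors outputs and can override"
    PERIODIC_REVIEW = "scheduled review of sampled outputs"

# Hypothetical policy mapping assessed risk tier to a required oversight mode.
OVERSIGHT_POLICY = {
    "high": Oversight.HUMAN_IN_THE_LOOP,
    "medium": Oversight.HUMAN_ON_THE_LOOP,
    "low": Oversight.PERIODIC_REVIEW,
}

def required_oversight(risk_tier: str) -> Oversight:
    # Fail closed: an unknown or unassessed tier gets the strictest oversight.
    return OVERSIGHT_POLICY.get(risk_tier, Oversight.HUMAN_IN_THE_LOOP)

print(required_oversight("high").value)
```

Writing the policy down in this form forces the two decisions auditors look for: which tier each system sits in, and what a human is actually required to do at that tier.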
Third-party AI. Supplier AI is not addressed.
Where AI systems are procured from vendors or embedded in cloud services, organisations must ensure that their AIMS requirements are appropriately reflected in supplier relationships. Treating third-party AI as outside the management system is a common and significant gap.
Building an AIMS: A Practical Starting Point
For organisations starting from scratch, a pragmatic implementation sequence runs as follows:

1. Establish context and scope — conduct an AI inventory, identify interested parties, and define the boundaries of the AIMS.
2. Secure leadership commitment — obtain executive sponsorship, establish the AI policy, and assign roles.
3. Conduct AI risk assessments and impact assessments for the AI systems in scope.
4. Select and implement Annex A controls based on the risk assessment results.
5. Establish monitoring and measurement processes, including an internal audit programme.
6. Conduct a management review.
7. Address any gaps identified through internal audit before engaging a certification body.
Most organisations find that an AIMS can be designed and ready for Stage 1 audit within six to twelve months, depending on the number and complexity of AI systems in scope, the maturity of existing governance processes, and the resources dedicated to implementation. Organisations already certified to ISO 27001 or ISO 9001 typically move faster, given the structural overlap.
The Role of Speeki
Speeki provides ISO 42001 certification services for organisations at all stages of their AIMS journey. Our Stage 1 audit reviews documentation and readiness; our Stage 2 audit assesses the actual implementation and effectiveness of the management system. We then issue certification and conduct annual surveillance audits to maintain it.
We also offer pre-assessment services for organisations that want an independent view of their AIMS readiness before formal certification, and gap analysis support for organisations in the early stages of implementation. Our team combines deep expertise in ISO management system auditing with practical knowledge of AI technologies and the regulatory environment.
Conclusion
ISO/IEC 42001:2023 is not a bureaucratic hurdle or a compliance checkbox. It is a systematic framework for building the kind of AI governance that organisations, their customers, and their stakeholders actually need: one that is documented, risk-based, continually improved, and independently verified. Understanding the standard clause by clause — and the Annex A controls that operationalise it — is the essential first step in building an AI management system that stands up to scrutiny.
The next paper in this series, Whitepaper 3, focuses on AI risk management in depth — connecting ISO 42001’s risk framework with the NIST AI RMF and providing practical guidance on building an AI risk register and assessment process.
Speeki
Speeki is an ISO certification body specialising in AI management systems certification under ISO/IEC 42001:2023. We help organisations design, implement and certify AI governance programs that meet international standards and build stakeholder trust.
Visit speeki.com to learn more about ISO 42001 certification, or contact our team to discuss your AI governance journey.