Quick Read
The Speeki AI Governance Maturity Model provides organisations with a diagnostic framework to assess their AI governance capability across seven dimensions—leadership, risk management, accountability, human oversight, transparency, monitoring, and ethical performance—recognising that governance exists on a spectrum rather than as a binary state. Drawing on established maturity model approaches from IT governance and information security, the model serves three purposes: identifying current gaps, charting a progression pathway aligned with ISO 42001 and NIST AI RMF, and enabling sector-level benchmarking as Speeki accumulates certification data. The model makes visible the specific capabilities organisations need to develop to advance their AI governance maturity.
Executive Summary
Most organisations deploying artificial intelligence know that their AI governance could be better. What they lack is a clear picture of where they are, where they need to be, and what the path between those points looks like. The Speeki AI Governance Maturity Model provides exactly that: a five-level framework that allows organisations to assess their current AI governance maturity across seven key dimensions, understand what each level means in practice, and chart a practical path toward the systematic, independently certified AI governance that global best practice requires. The model synthesises requirements from ISO/IEC 42001:2023, the NIST AI RMF, the EU AI Act, OECD Principles on AI, and the WEF Board Oversight Toolkit into a single diagnostic instrument.
Why a Maturity Model?
AI governance is not binary. It is not the case that organisations either govern AI responsibly or they do not. Governance capability exists on a spectrum, and most organisations sit somewhere in the middle — with some dimensions of AI governance well-developed and others nascent or absent. A maturity model provides the conceptual vocabulary and diagnostic structure to make that spectrum visible.
Maturity models have a long and productive history in IT governance, information security, and process management. The Capability Maturity Model Integration (CMMI) has been used for decades to assess and improve software development processes. Comparable maturity models in information security have helped organisations benchmark their security management capability. ISO 42001 certification readiness maps naturally onto a maturity progression, making a maturity model a natural companion to the standard.
The Speeki AI Governance Maturity Model serves three purposes. First, as a diagnostic: organisations can use it to assess where they currently sit across seven governance dimensions and identify specific gaps. Second, as a roadmap: the model provides a clear progression path, showing what capabilities an organisation needs to develop to advance from one level to the next. Third, as a benchmark: over time, as Speeki accumulates data from organisations at different stages of the certification journey, the model will support sector-level benchmarking that shows where organisations sit relative to their peers.
The Seven Governance Dimensions
The Speeki AI Governance Maturity Model assesses governance maturity across seven dimensions, each reflecting a critical aspect of responsible AI management under ISO 42001, NIST AI RMF, and the broader international framework.
Leadership and Policy — the extent to which top management is genuinely committed to responsible AI, has established an AI policy, and has integrated AI governance into organisational strategy and culture.
Risk Management — the rigour, completeness and currency of the organisation’s AI risk assessments, impact assessments, and risk treatment activities.
Accountability and Roles — the clarity of AI governance roles and responsibilities, including designated ownership of AI systems, oversight mechanisms, and escalation processes.
Human Oversight and Controls — the effectiveness of mechanisms ensuring that humans can monitor, review, override and intervene in AI-driven decisions at appropriate points in the AI lifecycle.
Transparency and Documentation — the completeness and accessibility of documentation describing AI systems, their intended use, their limitations, their training data, and the decisions they influence.
Monitoring and Improvement — the sophistication of processes for monitoring AI system performance, detecting adverse outcomes, auditing the AI management system, and driving continual improvement.
Ethical Performance Measurement — the degree to which the organisation defines, tracks, and acts on metrics that capture the ethical quality of AI outcomes — including fairness scores, bias drift, explainability ratings, and escalation pathway effectiveness.
The Five Maturity Levels
Each governance dimension is assessed against five levels, progressing from ad hoc and reactive governance to systematic, independently verified and continuously optimised governance. The levels are consistent across all seven dimensions.
Level 1: Ad Hoc

AI governance activities are unplanned, reactive and undocumented. Decisions about AI development and deployment are made informally without consistent processes, risk assessment or oversight structures.

ISO 42001 Certification Readiness: Not ready for ISO 42001 certification. Significant foundational work required across all dimensions.
Level 2: Developing

Basic AI governance activities are underway but are inconsistent, incomplete, or siloed within individual teams or projects. Awareness of AI governance requirements is growing but implementation is patchy.

ISO 42001 Certification Readiness: Early-stage AIMS work underway. Likely 6-12 months from Stage 1 audit readiness without targeted investment.
Level 3: Defined

The organisation has established a documented AI management system covering all material AI systems in scope. Governance processes are defined, owned, and consistently applied, though not yet fully optimised.

ISO 42001 Certification Readiness: Stage 1 audit ready. Stage 2 audit (implementation) likely achievable with evidence of AIMS operation over 3-6 months.
Level 4: Managed

The AI management system is certified and operating effectively. Performance metrics are tracked, monitoring is systematic, and the management system is demonstrably improving over time.

ISO 42001 Certification Readiness: Certified under ISO 42001. Surveillance audits demonstrating ongoing conformity and improvement.
Level 5: Optimising

The organisation is at the leading edge of AI governance maturity. The AIMS is a source of competitive advantage and stakeholder trust. The organisation contributes to industry standards and benchmarks peer practices.

ISO 42001 Certification Readiness: Exemplary ISO 42001 certification record. Recertification with distinction. Sector benchmark for AI governance maturity.
The Maturity Model at a Glance
The following table provides a consolidated view of maturity across all seven governance dimensions and five levels. Use this as a quick reference for initial self-assessment.
| Dimension | L1 Ad Hoc | L2 Developing | L3 Defined | L4 Managed | L5 Optimising |
|---|---|---|---|---|---|
| Leadership & Policy | No AI policy | Policy drafted, not approved | Policy approved & communicated | Policy embedded in operations | Policy drives industry leadership |
| Risk Management | No formal AI risk assessment | Partial, inconsistent assessment | Systematic AIMS risk assessment | Certified, data-driven monitoring | Predictive, sector-benchmarked |
| Accountability & Roles | No AI governance roles | Informal project-level roles | Formal roles, AIMS-aligned | Integrated with enterprise GRC | AI governance as board priority |
| Human Oversight | No oversight mechanisms | Ad hoc for some systems | Defined for all in-scope systems | Monitored, metrics-driven | Adaptive, proactively updated |
| Transparency & Docs | No AI system documentation | Partial docs for major systems | Complete docs per Annex A.8 | Accessible, version-controlled | Leading practice transparency |
| Monitoring & Improvement | No monitoring in place | Reactive incident management | Internal audit & mgmt review | Continuous monitoring & KPIs | Predictive improvement culture |
| Ethical Performance | No ethics metrics defined | Ad hoc fairness awareness | Ethics KPIs defined & tracked | Benchmarked ethics indicators | Ethics measurement industry-leading |
How to Use the Model for Self-Assessment
The Speeki AI Governance Maturity Model is designed to support structured self-assessment as a precursor to formal ISO 42001 certification engagement. The self-assessment process has four steps.
First, assemble a cross-functional assessment team that includes representatives from legal and compliance, technology, risk management, and business operations. AI governance maturity is not a purely technical question, and a cross-functional team will identify gaps that a single-function team would miss.
Second, assess each of the seven governance dimensions against the five-level descriptions. Be honest about current state — the purpose of the self-assessment is to identify gaps, not to demonstrate compliance. For each dimension, assign a current level (1-5) and a target level for the next 12 months.
Third, identify the specific gaps between current and target levels for each dimension. What capabilities, processes, or controls need to be developed? What evidence would demonstrate that those capabilities are in place?
Fourth, prioritise the gaps based on risk, regulatory obligation, and strategic importance. Not all gaps are equal. An organisation that is at Level 1 on Risk Management and Level 3 on Documentation should invest in closing the risk management gap first. Use the prioritised gap list as the basis for an AIMS implementation project plan.
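The scoring and prioritisation logic of steps two through four can be sketched in a few lines of code. The sketch below is illustrative only: the dimension names come from the model, but the weighting scheme is a hypothetical example, not part of the Speeki methodology.

```python
# Illustrative self-assessment scorecard: record current and target levels
# (1-5) per dimension, then rank the gaps by a weighted priority score.
# The weights are a hypothetical example, not part of the Speeki methodology.

DIMENSIONS = [
    "Leadership and Policy",
    "Risk Management",
    "Accountability and Roles",
    "Human Oversight and Controls",
    "Transparency and Documentation",
    "Monitoring and Improvement",
    "Ethical Performance Measurement",
]

def prioritise_gaps(current, target, weights):
    """Return (dimension, gap, priority) tuples sorted by priority, largest first."""
    gaps = []
    for dim in DIMENSIONS:
        gap = target[dim] - current[dim]
        if gap > 0:  # only dimensions that still need work
            gaps.append((dim, gap, gap * weights.get(dim, 1.0)))
    return sorted(gaps, key=lambda g: g[2], reverse=True)

# Example: an organisation at Level 3 everywhere except Risk Management (Level 1)
current = {d: 3 for d in DIMENSIONS}
current["Risk Management"] = 1
target = {d: 3 for d in DIMENSIONS}

for dim, gap, priority in prioritise_gaps(current, target, {"Risk Management": 2.0}):
    print(f"{dim}: gap {gap}, priority {priority}")
```

Run against the example data, the Risk Management gap is the only one surfaced, mirroring the guidance above that a Level 1 risk-management dimension should be closed first.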
Certification Readiness: What Level Do You Need?
ISO 42001 certification is designed to be achievable at any reasonable scale of AI activity. The standard does not specify a minimum maturity level for certification — instead, it requires that the management system be appropriate to the organisation’s context. In practice, however, organisations that attempt Stage 2 audit before achieving Level 3 across all seven governance dimensions are unlikely to succeed.
The readiness indicators in the Level 3 (Defined) description represent the minimum viable foundation for certification: a documented AIMS, formal risk assessments, implemented controls, defined oversight mechanisms, and an operating internal audit programme. Level 4 (Managed) represents a well-established, certified, and improving AIMS. Level 5 (Optimising) represents leading-edge maturity that goes significantly beyond certification requirements.
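This rule of thumb — Level 3 on every dimension as the minimum viable foundation — can be stated as a one-line check. The function below is an illustrative sketch, not a Speeki tool or API.

```python
# Rule-of-thumb readiness check drawn from the model: a Stage 2 audit is
# unlikely to succeed before Level 3 (Defined) is reached on every dimension.
# Illustrative only; not a substitute for a formal pre-assessment.

def certification_ready(levels: dict) -> bool:
    """True only if every assessed dimension is at Level 3 or above."""
    return bool(levels) and all(level >= 3 for level in levels.values())

# A single Level-2 dimension blocks Stage 2 readiness:
print(certification_ready({"Risk Management": 2, "Leadership and Policy": 4}))  # prints False
```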
The Speeki Pre-Assessment

Speeki offers a formal Pre-Assessment service that applies the AI Governance Maturity Model to provide organisations with an independent view of their readiness before committing to the formal certification process. The Pre-Assessment produces a maturity scorecard across all seven dimensions, a gap analysis against ISO 42001 requirements, and a prioritised implementation roadmap. It is particularly valuable for organisations that have invested in AIMS development and want confidence in their readiness before incurring the cost and commitment of a formal Stage 1 audit.
The Benchmark Ambition
The Speeki AI Governance Maturity Model is not only a diagnostic tool — it is the foundation of an ambition to provide meaningful sector-level benchmarking of AI governance maturity. As Speeki accumulates data from organisations across sectors and geographies through our certification and pre-assessment programmes, we will publish annual benchmarking reports that show where organisations in each sector typically sit on the maturity scale, what the most common gaps are, and which sectors are leading and lagging in AI governance maturity.
This benchmarking data will be valuable not only for individual organisations seeking to understand their relative position, but for policymakers, investors, and boards seeking to understand the state of AI governance across the economy. It will allow Speeki to move from providing individual certification to contributing to a genuine, evidence-based picture of AI governance progress at scale.
The benchmark will cover all seven dimensions of the maturity model, with sector-specific analysis for financial services, healthcare, critical infrastructure, professional services, technology, and the public sector. Participating organisations will receive their individual maturity scores alongside anonymised sector benchmarks, enabling direct peer comparison.
The Role of Speeki
Speeki’s AI governance services span the full maturity journey. For organisations at Levels 1 and 2, we offer maturity assessments, gap analysis services, and educational resources to help build the foundational understanding and capability needed to begin AIMS implementation. For organisations at Level 3, we offer pre-assessment services and Stage 1 audit preparation support. For organisations ready for formal certification, our Stage 1 and Stage 2 audits provide the independent assessment needed to issue ISO 42001 certification. For certified organisations at Levels 4 and 5, our surveillance programme and benchmarking services provide the ongoing discipline and external perspective to sustain and advance governance maturity.
Conclusion: Maturity as a Journey, Not a Destination
No organisation achieves Level 5 AI governance maturity overnight, and most do not need to. What matters is honest assessment of current state, clear-eyed prioritisation of gaps, and sustained investment in improvement. The Speeki AI Governance Maturity Model provides the diagnostic framework for that journey — and ISO 42001 certification marks the most significant milestone along it.
Organisations that engage seriously with the maturity model — that use it honestly, address the gaps it reveals, and pursue certification as a demonstration of genuine governance rather than a compliance exercise — will be the organisations that earn the stakeholder trust that responsible AI deployment requires. In an era where AI is reshaping every sector of the economy, that trust is increasingly the price of the licence to operate.
Speeki
Speeki is an ISO certification body specialising in AI management systems certification under ISO/IEC 42001:2023. We help organisations design, implement and certify AI governance programmes that meet international standards and build stakeholder trust.
Visit speeki.com to learn more, or contact our team to discuss your AI governance journey.