ISO/IEC 42001
ISO 42001 readiness for mid-market organisations
What an ISO/IEC 42001 management system actually requires, and what it does not, for organisations under 500 staff. A pragmatic readiness path that does not require a dedicated AI governance team.
ISO/IEC 42001:2023 — the AI Management System standard — was published in late 2023 and has spent the eighteen months since being mis-explained in equal measure by AI vendors (who present it as a checkbox), certification bodies (who present it as a major undertaking), and consultancies (who present it as whatever the client will buy).
For a mid-market organisation considering 42001, the useful framing is the one ISO uses for its other management system standards: it is a structured way to govern a thing, scaled to the size of the thing being governed. If you have an ISO 27001 program, you already know how this works. If you do not, it is a bigger lift, but not the lift the marketing suggests.
This piece is what we tell mid-market clients (50–500 staff) when they ask: should we be looking at 42001, and what does the readiness path actually look like?
What 42001 actually requires
42001 follows the harmonised Annex SL structure shared by 27001, 9001, 14001, and the others. Same clauses, same logic:
- Clauses 4–10: the management system itself. Scope, leadership, planning, support, operation, performance evaluation, improvement.
- Annex A: the AI-specific controls (38 of them, organised into 9 control objectives).
- Annex B: implementation guidance for Annex A.
- Annex C: AI-specific objectives the management system can be designed to meet.
- Annex D: domain-specific application notes.
For a mid-market organisation, the substantive readiness work is in two places: the controls in Annex A, and the AI risk assessment process required by clause 6.1. Everything else is the management system scaffolding — important, but mostly proceduralising things you already do.
What 42001 does not require
Reading the standard against the marketing material is clarifying. It does not require:
- A dedicated AI ethics committee. It requires that AI risk and AI impact are governed. How you organise that is your decision.
- A formal model card for every system. It requires that you document the AI systems within scope sufficiently to govern them. For a mid-market organisation using a small number of vendor AI tools, this is a register, not a research artefact.
- An algorithmic impact assessment for every use case. It requires impact assessment proportionate to risk. Most off-the-shelf SaaS AI tools deployed for internal productivity need only a short, lightweight assessment.
- Explainability of all model outputs. It requires that explainability is considered as part of system design and operation, with the level of explainability matched to the use case.
- Continuous monitoring of model bias for all systems. It requires monitoring proportionate to the risk profile and the context of use.
The standard is risk-based throughout. A mid-market organisation deploying Microsoft 365 Copilot, a sanctioned ChatGPT Enterprise tenant, and a vendor-supplied document classifier does not face the same control burden as a fintech building proprietary credit decisioning models.
The readiness path for a mid-market organisation
The readiness path we run is six steps, typically over four to six months elapsed time, with roughly 30–50 days of dedicated effort distributed across the period.
Step 1 — Scope and AI inventory (2–3 weeks)
Define the scope of the management system. For most mid-market organisations, the scope is the AI systems used in support of business operations. Then build the inventory: every AI system in scope, who owns it, what it does, what data it touches, and how it is governed today.
This inventory becomes the spine of the management system. Get it right.
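A minimal sketch of what one inventory record can capture, written as a Python dataclass. The field names and the example entry are illustrative; the standard does not prescribe a schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in the AI inventory. Field names are illustrative,
    not prescribed by ISO/IEC 42001."""
    name: str                   # e.g. "Microsoft 365 Copilot"
    owner: str                  # accountable business owner, not just IT
    purpose: str                # what it does, in one sentence
    vendor: str | None          # None for in-house systems
    data_categories: list[str] = field(default_factory=list)
    deployment_status: str = "pilot"      # pilot / production / retired
    last_reviewed: date | None = None     # feeds the review cadence

# Illustrative entry; the vendor name is a placeholder.
register = [
    AISystemRecord(
        name="Vendor document classifier",
        owner="Head of Operations",
        purpose="Routes inbound customer documents to the correct queue",
        vendor="(vendor name)",
        data_categories=["customer PII", "contract documents"],
        deployment_status="production",
        last_reviewed=date(2025, 3, 14),
    ),
]
```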
Step 2 — AI risk and impact assessment (3–4 weeks)
For each system in the inventory, an assessment of risk and impact at the level the standard requires (clause 6.1.2 for risk, 6.1.4 for impact). The risk assessment uses the same logic as your information security risk assessment, with AI-specific threat sources drawn from catalogues like the OWASP Top 10 for LLM Applications and MITRE ATLAS. The impact assessment looks at affected stakeholders (your customers, your staff, third parties) and the consequences of system failure or misuse.
Most of the 42001 readiness substance is here. A poor assessment produces a paper management system. A good assessment produces real control selection.
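A sketch of the risk register arithmetic, assuming a simple five-point likelihood times impact model. The threat name, scales, and banding thresholds below are assumptions to be replaced with your own risk criteria; they are not the standard's wording.

```python
def assess(system: str, threat: str, likelihood: int, impact: int) -> dict:
    """One risk register row: raw score and band for a single threat.
    Likelihood and impact are each scored 1-5; bands are our convention."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    band = "high" if score >= 15 else "medium" if score >= 8 else "low"
    return {"system": system, "threat": threat, "likelihood": likelihood,
            "impact": impact, "score": score, "band": band}

# Threat sources come from catalogues like the OWASP Top 10 for LLM
# Applications; "prompt injection" is one entry from that list.
row = assess("Vendor document classifier", "prompt injection",
             likelihood=3, impact=4)
# -> score 12, band "medium"
```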
Step 3 — Annex A control gap analysis (2 weeks)
The 38 controls in Annex A, mapped against current practice. Most organisations with a mature 27001 program will start at 40–50% coverage already, because controls around access, change management, supplier relationships, and incident response carry across. The gaps tend to be in the AI-specific objectives — especially A.6 (life cycle), A.7 (data for AI systems), and A.8 (information for interested parties).
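One way to track the gap analysis is a status per control, rolled up into a coverage figure. A sketch: the statuses and the half-weight for partial coverage are our convention, and the entries below use the Annex A objective names mentioned above rather than full control identifiers.

```python
# Status per Annex A control: "covered" (an existing 27001 control
# carries across), "partial", or "gap". Entries are illustrative; a
# real register has one row per control, 38 in total.
gap_register: dict[str, str] = {
    "A.6 AI system life cycle":                      "gap",
    "A.7 Data for AI systems":                       "partial",
    "A.8 Information for interested parties":        "gap",
    "Supplier management (carried over from 27001)": "covered",
}

def coverage(register: dict[str, str]) -> float:
    """Percent coverage; partial counts half. Weights are our convention."""
    weights = {"covered": 1.0, "partial": 0.5, "gap": 0.0}
    return 100 * sum(weights[s] for s in register.values()) / len(register)

print(f"Annex A coverage: {coverage(gap_register):.0f}%")  # -> 38%
```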
Step 4 — Management system documentation (3–4 weeks)
The policies, procedures, and records the standard requires. For a mid-market organisation, this is a single AI Management System manual, an AI policy, an AI use case register, an AI risk register, an AI impact register, supplier requirements for AI providers, and an internal audit and management review program. Documentation should be light: the standard does not require War and Peace, and certifiers do not reward it.
Step 5 — Operate (8–12 weeks)
The management system has to actually run for a defined period before certification. During this period: the registers are maintained, risk assessments are performed for new systems, supplier reviews happen on schedule, internal audit is performed, management review is conducted at least once. You need evidence — actual records, with actual dates — that the system is operating as designed.
This is the step where most under-invested readiness programs fail. There is no shortcut.
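One way to keep the evidence honest during the operating period is a cadence check over the registers. A sketch that builds on the Step 1 inventory example; the 90-day cadence is an assumption, not a requirement of the standard.

```python
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=90)  # assumption; set per your own policy

def overdue(records, today: date | None = None) -> list:
    """Records whose last review is missing or older than the cadence.
    Expects objects with .name and .last_reviewed (see the Step 1 sketch)."""
    today = today or date.today()
    return [r for r in records
            if r.last_reviewed is None
            or today - r.last_reviewed > REVIEW_CADENCE]

# Run ahead of each management review; overdue entries are exactly the
# missing-evidence findings a Stage 2 auditor will raise.
for record in overdue(register):
    print(f"OVERDUE: {record.name}")
```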
Step 6 — Certification audit (2 weeks elapsed for the audit, scheduled per the certifier’s calendar)
Stage 1 (documentation review) followed by Stage 2 (operational audit); scheduling and depth are certifier-dependent. Common Stage 2 findings concern the impact assessment process, supplier monitoring evidence, and internal audit independence.
Should a mid-market organisation pursue certification?
The honest answer: only if there is a commercial or regulatory reason to. The reasons that hold up:
- A material customer is asking for it. Increasingly common in B2B SaaS, financial services, and government supply chains. If the question is already appearing in the RFIs you answer, certification is justifiable.
- You are an AI vendor. If your customers are taking on risk by using your AI, certification is a material market signal.
- You are operating in a jurisdiction where alignment is becoming expected. EU AI Act compliance, for example, materially benefits from a 42001 management system, even where certification itself is not mandatory.
The reasons that do not hold up on their own:
- We want to demonstrate we take AI seriously. You can demonstrate that with a public AI policy and a public risk position. Certification is expensive theatre if there is no buyer for the certificate.
- We want to avoid AI risk. Certification does not avoid risk; the management system does. You can run the management system without certifying it.
For most mid-market organisations we work with, the right answer is implement against 42001, do not certify until there is a buyer for the certificate. This captures the substantive risk benefit without the audit fees.
If 42001 readiness is on your roadmap and you are uncertain how to scope it, the posture assessment covers steps 1–3 above (scope, inventory, risk and impact assessment, Annex A gap analysis) as a fixed-fee engagement.