Ah, Clause 2. The normative references section. In virtually every ISO management system standard ever published, this is the page that goes by in a blur — a brief, businesslike list of other documents the standard depends upon, delivered without ceremony and read without joy. You will be pleased to know that ISO 42001 upholds this proud tradition.
You will be less pleased to know that the documents listed here are not free.
What Clause 2 Actually Says
Clause 2 of ISO/IEC 42001:2023 identifies two normative references — documents that are, in the standard’s own language, “indispensable for the application of this document.” That word, indispensable, is doing significant work. It means these are not background reading. They are not suggestions. They are part of the framework, and to the extent the standard references them, you are expected to apply them.
The two documents are:
- ISO/IEC 22989:2022 — Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
- ISO/IEC 23894:2023 — Information technology — Artificial intelligence — Guidance on risk management
That is the entirety of Clause 2. Two bullet points. If you are hoping for more, I regret to inform you that Clause 3 is where the definitions live, and that is a different post.
What ISO/IEC 22989 Is and Why It Matters
ISO/IEC 22989:2022 is the foundational terminology standard for AI — essentially the agreed international vocabulary for talking about artificial intelligence in a governance context. It defines terms like “AI system,” “machine learning,” “training data,” “AI model,” and dozens of others that appear throughout ISO 42001.
This is not merely administrative tidiness. In standards work, definitions are load-bearing walls. When ISO 42001 obliges you to manage “AI systems,” the scope of that obligation depends entirely on what an “AI system” means — and that meaning is formally established in ISO/IEC 22989. If you find yourself wondering whether a particular piece of software in your organisation qualifies as an AI system for compliance purposes, ISO/IEC 22989 is where you go to find out.
In practice, most of the relevant definitions from ISO/IEC 22989 are reproduced — with citations — in Clause 3 of ISO 42001 itself, which means you can often navigate without constantly cross-referencing the parent document. However, “often” and “always” are not the same word, and auditors are occasionally pedantic about such distinctions. Owning the source document is the safer posture.
What ISO/IEC 23894 Is and Why It Matters
ISO/IEC 23894:2023 is the risk management guidance standard for AI. While ISO 42001 establishes what an organisation’s AI risk management process must achieve, ISO/IEC 23894 provides the detailed methodology for how to actually do it.
This includes frameworks for identifying AI-specific risks (bias, opacity, safety failures, data quality issues, misuse scenarios), guidance on risk assessment approaches suited to AI’s particular characteristics, and direction on how AI risk management integrates with broader organisational risk management processes — including alignment with ISO 31000, the general risk management standard.
The relationship between ISO 42001 and ISO/IEC 23894 is best understood as: ISO 42001 tells you that you must have a systematic approach to AI risk management; ISO/IEC 23894 tells you what a systematic approach looks like. You are expected to use both. Organisations implementing an AIMS without consulting ISO/IEC 23894 are, to put it charitably, assembling furniture without reading the instructions — and then submitting it for audit.
What This Means in Practice
Here is the practical upshot: implementing ISO 42001 is not a matter of buying one document. It is, at minimum, a matter of buying three — ISO 42001 itself, ISO/IEC 22989, and ISO/IEC 23894 — and potentially more as you work through Annex A and discover further referenced standards. ISO standards are sold individually, typically at prices ranging from a few dozen to a few hundred dollars or euros apiece, depending on the document’s length and which national standards body you buy from.
I raise this not to be discouraging, but because “we have ISO 42001, that should be sufficient” is a statement that eventually leads to an uncomfortable conversation with either an auditor or Clause 6, whichever arrives first.
The more operationally significant implication is this: your AI risk management methodology must be grounded in ISO/IEC 23894. This is not a vague aspiration; the normative reference makes it a structural requirement. Organisations that have built AI risk frameworks based on other methodologies — NIST AI RMF, for instance, or internal bespoke processes — will need to assess the alignment between those frameworks and ISO/IEC 23894’s approach. In many cases, the gap will be narrower than expected. In some cases, it will require deliberate bridging work.
What’s New — And What This Signals About the Standard
In most ISO management system standards — ISO 9001, ISO 27001, ISO 14001 — the normative references clause is either empty (the standard is self-contained) or references only the high-level structure terms-and-definitions document. ISO 9001:2015, for example, has no normative references at all. ISO 27001:2022 references only ISO/IEC 27000 for definitions.
ISO 42001 takes a notably different approach by including a substantive risk management standard as a normative reference. This signals something important about the committee’s intent: they did not want to re-invent AI risk management guidance from scratch within the standard itself, and they did not want organisations to treat AI risk as merely a subcategory of information security risk (ISO 27001’s domain) or quality risk (ISO 9001’s territory). AI risk has its own dedicated methodological home in the ISO ecosystem, and ISO 42001 points you there explicitly.
This is, frankly, a sensible decision. It means the risk management guidance can evolve — ISO/IEC 23894 can be updated as the AI risk landscape develops — without requiring a full revision of ISO 42001. It also means that organisations building their AIMS are not starting from zero on risk methodology; there is a documented, internationally agreed framework waiting for them. Whether they read it is, of course, another matter.
For organisations coming from ISO 27001: you are likely already comfortable with a normative-reference ecosystem approach. For organisations new to ISO management systems: welcome to the experience of discovering that compliance involves an expanding library of source documents. I suggest a comfortable chair.
Closing Thought
Clause 2 is two lines long in the standard itself. It has taken somewhat more than two lines to explain why those two lines matter — which is, I suppose, a reasonable summary of what this entire series is for.
The practical action items from Clause 2 are modest but non-trivial: obtain copies of both ISO/IEC 22989:2022 and ISO/IEC 23894:2023; familiarise yourself with them; and ensure your AI risk management methodology can demonstrate alignment with ISO/IEC 23894’s framework. That last item will be relevant again in Clause 6.
Next: Clause 3 — Terms and Definitions. This is the section where the standard defines, with considerable precision, the words it has been using throughout Clauses 1 and 2. I recognise the logical sequencing concern. The standard’s authors, presumably, do not. We press on regardless.