Welcome back to our meticulous, clause-by-clause walkthrough of ISO/IEC 42001:2023 — the international standard for AI management systems that you’ve been meaning to read and haven’t. Today we arrive at Clause 3: Terms and Definitions, which is the part of every ISO standard that most professionals skip, deeming it beneath their attention. This is, of course, exactly why they spend the rest of the implementation arguing about what words mean. Enjoy that. I’ll be here.
Clause 3 is shorter than you might expect, which is by design and also slightly misleading. Rather than providing an exhaustive glossary right there in the document, the standard does what ISO standards love to do: it outsources. Clause 3 states that the terms and definitions given in ISO/IEC 22989:2022 (AI concepts and terminology) and ISO/IEC 38507:2022 (governance implications of the use of AI by organizations) apply, along with the terms defined within 42001 itself. So: three standards, minimum, just to build your vocabulary. You’re welcome.
What the Standard Actually Defines
Let’s not pretend the referencing strategy is a scandal — it’s actually sensible. ISO/IEC 22989 is the canonical AI terminology standard for the entire ISO/IEC 42xxx series. It defines foundational concepts like what an AI system is, what machine learning means, what a training dataset is. Requiring 42001 to re-define all of those would be redundant and would create the delightful problem of two conflicting definitions of the same term within the same standards family. ISO would prefer you avoid that. So would your auditors.
What Clause 3 of ISO 42001 does introduce are terms specific to the management system context — terms that didn’t exist in prior standards because, frankly, prior standards didn’t contemplate the governance of AI at this level of specificity. The key terms to understand are as follows.
AI System
Borrowed from ISO/IEC 22989, an AI system is a machine-based system that, for a given set of objectives, produces outputs — predictions, recommendations, decisions, or content — that influence real or virtual environments. It can operate with varying levels of autonomy and may exhibit adaptive behavior. This definition is deliberately technology-neutral: it captures your large language model, your image classifier, your recommendation engine, and your fraud detection algorithm. If it takes inputs, applies some form of learned or programmed logic, and produces consequential outputs, it’s an AI system for purposes of this standard. Broad, yes. Intentionally so.
AI Management System (AIMS)
The AI management system is the management system an organization uses to establish policy, objectives, and processes related to AI — and to achieve those objectives responsibly. If you’re familiar with ISO 27001 (information security) or ISO 9001 (quality), the structure is analogous. The AIMS is not the AI itself; it is the organizational apparatus surrounding the AI: the governance, the documentation, the risk management, the accountability structures. Think of it as everything except the model weights.
AI Provider and AI Subject
Here is where ISO 42001 introduces vocabulary that is genuinely new to the standards landscape. An AI provider is an organization that develops, deploys, or makes an AI system available — including making it available to another organization that then uses it. This means the standard explicitly acknowledges the supply chain dimension: you may be a provider, a deployer, or both, and the standard has different implications depending on your position.
But the term that will likely give compliance teams the most pause is AI subject: a natural person who is directly or indirectly affected by the output of an AI system. Note carefully what this is not: the AI subject is not necessarily a user. They may never have chosen to interact with the AI system at all. A person whose loan application is assessed by an automated model, a patient whose scan is triaged by an AI diagnostic tool, a job applicant screened by a resume-filtering algorithm — all of these individuals are AI subjects. They have no relationship with the system except that its outputs affect them. This is a conceptual acknowledgement that AI governance is not merely about the user experience; it is about third-party impact. That’s new. That matters.
Intended Use and Reasonably Foreseeable Use
The standard defines intended use as the use of an AI system according to the specifications, instructions, and information provided by the organization deploying it. Straightforward enough. More interesting is the concept of reasonably foreseeable use — uses that are not explicitly intended but that an organization could reasonably anticipate happening anyway. And lurking behind both of these is the concept of misuse: uses that are neither intended nor sanctioned, but which the organization should still consider when assessing risks.
If you have ever watched a customer use a piece of software in a way that was neither anticipated nor desired, you already understand this intuitively. The standard formalizes it into your risk management process.
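One way to make that formalization concrete is to tag each anticipated use in your risk register with one of the three categories, so the process can demonstrate it considered all of them. The sketch below is purely illustrative — the register entries and the completeness check are my invention, not anything prescribed by the standard:

```python
from enum import Enum

class UseCategory(Enum):
    """The three use categories the standard asks you to think about."""
    INTENDED = "intended"
    REASONABLY_FORESEEABLE = "reasonably_foreseeable"
    MISUSE = "misuse"

# Hypothetical risk register entries for an internal chatbot deployment.
risk_register = [
    {"use": "triage incoming support tickets", "category": UseCategory.INTENDED},
    {"use": "staff pasting customer PII into prompts", "category": UseCategory.REASONABLY_FORESEEABLE},
    {"use": "generating phishing text at scale", "category": UseCategory.MISUSE},
]

def categories_covered(register):
    """Return the set of use categories the register actually addresses."""
    return {entry["category"] for entry in register}

# A completeness check a reviewer might run before sign-off:
assert categories_covered(risk_register) == set(UseCategory)
```

The point of the check is not the code; it is that "we only documented intended uses" becomes a detectable gap rather than a discovery made mid-audit.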
AI System Impact Assessment
Clause 3 also introduces the AI system impact assessment — defined as the process of assessing the potential impacts of an AI system on individuals, groups, or society more broadly. This is distinct from a purely technical risk assessment (which might focus on model accuracy, robustness, or adversarial inputs) and from a standard information security risk assessment. It is, in essence, a social impact assessment applied to AI. Annex B of the standard provides guidance on conducting these assessments, but the term itself is defined here in Clause 3, which tells you something about how central it is to the standard’s intent.
AI System Lifecycle
Finally, the AI system lifecycle captures the evolution of an AI system from conception through development, deployment, operation, and eventual retirement. This matters because several of the standard’s controls and requirements are lifecycle-aware: they apply at specific stages, or differently depending on where in the lifecycle an AI system sits. Understanding the lifecycle as a defined concept — rather than an informal notion — is essential for scoping your AIMS correctly.
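If it helps to see "lifecycle-aware" as a data structure: the sketch below enumerates the stages named above and maps each to the controls an AIMS might check at that point. The stage names come from the text; the control names are entirely hypothetical and are not drawn from Annex A:

```python
from enum import Enum

class LifecycleStage(Enum):
    """Lifecycle stages as described in the standard's definition."""
    CONCEPTION = "conception"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"
    RETIREMENT = "retirement"

# Hypothetical mapping of internal controls to lifecycle stages —
# illustrative names only, not the standard's actual control set.
CONTROLS_BY_STAGE = {
    LifecycleStage.CONCEPTION: ["impact_assessment_initiated"],
    LifecycleStage.DEVELOPMENT: ["training_data_provenance_review"],
    LifecycleStage.DEPLOYMENT: ["pre_release_signoff"],
    LifecycleStage.OPERATION: ["drift_monitoring", "incident_reporting"],
    LifecycleStage.RETIREMENT: ["model_decommissioning_record"],
}

def applicable_controls(stage: LifecycleStage) -> list:
    """Return the controls to verify for a system at the given stage."""
    return CONTROLS_BY_STAGE.get(stage, [])
```

Scoping your AIMS then becomes, in part, knowing which stage each of your AI systems currently occupies — because that determines which obligations are live right now.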
What’s New and What’s Notable
ISO 42001 is the first published management system standard specifically designed for AI, so in a meaningful sense, everything in it is new. But Clause 3 contains a few genuinely notable moves.
The introduction of “AI subject” as a defined term is, I would argue, the most significant conceptual contribution in this clause. Most enterprise compliance frameworks think about affected parties in terms of customers and users — people who have some kind of relationship with the organization. Naming and defining a category of person who has no such relationship, but is nonetheless affected by the organization’s AI systems, is a real expansion of governance thinking. Organizations that have never considered this will need to.
The layered reference structure — incorporating ISO/IEC 22989 and ISO/IEC 38507 — also means that this standard was designed as part of a family, not as a standalone document. If you purchase only ISO 42001 and attempt to implement it without access to 22989, you will encounter undefined terms in the very first normative clause. This is either good standards architecture or a mild revenue generation strategy for ISO. Possibly both.
Finally, it is worth noting what Clause 3 does not define: “artificial intelligence” itself. The standard relies on the definition in ISO/IEC 22989, which is deliberately broad. This means that any organization wrestling with the question “is this thing we built actually an AI system?” should consult 22989 rather than expecting 42001 to resolve the ambiguity. It mostly won’t.
What This Means in Practice
If you are beginning an ISO 42001 implementation, Clause 3 is your vocabulary list. Before you scope your AIMS, before you conduct your AI system impact assessment, before you write your AI policy — you need to understand what the standard means by these terms, which may differ from how your engineering team, your legal department, and your marketing colleagues use the same words. Misaligned terminology at the start of an implementation is one of the most reliable ways to produce misaligned documentation at the end. It is also a very effective way to ensure a painful audit.
Practically speaking: read Clause 3. Then read the relevant sections of ISO/IEC 22989. Build an internal glossary that maps these defined terms to how your organization will use them. Make it a living document. You will thank yourself later, or at least you will thank yourself less bitterly than if you hadn’t done it.
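A minimal sketch of what such a glossary mapping might look like in structured form — the entries and the "internal_usage" wording here are hypothetical, standing in for whatever your organization actually decides:

```python
# Internal glossary: each defined term maps to its source standard and to
# how this (hypothetical) organization interprets it in practice.
GLOSSARY = {
    "AI system": {
        "source": "ISO/IEC 22989",
        "standard_sense": "machine-based system producing outputs that influence environments",
        "internal_usage": "any model or pipeline registered in the model inventory",
    },
    "AI subject": {
        "source": "ISO/IEC 42001",
        "standard_sense": "natural person directly or indirectly affected by an AI system's output",
        "internal_usage": "includes loan applicants and screened candidates, not just end users",
    },
}

def lookup(term: str) -> dict:
    """Case-insensitive lookup so 'ai subject' and 'AI Subject' resolve alike."""
    normalized = {key.lower(): value for key, value in GLOSSARY.items()}
    return normalized[term.lower()]
```

Whether you keep this in a wiki, a spreadsheet, or version control matters less than keeping it current — the "living document" part is the part people skip.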
Up Next
Having now established that we know what an AI system is, who an AI subject is, and that we need three standards to achieve basic literacy in this framework, we can proceed to the substance. Clause 4 — Context of the Organization — is where the standard begins to make real demands: understanding your organization’s internal and external context, identifying interested parties, determining scope, and establishing the foundations of your AI management system. It is, to use the technical term, where things get complicated. I’ll see you there.
Work with Red Hen Admin
Ready to put this into practice?
Whether you need an independent quality system audit or hands-on QMS consulting, Red Hen Admin can help — remote and on-site in Southern California.