Yes, This Applies to You

Welcome to the first installment of what I have grandly titled The ISO 42001 Annotated Companion for People Who Will Probably Skim It. Today we cover Clause 1: Scope. This is, as the name suggests, the part that tells you what the standard is for, who it applies to, and — most critically — whether you can politely inform your compliance officer that it doesn’t apply to your organisation.

Spoiler: it probably applies to your organisation.

What Is ISO/IEC 42001:2023, Briefly?

ISO/IEC 42001:2023 is the world’s first international standard for AI Management Systems, published in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). If you are familiar with ISO 9001 (quality management) or ISO 27001 (information security), the basic architecture will be recognisable: it’s a Plan-Do-Check-Act framework for systematically managing a domain of organisational risk and activity. In this case, that domain is artificial intelligence.

Now. Clause 1.

What Clause 1 Says (In Its Own Words, More or Less)

Clause 1 establishes the purpose and applicability of the standard. It states that ISO 42001 specifies requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) within the context of an organisation. The objective: responsible development and use of AI systems.

The clause applies the standard to any organisation — regardless of type, size, or sector — that provides or uses AI systems. Full stop. No exceptions for being small, for being in an industry that hasn’t caught up yet, or for having a very sincere feeling that your AI use is minimal and harmless.

The standard distinguishes between two primary organisational roles:

  • Provider (or Developer): An organisation that develops AI systems — training models, building products, deploying systems for others to use.
  • Deployer (or User): An organisation that uses AI systems in its own operations or integrates them into products and services offered to others.

An organisation may occupy both roles simultaneously — which, if you think about it for a moment, is exactly the kind of situation most mid-to-large organisations find themselves in today. You’re using someone’s cloud AI service (deployer) while also having a team quietly building your own internal AI tools (provider). Congratulations: you have compliance obligations on both fronts.
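The scoping logic above is simple enough to sketch. Purely as an illustration (every class, field, and function name here is invented for this post, not drawn from the standard): an organisation's roles are determined independently, both can hold at once, and holding either one is sufficient to put you in scope.

```python
from dataclasses import dataclass

@dataclass
class Organisation:
    """Invented, minimal model of an organisation's AI footprint."""
    develops_ai_systems: bool  # builds/trains models or AI products
    uses_ai_systems: bool      # runs AI in operations or offerings

def aims_roles(org: Organisation) -> set[str]:
    """Return which Clause 1 roles apply; an org can hold both."""
    roles = set()
    if org.develops_ai_systems:
        roles.add("provider")
    if org.uses_ai_systems:
        roles.add("deployer")
    return roles

def in_scope(org: Organisation) -> bool:
    """Clause 1 applicability is flat: any role puts you in scope."""
    return bool(aims_roles(org))

# The typical mid-to-large organisation: consuming a cloud AI service
# while an internal team builds its own tools -> both roles apply.
typical_org = Organisation(develops_ai_systems=True, uses_ai_systems=True)
print(aims_roles(typical_org))  # both 'provider' and 'deployer'
print(in_scope(typical_org))    # True
```

Note what is absent from `in_scope`: there is no parameter for sector, headcount, or how harmless you feel your AI use is. That is the whole point of Clause 1.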

What It Means in Practice

Clause 1 is primarily definitional, so its practical weight comes from what it forces you to acknowledge.

First: the standard has no industry carve-outs. This is not a regulation aimed at autonomous vehicles, or medical AI, or financial risk models specifically. It is a horizontal standard — designed to sit across all sectors. Which means that “we’re just a manufacturing company, AI doesn’t really apply to us” is, at best, a conversation you should have had before deploying that predictive maintenance algorithm.

Second: the scope covers AI systems broadly. The standard doesn’t restrict itself to narrow AI, generative AI, or any particular technology. If your organisation provides or uses AI systems — and the definition of “AI system” (addressed more thoroughly in Clause 3, which we’ll get to in due course) is deliberately broad — you are in scope.

Third: the standard is compatible with other management system standards. Clause 1 explicitly notes that ISO 42001 can be integrated with other ISO management systems. If you’re already running ISO 27001 or ISO 9001, you are not starting from scratch. You are, however, starting from something — which is not nothing. The Annex SL high-level structure means the framework will be familiar, even if the subject matter is decidedly less familiar than a password rotation policy.

Fourth: the standard is suitable for conformity assessment. That means third-party certification is possible. Auditors can come to your organisation, examine your AIMS, and issue a certificate of conformity to ISO 42001. Whether your customers, regulators, or insurers will start requiring that certificate is a question currently being answered in real time, across multiple industries, in multiple jurisdictions. Watch that space carefully, because it is moving.

The Affected-Parties Dimension

One element of Clause 1 that warrants particular attention: the scope explicitly frames the standard as being concerned not just with organisational efficiency or risk management, but with the responsible treatment of those affected by AI systems. This is not decorative language.

It signals, early and unambiguously, that an AIMS is not simply an internal governance exercise. The standard expects organisations to consider the humans on the receiving end of automated decisions — employees, customers, third parties, society at large. This framing sets up requirements you’ll encounter at length in Clauses 6 and 8, around impact assessments and AI-specific risk management.

If you were hoping this was purely a documentation exercise, Clause 1 is your first indication that it is not.

What’s New — And What Was Genuinely Surprising

Since ISO 42001 is a first-edition standard, there was no predecessor to revise. There are no “changes from the 2017 version” to catalogue. This is genuinely new territory, and Clause 1 reflects a deliberate choice about how to approach it.

Practitioners who tracked the drafts will note that the final scope clause is notably broader than some earlier drafts suggested. Initial working group discussions explored whether the standard should focus only on AI providers — the organisations actually building systems. The decision to explicitly include deployers was consequential: it means organisations that never write a line of model code are nonetheless within scope, simply by virtue of using AI. Given that this describes, conservatively, most large organisations operating today, the practical universe of applicable entities is enormous.

Compared to the EU AI Act — which is risk-tiered and applies different obligations to different classes of AI application — ISO 42001’s scope is refreshingly (or alarmingly, depending on your disposition) flat. Everyone who touches AI is in. The risk-tiering happens within the standard, in Clauses 6 and 8, not at the scoping stage. You cannot argue your way out of applicability; you can only argue about what level of effort your particular AI context requires.

For organisations already holding ISO 27001 certificates: the Annex SL compatibility is genuine, and the structural overlap is substantial. The AI-specific requirements are layered on top of a familiar skeleton. This is either reassuring or slightly unnerving, depending on how you feel about the state of your existing management system.

A Brief Note on Self-Declaration vs. Certification

Clause 1 notes that ISO 42001 can be used for self-assessment, first-party declarations, second-party assessments, or third-party certification. None of these is mandated by the standard itself. External certification is optional — unless, of course, your contracts, regulators, or insurers make it otherwise. Given current trajectories in AI regulation globally, I would not bet heavily on “optional” remaining the operative word indefinitely.

Closing Thought

Clause 1 is not where the hard work lives. It is a throat-clearing exercise: a document saying, with ISO-standard politeness, that this standard exists, this is what it covers, and yes — yes — it applies to your organisation. The important word in “AI Management System” is not AI. It is management. This is a standard about governance, process, and accountability. The AI part is simply the domain those things are now required to address.

Next up: Clause 2 — Normative References. This is the section that politely informs you there are several other documents you are also expected to own and understand. It is, I promise, exactly as thrilling as it sounds. I will see you there.
