Welcome back, dear reader, to our stately procession through ISO/IEC 42001:2023. We have thus far surveyed scope, purchased several additional standards, agreed on terms, discovered where we work, asked management to lead, planned, and resourced. It has been, in every respect, a triumph of paperwork.
Which brings us — inevitably, and with a faint pang of dread — to Clause 8. The one about doing the thing.
Yes. At some point, a standard about AI management must glance up from its risk registers and notice that there is, in fact, an AI system somewhere in the building. Clause 8, with the bracing minimalism of all good ISO clauses, is titled simply Operation. It is where the carefully formatted documents you produced in Clauses 4 through 7 are, at long last, asked to touch reality. Reality, it should be noted, does not reciprocate their enthusiasm for bullet points.
What This Clause Actually Says
Clause 8 contains four subclauses. (You may have read elsewhere that it contains six. It does not. I cannot account for this rumour and will not be drawn into a conversation about it.) Each subclause exists to convert the governance theatre of earlier clauses into evidence that something is, in the most literal sense, being done.
8.1 — Operational Planning and Control
This is the subclause that asks you, ever so politely, to actually run your AI management system in a controlled manner. You must plan, implement, and control the processes needed to meet the requirements of your AIMS, and to implement the actions determined in Clause 6 — which, you will recall with a mild cringe, included your risk treatment plan and your objectives.
Specifically, you are required to establish criteria for those processes (think performance thresholds, bias tolerances, acceptance tests, drift limits), control the processes in accordance with those criteria, and maintain documented information sufficient to demonstrate that the processes have been carried out as planned. You must also control planned changes — because, as anyone who has deployed a model into production at 4:45pm on a Friday will attest, unplanned changes are seldom an improvement.
And — this bit warrants underlining — externally provided processes, products, and services relevant to the AIMS must be controlled. That third-party model behind your cheerful chatbot does not get a pass simply because it arrived via API. If it materially contributes to the AI system, it falls inside your management system, whether you like it or not. Spoiler: you will not.
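If it helps to see what "criteria, control, and documented information" might reduce to once the meeting ends, here is a minimal sketch in Python. The metric names, thresholds, and the release_gate function are illustrative assumptions of mine; the standard prescribes no numbers, and yours should come from your own acceptance criteria and risk treatment plan, not from a blog post.

```python
# Illustrative only: metric names and thresholds are invented for this sketch.
# ISO/IEC 42001 prescribes no specific metrics or numbers.
import json
from datetime import datetime, timezone

ACCEPTANCE_CRITERIA = {
    "accuracy_min": 0.92,                 # performance threshold
    "demographic_parity_gap_max": 0.05,   # bias tolerance
    "psi_drift_max": 0.20,                # data drift limit
}

def release_gate(metrics: dict, criteria: dict = ACCEPTANCE_CRITERIA) -> dict:
    """Check a candidate model's metrics against agreed criteria and produce
    a record suitable for retention as documented information under 8.1."""
    checks = {
        "performance": metrics["accuracy"] >= criteria["accuracy_min"],
        "fairness": metrics["demographic_parity_gap"] <= criteria["demographic_parity_gap_max"],
        "drift": metrics["psi_drift"] <= criteria["psi_drift_max"],
    }
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
        "criteria": criteria,
        "checks": checks,
        "approved": all(checks.values()),
    }

if __name__ == "__main__":
    candidate = {"accuracy": 0.94, "demographic_parity_gap": 0.03, "psi_drift": 0.11}
    print(json.dumps(release_gate(candidate), indent=2))
```

The point is less the numbers than the record: an auditor wants to see that the criteria existed before the release, and that the release was decided against them.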
8.2 — AI Risk Assessment
You may remember, fondly, that in Clause 6.1.2 you established a process for identifying and analysing AI risks. Clause 8.2 is the part where you perform that process. At planned intervals. And when significant changes occur or are proposed. And you retain documented information of the results.
In other words: the risk assessment is not a one-time pilgrimage you completed during certification prep. It is a recurring activity. Much like brushing one’s teeth, although considerably less popular.
The objective is to have, at any given moment, a current picture of the AI risks your organisation faces — technical risks, governance risks, risks to the individuals and groups affected by the AI, and yes, risks to the organisation itself. All of this reassessed whenever the landscape shifts, which, given the industry we find ourselves in, is roughly every Tuesday.
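For the mechanically minded, "at planned intervals and when significant changes occur" reduces to something like the sketch below. The 90-day interval and the list of triggering changes are assumptions invented for illustration; the standard leaves both choices, rather pointedly, to you.

```python
# Sketch only: the interval and the change triggers are illustrative choices,
# not requirements of ISO/IEC 42001.
from dataclasses import dataclass, field
from datetime import date, timedelta

SIGNIFICANT_CHANGES = {
    "new_training_data",
    "model_architecture_change",
    "new_deployment_context",
    "regulatory_change",
}

@dataclass
class RiskAssessmentSchedule:
    last_performed: date
    interval: timedelta = timedelta(days=90)
    pending_changes: set = field(default_factory=set)

    def is_due(self, today: date) -> bool:
        interval_elapsed = today - self.last_performed >= self.interval
        significant_change = bool(self.pending_changes & SIGNIFICANT_CHANGES)
        return interval_elapsed or significant_change

schedule = RiskAssessmentSchedule(
    last_performed=date(2024, 1, 15),
    pending_changes={"new_training_data"},
)
print(schedule.is_due(date(2024, 2, 1)))  # True: a significant change is pending
```

Whether a given change counts as significant is a judgement the standard expects you to have made in advance, not at the moment the pipeline breaks.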
8.3 — AI Risk Treatment
Having re-identified the risks in 8.2, you are now required to implement the AI risk treatment plan you lovingly prepared in Clause 6.1.3. Controls must be applied. Their application must be documented. And — this part tends to come as a surprise — you must retain documented information of the results of the treatment.
Not merely that you implemented the control. That you implemented it, and that it did what it was supposed to do. A distinction which, you will discover over time, separates organisations with functioning management systems from organisations with impressive binders.
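The distinction is easier to show than to describe. A hypothetical treatment record, with field names invented for this sketch, separates the two facts explicitly:

```python
# Illustrative structure only; the field names are assumptions, not ISO terminology.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TreatmentRecord:
    risk_id: str
    control: str
    implemented_on: str                    # evidence that the control exists
    effectiveness_check: str               # how you verified it actually works
    residual_risk_acceptable: Optional[bool] = None  # the result, not just the act

record = TreatmentRecord(
    risk_id="R-017",
    control="Output filtering on the customer-facing chatbot",
    implemented_on="2024-03-04",
    effectiveness_check="Replay of 500 known-bad prompts after deployment; zero leaks",
    residual_risk_acceptable=True,
)
print(asdict(record))
```

One field says the control exists; two more say it was checked and found to work. Binders tend to contain only the first.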
8.4 — AI System Impact Assessment
And here we come to the subclause that makes Clause 8 genuinely novel. Also the subclause most likely to induce a long, contemplative silence in the meeting where it is first read aloud.
An AI system impact assessment — to be performed, documented, and retained — evaluates the potential consequences of the AI system on individuals, groups of individuals, and societies. Not the organisation. Not the shareholders. The actual humans at the other end of the model’s output. Fairness. Human rights. Psychological well-being. Environmental effects, where relevant. The assessment is to be conducted at planned intervals, and again when significant changes occur.
This is not a cybersecurity risk assessment with the words swapped around. It is a fundamentally different exercise, asking a fundamentally different question. “What could this system do to people?” is, I’ll grant you, a less comfortable conversation than “What could go wrong for us?” It is also considerably more important, and the standard quietly insists on both.
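For organisations building the function from nothing, a starting shape can be as plain as the sketch below. The dimensions echo the ones the clause names; the identifiers, the severity scale, and the assessor field are assumptions of mine rather than ISO furniture.

```python
# Sketch of an AI system impact assessment record. The severity scale and field
# names are illustrative assumptions; ISO/IEC 42001 does not prescribe a format.
from dataclasses import dataclass, field

@dataclass
class ImpactFinding:
    affected_party: str   # individuals, groups of individuals, or society at large
    dimension: str        # e.g. fairness, human rights, psychological well-being, environment
    description: str
    severity: int         # 1 (negligible) to 5 (severe), an assumed scale
    mitigation: str

@dataclass
class ImpactAssessment:
    system: str
    assessed_on: str
    assessor: str
    findings: list = field(default_factory=list)
    reassess_on: str = ""  # the planned interval, or sooner if a significant change lands

ia = ImpactAssessment(
    system="loan-eligibility-scorer",
    assessed_on="2024-05-20",
    assessor="AI governance board",
    reassess_on="2024-11-20",
)
ia.findings.append(ImpactFinding(
    affected_party="applicants in underrepresented postcodes",
    dimension="fairness",
    description="Approval rates diverge for otherwise identical income profiles",
    severity=4,
    mitigation="Reweight training data; monitor approval-rate parity monthly",
))
print(f"{len(ia.findings)} finding(s) recorded for {ia.system}")
```

The format matters far less than the habit the clause is after: a written record, attributable to someone identifiable, and revisited when the system changes.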
What It Means in Practice
Clause 8 is where the AIMS becomes an operating system rather than a shelf ornament. To satisfy it, an organisation typically needs: defined acceptance criteria for models (performance, robustness, fairness, and so on); change control procedures that apply to datasets, training pipelines, and deployed systems; supplier management arrangements that extend to AI providers; a recurring risk assessment cadence; evidence that risk treatments are applied and effective; and a process for assessing impacts on people and society, with outputs that actually influence design decisions.
In other words: pipelines, procedures, evidence, and meetings at which someone sincerely asks whether the model might cause harm and is not immediately escorted from the room.
What Changed, What’s New, and What You Will Have to Build From Scratch
ISO 42001 is a first-edition standard, so the comparison point is not “an earlier version of itself” — it is “other management system standards you’ve already suffered through.” Against that backdrop, Clause 8 is both comfortingly familiar and pointedly unusual.
Familiar, because the structure mirrors ISO 27001 Clause 8 almost exactly: operational planning and control, a recurring risk assessment, and the implementation of a risk treatment plan. If you have a working ISO 27001 operation function, the mechanics of 8.1 through 8.3 will feel like putting on a coat you already own.
Unusual, because 8.4 has no direct equivalent in any other management system standard in current circulation. ISO 27001 protects the organisation’s information. ISO 9001 protects the organisation’s product quality. ISO 42001, alone among its siblings, requires the organisation to formally assess harm to people who are not the organisation. This is a meaningful departure, and it means most organisations will have to build the impact assessment function from scratch — methodology, criteria, documentation templates, trained personnel, the works. You cannot simply re-skin your existing risk assessment and claim victory. The auditor will notice. The auditor always notices.
A second novelty: the scope of "externally provided" in 8.1 now sweeps in third-party AI services, including foundation models accessed via API. Your governance obligations do not stop at your firewall. This will require contractual language, supplier questionnaires, and, I suspect, a number of awkward conversations with vendors who believed their terms of service were sufficient. They were not.
Art Meta’s Editorial Note
Clause 8 is, for all its bureaucratic scaffolding, the first clause in ISO 42001 that seriously confronts the question the rest of the standard has been tactfully circling: what if the AI system actually does something. Subclause 8.4 contains the most quietly ambitious requirement in the entire standard. It asks the organisation to consider its downstream effect on human beings and then to write that consideration down, sign it, and revisit it. Which is, I admit, either a deeply sensible idea or the most earnest thing ever to appear in an ISO document, depending on one’s mood.
Up next in our series: Clause 9, Performance Evaluation — which is exactly what it sounds like, and no less exciting. Monitoring, measurement, internal audit, and the annual ritual known as management review, in which the documents produced throughout Clauses 4 to 8 are presented to the same leaders who commissioned them. Bring snacks.
Until then — do try to operate the system. The standard really is quite insistent about it.