Welcome back. You’ve made it to Clause 4, which means you have survived the Scope (Clause 1), the Normative References — which were essentially a reading list with licensing fees (Clause 2) — and the Terms and Definitions, which introduced you to the idea that “AI system” requires a formal definition, apparently because some organizations needed to be told (Clause 3). You are now, to use a technical term, ready to do some actual work.
Clause 4 is called “Context of the Organization,” and before you let your eyes glaze over — it’s a phrase that has appeared in every ISO management system standard since the Annex SL harmonized structure was adopted, starting with ISO 27001:2013 and the 2015 revisions of ISO 9001 and ISO 14001 — I would encourage you to pay attention this time. Because while in most management system standards “context” means something like “know what industry you’re in,” in ISO 42001 it means something substantially more interesting, and considerably harder to hand-wave.
4.1 — Understanding the Organization and Its Context
The requirement, at its surface, is familiar: the organization shall determine external and internal issues relevant to its purpose and strategic direction that affect its ability to achieve the intended outcomes of its AI management system. Standard ISO boilerplate. You’ve seen it before. You probably have a SWOT diagram from a previous certification effort gathering digital dust somewhere on your SharePoint.
Here, however, the standard is genuinely asking you to think about something more specific: the nature of your AI systems and how your operational context shapes their risks and impacts. External context includes the regulatory environment (and yes, that is changing, rapidly, in ways that would make any thoughtful person mildly anxious), the technological landscape, societal expectations, and — notably — the communities that might be affected by how you deploy AI. Internal context includes your organizational culture, your existing governance structures, your data practices, and your capacity to manage something as amorphous as “algorithmic accountability.”
The standard is nudging you to ask: do you actually understand what your AI systems do, to whom, and in what environment? If your honest answer is “sort of,” you are in good company, but you are also now required to do better.
4.2 — Understanding the Needs and Expectations of Interested Parties
This subclause is where ISO 42001 quietly distinguishes itself from its siblings, and where the standard’s drafters deserve a modest amount of credit for intellectual honesty.
In ISO 9001, “interested parties” largely means customers, regulators, and the occasional supplier. In ISO 27001, you’re mostly thinking about people who might want to breach your systems or hold you to data protection obligations. Useful, if somewhat contained.
ISO 42001 casts the net substantially wider. You are expected to identify parties whose needs and expectations are relevant to your AIMS — and this explicitly includes people affected by the outputs of your AI systems who are not your customers, not your employees, and possibly not even aware that your AI system exists. The job applicant filtered out by your automated screening tool. The loan applicant scored by your credit model. The pedestrian in a city that uses your computer vision platform.
This is not merely a philosophical expansion. It has practical implications for how you define your scope, how you design your AI impact assessments, and how you demonstrate that your management system is actually managing something beyond your own operational convenience. ISO 42001 is asking you to acknowledge that AI governance is not just an internal housekeeping exercise — it has externalities, and those externalities are your problem now.
You will also want to capture requirements from regulators (the EU AI Act being the rather large elephant in the room), industry bodies, contractual obligations, and your own internal policies. Keep a register. Update it. This is not optional.
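What does “keep a register” actually look like? A minimal sketch, in Python purely for illustration — the field names, the example entry, and the one-year review threshold are my assumptions, not anything ISO 42001 prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical register entry. Field names are illustrative only;
# ISO 42001 requires the register's substance, not this shape.
@dataclass
class Requirement:
    party: str                # who holds the expectation (regulator, customer, affected community)
    requirement: str          # what they need or expect of your AIMS
    source: str               # e.g. "EU AI Act", "contract", "internal policy"
    applies_to: list[str] = field(default_factory=list)      # AI systems affected
    last_reviewed: date = field(default_factory=date.today)  # "Update it" is part of the record

register = [
    Requirement(
        party="Job applicants",
        requirement="Explanation and appeal route for automated screening decisions",
        source="EU AI Act / internal fairness policy",
        applies_to=["cv-screening-v2"],
    ),
]

# Entries untouched for over a year surface for review.
stale = [r for r in register if (date.today() - r.last_reviewed).days > 365]
```

The design point is the `last_reviewed` field: a register you cannot query for staleness is a register you will not, in practice, update.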
4.3 — Determining the Scope of the AI Management System
Having done all that contextual thinking, you must now make a decision: what is actually in scope for your AIMS?
This sounds simple. It is not. ISO 42001 requires you to consider the external and internal issues from 4.1, the requirements of interested parties from 4.2, and — critically — which specific AI systems are within scope, along with your organization’s role with respect to those systems.
That last point is worth dwelling on. ISO 42001 requires you to be explicit about a distinction you may recognize from the EU AI Act: that between AI system providers (organizations that develop or make AI systems available for others to use) and AI system deployers (organizations that put AI systems into operation for their own purposes or their customers’). You may be one, the other, or both simultaneously. Your scope must be honest about this.
Why does this matter? Because the obligations, risk profiles, and control requirements differ depending on your role. A company building an AI system for sale has different governance responsibilities than a company integrating a third-party AI tool into its HR processes. Conflating the two, or conveniently pretending you’re only one when you’re actually both, will produce a scope document that is technically tidy and practically useless.
The scope must be documented. It must be available as documented information. And it must reflect reality — which is perhaps the most demanding requirement of all.
4.4 — AI Management System
Subclause 4.4 is the shortest in this clause and, in a sense, the most consequential. It states, in the magisterially plain language ISO reserves for load-bearing requirements, that the organization shall establish, implement, maintain, and continually improve an AI management system in accordance with the requirements of this document.
There it is. That sentence is the foundation upon which your entire AIMS rests. Everything else in the standard — the risk assessments, the policies, the objectives, the audits — exists to give that commitment substance. Clause 4.4 is ISO’s way of saying: you have decided to do this properly. Now prove it.
What’s New: How Clause 4 Differs From What You’re Used To
If you’re arriving from ISO 27001 or ISO 9001, Clause 4 will feel familiar in structure and somewhat vertiginous in scope. A few things genuinely stand out.
The AI system inventory is not optional. Unlike most management system standards, where the context clause is primarily about understanding your operating environment in the abstract, ISO 42001 ties context directly to specific AI systems. You need to know what AI you have, what it does, and where it fits in the provider/deployer taxonomy. This is a cataloging exercise that most organizations have not completed — and that many, if pressed, would prefer to avoid.
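To make the cataloging exercise concrete, here is a minimal sketch of what an inventory entry might record — the system names, purposes, and `Role` enum are invented for illustration; the standard asks for the substance, not this particular structure:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative roles only; record your organization's role per system,
# whatever taxonomy your scope document adopts.
class Role(Enum):
    PROVIDER = "provider"   # you develop or make the system available to others
    DEPLOYER = "deployer"   # you put a (possibly third-party) system into operation
    BOTH = "both"

@dataclass(frozen=True)
class AISystem:
    name: str
    purpose: str
    role: Role
    affected_parties: tuple[str, ...]   # including non-customers, per 4.2

inventory = [
    AISystem("cv-screening-v2", "rank inbound job applications",
             Role.DEPLOYER, ("job applicants",)),
    AISystem("vision-sdk", "pedestrian detection sold to municipalities",
             Role.PROVIDER, ("pedestrians", "municipal customers")),
]

# The 4.3 question, asked of the data: which systems carry provider obligations?
provider_scope = [s.name for s in inventory if s.role in (Role.PROVIDER, Role.BOTH)]
```

An inventory in this form makes the “both simultaneously” problem visible: a single query tells you which obligations attach to which systems, rather than leaving the answer to whoever drafted the scope statement.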
“Affected communities” is a real requirement. The breadth of “interested parties” in ISO 42001 goes beyond any previous ISO management system standard. The standard explicitly contemplates that AI impacts extend beyond direct stakeholders. This is not rhetorical — it shapes how you conduct impact assessments and what your AIMS objectives need to address.
There is no previous version to migrate from. ISO 42001 is a first-edition standard. There is no ISO 42001:2019 gap assessment to run. Every organization starting this journey is starting from zero, which means the context clause isn’t about updating your scope statement — it’s about building an entirely new one, from first principles, in an area most organizations do not have mature governance frameworks for. Enjoy.
A Closing Thought
Clause 4 is, in my considered view, where organizations will most frequently underperform — not because it is technically difficult, but because it requires a degree of organizational self-awareness that is genuinely uncomfortable. Admitting that your AI systems affect people you’ve never thought about, that your scope might be larger than you’d like, that your context includes regulatory obligations you haven’t fully mapped — these are not comfortable conclusions to document and sign off on.
And yet. That discomfort is, roughly speaking, the point. An AI management system that has scoped away its hard problems is a filing exercise, not a governance framework. ISO 42001 is at least asking the right questions in Clause 4. What you do with them is, as always, left as an exercise for the reader.
Next time: Clause 5, Leadership. Which is to say, we turn our attention to whether the people at the top of your organization have any meaningful relationship with the AI systems you’ve just finished cataloguing. The answer, historically, has been mixed. We’ll see if ISO can improve the odds.
Work with Red Hen Admin
Ready to put this into practice?
Whether you need an independent quality system audit or hands-on QMS consulting, Red Hen Admin can help — remote and on-site in Southern California.