Leadership — Top Management Must Lead (Apparently This Needs Saying)

Welcome to Clause 5. If you’ve been following this series, you’ve survived Scope, Normative References, Terms and Definitions, and the sprawling context-mapping exercise of Clause 4. You are, in other words, a person of admirable stamina. And your reward — your prize — is a clause about leadership. Specifically, a clause requiring that the people at the top of your organization demonstrate that they are, in fact, leading.

I’ll give you a moment with that.

In all seriousness: Clause 5 is where ISO 42001 follows the well-worn path of every modern management system standard and asks top management to formally commit to the thing they presumably already believe in, or at least will believe in once Legal has explained the certification implications. It’s three subclauses. They’re not surprising. They are, however, required — and one of them is more interesting than it looks.

Subclause 5.1: Leadership and Commitment

Subclause 5.1 contains the phrase that has appeared in so many ISO standards it has essentially become wallpaper: “top management shall demonstrate leadership and commitment.” If you’ve implemented ISO 9001 or ISO 27001 or ISO 14001 or any of their cousins, you have read this sentence before. Possibly many times. Possibly in your sleep.

What it actually requires, under ISO 42001, is a list of concrete behaviours from senior leadership:

  • Ensuring the AI policy and AI objectives are established and aligned with the organization’s strategic direction
  • Ensuring that AIMS requirements are integrated into the organization’s business processes — not bolted on afterward like a compliance sticker
  • Ensuring that the resources needed for the AIMS are available
  • Communicating the importance of effective AI management, including conformance to requirements
  • Ensuring the AIMS achieves its intended outcomes
  • Directing and supporting staff to contribute to the AIMS’s effectiveness
  • Promoting continual improvement
  • Supporting other relevant managers to demonstrate their leadership in their areas

In practice, most of this translates to: get a senior person visibly involved, give them authority, and make sure there’s budget. ISO is not naïve about how organizations work. The standard knows perfectly well that if the CISO or the Chief AI Officer has no resources and no access to the C-suite, no amount of policy documentation will make the AI management system function. Clause 5.1 is ISO’s attempt to make leadership accountability explicit rather than assumed.

Is it sufficient? That depends entirely on whether your top management treats this as a genuine commitment or an annual checkbox. ISO 42001 cannot solve culture. It can only require that somebody at the top signs their name to a framework and is then, theoretically, accountable for whether it works.

Subclause 5.2: AI Policy

This is where things get slightly more interesting, or at least slightly more specific. Subclause 5.2 requires top management to establish an AI policy — a documented, communicated, and maintained statement of the organization’s commitments with respect to its AI systems.

The policy must:

  • Be appropriate to the organization’s purpose and context (the context you mapped in Clause 4, which you will recall was not optional)
  • Provide a framework for setting AI objectives
  • Include a commitment to satisfying applicable requirements
  • Include a commitment to continual improvement of the AIMS
  • Be available as documented information (i.e., written down, versioned, accessible)
  • Be communicated within the organization
  • Be available to interested parties as appropriate
  • Be reviewed at appropriate intervals

The first seven items on that list are entirely standard Annex SL boilerplate. If you have an information security policy under ISO 27001, you have already written most of an AI policy — structurally speaking. Swap a few nouns, add your AI-specific commitments, get a senior signature, done.

What’s worth pausing on is that last point: reviewed at appropriate intervals. In the context of AI, “appropriate” is doing a lot of work. AI systems change. The regulatory environment around AI is changing rapidly — the EU AI Act, various national frameworks, sector-specific requirements — and an AI policy written in 2024 may be materially incomplete by 2026. The standard wisely declines to specify a review cadence (annual? quarterly? whenever a large language model makes headlines?), leaving that judgment to the organization. This is either refreshing flexibility or unhelpful vagueness, depending on your temperament.
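Whatever cadence you pick, the one thing you can do is make it mechanical rather than aspirational. A trivial sketch, in Python, assuming an organization that has chosen an annual cadence (the interval and field names here are illustrative, not anything ISO 42001 prescribes):

```python
# Illustrative only: operationalizing a chosen policy-review cadence.
# The 365-day interval is this hypothetical organization's choice,
# not a requirement of ISO 42001.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # annual, by organizational decision

def review_overdue(last_reviewed: date, today: date) -> bool:
    """True if the AI policy is past its chosen review interval."""
    return today - last_reviewed > REVIEW_INTERVAL

# A policy last reviewed in March 2024, checked in June 2025: overdue.
print(review_overdue(date(2024, 3, 1), date(2025, 6, 1)))  # -> True
```

The point is not the arithmetic; it is that "appropriate intervals" becomes auditable the moment you write the interval down and check against it, instead of rediscovering the policy during the surveillance audit.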

The available to interested parties provision is also worth noting. In ISO 27001, the information security policy is typically an internal document, shared externally only when required by contracts or audits. For AI, the concept of “interested parties” established back in Clause 4 includes potentially affected communities — people who interact with or are subject to your AI systems. The standard doesn’t mandate that you publish your AI policy on your website, but the expectation of external transparency is noticeably broader than in most management system standards. Something to discuss with your communications team before your auditor asks about it.

Subclause 5.3: Organizational Roles, Responsibilities, and Authorities

Subclause 5.3 requires top management to ensure that roles and responsibilities for the AIMS are assigned, documented, and communicated. Two specific assignments are non-negotiable:

  • Someone must be responsible for ensuring the AIMS conforms to ISO 42001’s requirements
  • Someone must be responsible for reporting on AIMS performance to top management

These are the same two assignments required by every Annex SL standard. Your ISO 27001 has an ISMS lead; your ISO 9001 has someone playing the quality-lead role (the 2015 revision dropped the formal “management representative” title, but the function survives). ISO 42001 needs an AI management system lead. The novelty here is limited.

What is genuinely new is the implicit scope of “roles related to AI systems.” Unlike ISO 27001, where the relevant roles are primarily security and IT, ISO 42001 touches a much wider cast of characters: data scientists, product managers, AI ethicists, procurement teams buying third-party models, legal counsel reviewing AI outputs, customer-facing staff deploying AI tools. Clause 5.3 doesn’t enumerate all of these, but it does require that responsibilities and authorities be defined and communicated across everyone involved in the AIMS. For most organizations, this means building a RACI matrix for AI governance that didn’t previously exist — and discovering, in the process, that everyone assumed someone else was responsible for the things nobody was doing.
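That discovery is easy to force early. A minimal sketch of the idea in Python — the activities and role names below are hypothetical examples, not anything enumerated by the standard:

```python
# Illustrative only: a minimal RACI completeness check for AI governance.
# Activities and role names are made up for the example; ISO 42001 does
# not prescribe this structure.

raci = {
    "Maintain AI policy":          {"A": "Chief AI Officer", "R": "AIMS lead"},
    "Report AIMS performance":     {"A": "Chief AI Officer", "R": "AIMS lead"},
    "Review third-party models":   {"A": "Procurement lead", "R": "Data science"},
    "Monitor deployed AI systems": {"A": None,               "R": None},  # the gap
}

def unowned(matrix):
    """Return activities missing an Accountable or Responsible party."""
    return [activity for activity, roles in matrix.items()
            if not roles.get("A") or not roles.get("R")]

print(unowned(raci))  # -> ['Monitor deployed AI systems']
```

Running the check before the auditor does is the whole exercise: the blank cells are precisely the things everyone assumed someone else owned.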

What’s New — Or What Looks Familiar But Isn’t Quite

Clause 5, taken as a whole, is Annex SL with a coat of AI-specific paint. If your organization has implemented any modern ISO management system standard, the structural requirements here will feel routine. The AI policy looks like every other policy. The leadership commitment language is identical. The roles-and-responsibilities subclause follows the same pattern it always has.

What makes ISO 42001’s Clause 5 different in practice — not on paper — is context. A quality policy lives in a relatively stable environment. An information security policy evolves with the threat landscape. An AI policy must account for a technology category that is changing faster than the standards that govern it, with ethical and societal implications that extend well beyond system reliability or data confidentiality. The organizations that treat Clause 5 as a documentation exercise will produce compliant paperwork. The organizations that treat it as a genuine governance commitment will produce something more durable.

ISO cannot make that choice for you. It can only require that someone, at the top of your organization, is formally accountable for which path you take.

What You Actually Need to Do

Concretely: you need a written AI policy, signed off by top management, that makes your organization’s commitments explicit. You need someone assigned — by name or role — to own the AIMS and report on its performance. You need your leadership team to be able to articulate, under audit conditions, how they are actively supporting the AI management system rather than simply being aware it exists.

None of this is complicated. Much of it, if you’ve already invested in ISO 27001 or ISO 9001, you’ve effectively done before. The gap, for most organizations, is the AI-specific content: what does responsible AI mean for your organization, in your context, given the AI systems you actually operate? That’s the question the AI policy needs to answer — and it’s one that no amount of Annex SL boilerplate can answer for you.

Next up: Clause 6 — Planning. Which is where ISO 42001 asks you to think systematically about risks and opportunities before anything has gone wrong. A sensible idea, executed with the full weight of ISO bureaucratic precision. We’ll discuss.


Work with Red Hen Admin

Ready to put this into practice?

Whether you need an independent quality system audit or hands-on QMS consulting, Red Hen Admin can help — remote and on-site in Southern California.

Schedule an Audit →
View Services →
