Support — Resources, Competence, and the Paper Trail You Promised to Maintain

Welcome back. If you have been diligently following along, you have now endured the Scope (Clause 1), the Normative References (Clause 2), the Terms and Definitions (Clause 3), the Context of the Organisation (Clause 4), the Leadership theatre (Clause 5), and the Planning exercise (Clause 6). You are, against the odds, still here. I admire your stamina.

We have arrived at Clause 7 — Support. This is the clause that sounds like filler and is, in fact, where most AI management system implementations quietly collapse under their own weight. If Clause 6 asked whether you had thought about your AI programme, Clause 7 asks whether you have actually resourced it, staffed it with people who know what they are doing, told them why it matters, communicated it to the rest of the organisation, and written any of this down in a form an auditor can find on a Tuesday afternoon. These are, as it turns out, five separate questions. The standard devotes five subclauses to them, and so shall we.

Clause 7.1 — Resources

The organisation must determine and provide the resources needed for the establishment, implementation, maintenance, and continual improvement of the AI management system (AIMS). That is the entire text of 7.1, more or less, and it is more demanding than it looks.

Resources, in the Annex SL sense, mean the usual suspects: people, infrastructure, and budget. In the ISO 42001 sense, they also implicitly include the things AI programmes are perennially short of — computational capacity for model testing, access to representative data, evaluation environments, human review capacity for AI outputs, and specialists in fields like fairness assessment, explainability, and AI risk analysis. None of these are cheap, and most are not obvious line items on an IT budget.

The clause does not prescribe what adequacy looks like, which places a quiet burden on the organisation to make an honest assessment of whether its AI programme is under-resourced. Audit teams will, in practice, form a view on this question by looking at how often AI initiatives stall for lack of basic infrastructure, data access, or human oversight bandwidth. It is difficult to argue you have provided adequate resources when your incident backlog stretches into the middle distance.

Clause 7.2 — Competence

The organisation must determine the necessary competence of persons doing work under its control that affects AI performance, ensure those persons are competent on the basis of appropriate education, training, or experience, take actions to acquire the necessary competence where gaps exist, and retain documented information as evidence of competence.

The word doing considerable lifting here is determine. Before you can ensure anyone is competent, you must first have written down what competence looks like for each role in the AI programme. Which is to say: you need role descriptions that specify the knowledge, skills, and experience required to, say, review a model’s fairness evaluation, sign off a deployment, or field a complaint from an affected person. Most organisations do not have these. Most organisations have job descriptions written by HR three years ago that mention Python.

Competence under Clause 7.2 extends beyond the data science team. It includes anyone whose work affects AI performance — which, depending on how your systems are built, could encompass procurement (selecting model vendors), legal (reviewing AI-related contracts), operations (monitoring deployed systems), and senior managers who make decisions about when and where to use AI. Sorting out who is in scope is itself a non-trivial exercise, and one the standard expects you to have completed.
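
One way to make that scoping exercise concrete is to hold role-based competence criteria as structured records rather than prose buried in old job descriptions. The sketch below is illustrative only — the role names, criteria, and evidence types are hypothetical examples, not text from the standard — but it shows the shape of what Clause 7.2 expects you to have written down, and how a gap check (the "take actions" limb) falls out of it.

```python
from dataclasses import dataclass, field

@dataclass
class CompetenceCriterion:
    """One knowledge, skill, or experience requirement for a role."""
    description: str
    evidence: str  # what an auditor would accept: training record, certificate, appraisal

@dataclass
class RoleProfile:
    """Competence profile for one role whose work affects AI performance (7.2)."""
    role: str
    function: str  # e.g. data science, procurement, legal, operations, leadership
    criteria: list[CompetenceCriterion] = field(default_factory=list)

# Illustrative entry only -- your roles and criteria will differ.
profiles = [
    RoleProfile(
        role="Model deployment approver",
        function="operations",
        criteria=[
            CompetenceCriterion(
                "Can interpret a fairness evaluation report",
                "completed internal fairness-assessment training; record on file",
            ),
            CompetenceCriterion(
                "Understands the escalation path for anomalous model behaviour",
                "signed acknowledgement of the AI incident procedure",
            ),
        ],
    ),
]

def gaps(profile: RoleProfile, evidenced: set[str]) -> list[str]:
    """Criteria a person has not yet evidenced -- the input to 'take actions'."""
    return [c.description for c in profile.criteria if c.description not in evidenced]
```

The point is less the data structure than the discipline: once criteria exist as records, "retain documented information as evidence of competence" becomes a matter of linking evidence to each criterion rather than hoping HR's files say something useful.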

Clause 7.3 — Awareness

Persons doing work under the organisation’s control must be aware of the AI policy, their contribution to the effectiveness of the AIMS (including the benefits of improved performance), and the implications of not conforming with AIMS requirements.

This is the Annex SL three-part awareness triad, and it is cloned almost verbatim from ISO 9001 and ISO 27001. What makes it interesting in the AI context is the second limb — the individual’s contribution to AIMS effectiveness. In a traditional quality management system, contribution is usually framed around following procedures. In an AI management system, it increasingly touches on matters of judgement: when to escalate an unusual model output, when to pause a deployment, when to challenge an automated decision that feels wrong. Awareness training that fails to address these judgement calls is, frankly, decoration.

The implications-of-not-conforming limb is worth dwelling on briefly. In an AI programme, the implications are rarely limited to internal process breaches. They may include harm to affected persons, regulatory exposure, and the sort of reputational damage that outlives the executive team responsible for it. An awareness programme that soft-pedals this is not really awareness; it is comfort.

Clause 7.4 — Communication

The organisation must determine the internal and external communications relevant to the AIMS, including what it will communicate, when, with whom, how, and who communicates. The five-part structure (what, when, with whom, how, who) will be familiar to anyone who has set up a communication plan under any other ISO standard in the last decade.

Where ISO 42001 tightens the screw is in the external communications limb. Affected persons — a term the standard takes seriously — may need to be informed that an AI system is being used in a decision that concerns them, told how to contest or appeal that decision, and given a route for raising concerns. Regulators, in an increasing number of jurisdictions, expect proactive disclosure of certain kinds of AI use. Partners and customers may require transparency commitments in contracts. A communications plan limited to internal newsletters and all-hands slides will not survive contact with any of this.
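
A communication plan that can survive that contact is, structurally, just a table with the five columns the clause names. A minimal sketch, with entirely illustrative rows — what you must actually communicate, and to whom, depends on your jurisdiction, contracts, and systems:

```python
from dataclasses import dataclass

@dataclass
class CommunicationItem:
    """One row of a 7.4 communication plan: what, when, with whom, how, who."""
    what: str
    when: str
    with_whom: str  # internal audience, affected persons, regulator, partner
    how: str
    who: str        # role responsible for communicating

# Hypothetical rows for illustration only.
plan = [
    CommunicationItem(
        what="Notice that an AI system contributes to the decision, plus the appeal route",
        when="at the point of decision",
        with_whom="affected persons",
        how="decision letter / portal notice",
        who="service owner",
    ),
    CommunicationItem(
        what="AI policy changes",
        when="on approval of each revision",
        with_whom="all staff",
        how="intranet announcement and team briefings",
        who="AIMS manager",
    ),
]

# The external limb is the one auditors probe: rows aimed outside the organisation.
external = [item for item in plan if item.with_whom in {"affected persons", "regulator", "partner"}]
```

If that filter comes back empty, your plan is the internal-newsletters-and-all-hands variety, and you already know how that conversation ends.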

Clause 7.5 — Documented Information

Clause 7.5 is the longest subclause in Clause 7 and, I regret to inform you, the one you will spend the most time on. It divides into three parts.

7.5.1 General

The AIMS must include documented information required by the standard itself, plus any documented information the organisation determines is necessary for the effectiveness of the AIMS. The first category is mandatory; the second is where professional judgement enters. The extent of documented information varies with the size and complexity of the organisation, the complexity of processes and their interactions, and the competence of persons. There is no page-count floor. There is also no page-count ceiling, which is the part that frightens people.

7.5.2 Creating and Updating

When creating and updating documented information, the organisation must ensure appropriate identification and description (title, date, author, reference number), appropriate format and media (language, software version, graphics, paper or electronic), and appropriate review and approval for suitability and adequacy. Read carefully: the standard expects you to know not just what the document says, but who wrote it, when, in which version of which tool, and who signed off that it was fit for purpose. Version-controlled repositories are the minimum. Shared folders with files named ai_policy_final_FINAL_v3.docx are not.
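
The 7.5.2 requirements reduce to a metadata checklist per document, which makes them easy to enforce mechanically. A minimal sketch — the field names and reference scheme here are assumptions for illustration, not prescribed by the standard:

```python
REQUIRED_METADATA = {
    # 7.5.2 a) identification and description
    "title", "date", "author", "reference",
    # 7.5.2 b) format and media
    "format", "language",
    # 7.5.2 c) review and approval
    "reviewed_by", "approved_by",
}

def missing_metadata(doc: dict) -> set[str]:
    """Fields a document record still lacks before it satisfies 7.5.2."""
    return REQUIRED_METADATA - {k for k, v in doc.items() if v}

# Hypothetical record: reviewed but not yet approved, so not fit for release.
record = {
    "title": "AI impact assessment - credit scoring model",
    "date": "2024-05-01",
    "author": "j.bloggs",
    "reference": "AIMS-IA-007",  # reference numbering scheme is illustrative
    "format": "docx",
    "language": "en",
    "reviewed_by": "risk lead",
    "approved_by": None,
}
```

A check like this wired into your document repository is cheap insurance: nothing reaches "released" status while the approval field is empty, which is precisely the failure mode the FINAL_v3.docx folder cannot prevent.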

7.5.3 Control of Documented Information

Documented information must be controlled to be available and suitable for use where and when it is needed, and adequately protected from loss of confidentiality, improper use, or loss of integrity. The organisation must address distribution, access, retrieval, use, storage, preservation, legibility, version control, and retention and disposition. Documented information of external origin that the organisation determines is necessary for the AIMS must also be identified and controlled.

In practice, this means your AI model cards, training data documentation, impact assessments, risk registers, training records, incident reports, and vendor documentation all need to live somewhere controlled, versioned, and retrievable — not scattered across three cloud storage providers, two wikis, and one engineer’s laptop.
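
A simple way to keep yourself honest here is an artefact inventory: every class of documented information mapped to exactly one controlled, versioned home, with the gaps surfaced rather than discovered mid-audit. The repository locations below are invented for illustration:

```python
# Artefact classes drawn from the list above.
ARTEFACT_CLASSES = [
    "model cards", "training data documentation", "impact assessments",
    "risk registers", "training records", "incident reports", "vendor documentation",
]

# Hypothetical mapping to controlled locations; the gaps are deliberate,
# to show what the check surfaces.
inventory = {
    "model cards": "git: aims-docs/model-cards",
    "impact assessments": "dms://aims/impact-assessments",
}

uncontrolled = [a for a in ARTEFACT_CLASSES if a not in inventory]
# Everything in 'uncontrolled' is currently living on a wiki, a cloud drive,
# or that engineer's laptop -- i.e. outside 7.5.3 control.
```

Run at onboarding of each new AI system, a check like this turns "where does that document live?" from an audit-day scramble into a solved problem.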

What Changed — What’s Actually New Here

At first glance, Clause 7 looks like a direct copy-paste from ISO 9001 or ISO 27001. The structure is indeed Annex SL standard-issue. The content, however, shifts in three important ways.

First, competence under Clause 7.2 covers a wider and less-defined population than most organisations are accustomed to. An AI management system implicates procurement, legal, operations, customer support, and senior leadership in ways that a traditional information security management system does not. Most organisations do not have role-based competence criteria for these functions in an AI context. They will have to write them from scratch.

Second, communication under Clause 7.4 has a genuine external dimension — affected persons, regulators, partners — that other management system standards treat as optional. In ISO 42001 it is structurally required, and the standard’s definitions of affected persons and interested parties make the external communications obligation meaningfully broader than its ISO 27001 analogue.

Third, documented information under Clause 7.5 now includes artefacts that barely existed as formal categories five years ago: model cards, training data specifications, AI impact assessments, human oversight records, and affected-person communications. Implementing a compliant document control system for these is not a matter of copying your ISO 9001 procedure and changing the header.

Art Meta’s Obligatory Editorial Note

Clause 7 is the clause that separates organisations with an AI programme from organisations with an AI management system. The difference is mostly plumbing — resources, records, role descriptions, retention schedules — and plumbing, as any homeowner will tell you, is what keeps the house standing. Programmes that skip this clause, or paste in language from a quality manual written in 2011, tend to produce audit findings of the regrettable, remediation-heavy variety. You have been warned, albeit with affection.

Up next: Clause 8 — Operation. This is where the standard moves from planning and preparation into what you actually do day-to-day with your AI systems. It is also where the AI impact assessment finally makes its formal appearance. Bring coffee. Do join us.
