Planning — Objectives, Risks, and the Fond Hope That You’ve Thought This Through

Welcome back. We have now arrived at Clause 6, which is where ISO 42001 stops describing your organisation and begins asking what you intend to do about it. This, in the parlance of management system standards, is called Planning. It sounds straightforward. It is not straightforward. Please, find a seat.

Clause 6 centres on two subclauses: 6.1, which concerns risks and opportunities, and 6.2, which concerns AI objectives. Together they form the standard’s first real ask of your organisation — not just that you understand your context (that was Clause 4), not just that leadership has signed off (that was Clause 5), but that someone has actually sat down and thought carefully about what might go wrong and what good looks like. Apparently, this requires a clause.

Clause 6.1 — Actions to Address Risks and Opportunities

When planning for the AI management system (AIMS), the organisation must consider the issues identified in Clause 4.1 — the internal and external context — and the requirements identified in Clause 4.2 — the interested parties. From this, the organisation must determine the risks and opportunities that need to be addressed to: ensure the AIMS achieves its intended outcomes, prevent or reduce undesired effects, and achieve continual improvement.

So far, so Annex SL. This exact structure appears in ISO 27001, ISO 9001, ISO 14001, and most other members of the management system family. If you’ve implemented any of those standards, you will feel an uncanny sense of déjà vu at this point. That is intentional. ISO’s High-Level Structure is designed to let organisations bolt multiple standards together without reinventing the risk wheel every time.

Here, however, is where ISO 42001 quietly diverges from its siblings.

AI Risks Are Not Like Other Risks

In ISO 27001, risks are fundamentally about information assets: confidentiality, integrity, availability. In ISO 9001, risks are about product conformity and customer satisfaction. These are tractable, enumerable categories. You can build a spreadsheet. You can assign likelihood and impact scores. Everyone goes home satisfied.
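For readers who have not had the pleasure, the classic exercise fits in a few lines. (A sketch only; the scales and the amber threshold are illustrative, not anything any standard prescribes.)

```python
# The traditional management-system risk assessment, in its entirety.
likelihood = 4               # estimated on a 1-5 scale
impact = 3                   # estimated on a 1-5 scale
score = likelihood * impact  # 12 out of 25: amber, review annually, file away
print(f"risk score: {score}/25")
```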

ISO 42001 takes a more ambitious view. The risks and opportunities you are expected to identify include not just operational and security risks — though those are certainly present — but AI-specific risks that most organisations have never formally documented: algorithmic bias, unfair treatment of affected persons, opacity and inexplicability of AI outputs, erosion of human oversight, and societal-scale harms from AI systems that interact with millions of people. These are not risks you score on a five-by-five matrix and file away.

The standard does not prescribe a specific risk methodology, which is either admirably flexible or maddeningly vague, depending on how you feel about methodological freedom. What it does require is that you plan actions to address identified risks and opportunities, integrate those actions into your AIMS processes, and establish how you will evaluate whether those actions are actually effective. The effectiveness evaluation piece is worth underlining — it’s easy to plan, considerably harder to close the loop.
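To make that closing-the-loop requirement concrete, here is a minimal sketch of what a risk-treatment record might look like if effectiveness evaluation were built in from the start. Everything here (the class, the field names, the example entry) is our own illustration; the standard mandates the discipline, not a schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskTreatment:
    """One illustrative entry in an AIMS risk register. ISO 42001 prescribes
    no format; it requires only that actions are planned, integrated into
    AIMS processes, and evaluated for effectiveness. Field names are ours."""
    risk: str                    # what might go wrong (or work as designed, harmfully)
    affected_parties: list[str]  # not just the organisation: users, data subjects, society
    planned_action: str          # the treatment, integrated into an existing AIMS process
    owner: str                   # who is accountable for carrying the action out
    effectiveness_metric: str    # how we will know the action actually worked
    review_date: date            # when the effectiveness check happens

register = [
    RiskTreatment(
        risk="Scoring model systematically under-rates applicants from one region",
        affected_parties=["applicants", "regulator"],
        planned_action="Add disaggregated error-rate monitoring to the release gate",
        owner="Head of Model Risk",
        effectiveness_metric="Per-region approval-rate gap within agreed bounds for two quarters",
        review_date=date(2026, 6, 30),
    ),
]

# The point of the structure: an entry without an effectiveness metric and a
# review date cannot be constructed. Planning the evaluation up front is the
# part of Clause 6.1 that is easiest to skip and hardest to retrofit.
```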

The Link to AI Impact Assessment

Clause 6.1 is designed to interact with the AI system impact assessment — a requirement housed within Clause 6.1 itself and reinforced by the controls of Annex A (we will get to the annex eventually; patience is itself a form of risk management). The intent is that your risk identification process feeds into a broader analysis of how your AI systems affect the people they touch — directly and indirectly. This is not a traditional IT risk question. It is closer, in spirit, to a data protection impact assessment under GDPR, but wider in scope and less prescriptive in methodology.

Organisations that have been running ISO 27001 programmes for years and assume Clause 6.1 is simply a familiar exercise will find, on closer reading, that the terrain has shifted underneath them. The question is no longer only “what happens if the system fails?” It is also “what happens when the system works exactly as designed, and the design was wrong?”

Clause 6.2 — AI Objectives and Planning to Achieve Them

The organisation must establish AI objectives at relevant functions, levels, and processes. These objectives must be consistent with the AI policy established under Clause 5.2 — so if your policy says your AI systems will be transparent and fair, your objectives had better reflect that — and they must be measurable where practicable, monitored, communicated, and updated as appropriate.

When planning how to achieve these objectives, the organisation must determine: what will be done, what resources are required, who is responsible, when it will be completed, and how results will be evaluated. This is, in essence, the standard’s way of asking you to write proper project plans and not just aspirational statements. The five-point framework (what, resources, who, when, evaluated how) is drawn directly from Annex SL and will look familiar to anyone who has set quality or environmental objectives before.
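Here, as a sketch, is that five-point framework written down as a record type. The field names and the example plan are our own invention, since the standard mandates the five questions, not a schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ObjectivePlan:
    """The five Annex SL planning questions, one field each. Illustrative only."""
    what: str        # what will be done
    resources: str   # what resources will be required
    owner: str       # who will be responsible
    due: date        # when it will be completed
    evaluation: str  # how the results will be evaluated

plan = ObjectivePlan(
    what="Human review of every high-risk automated decision before release",
    resources="Two trained reviewers, review tooling, an escalation path",
    owner="AI Governance Lead",
    due=date(2026, 12, 31),
    evaluation="Quarterly audit of review coverage against deployment logs",
)

# An aspirational statement answers none of the five questions.
# A plan that omits any one of them will not construct.
```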

AI Objectives Are Not Just Internal KPIs

Here is the subtlety that organisations frequently miss: AI objectives in the sense of Clause 6.2 are not limited to internal operational goals like “reduce model deployment time by 20%” or “achieve 99.5% uptime on inference infrastructure.” The objectives must be consistent with the AI policy, which the standard explicitly links to responsible and ethical AI practices, affected person considerations, and stakeholder trust.

This means your AI objectives may need to include things like: measurable fairness thresholds for decision-making systems, transparency commitments with defined timelines, human oversight coverage rates for high-risk AI applications, or reduction targets for identified categories of harmful outputs. Suddenly the innocuous phrase “measurable where practicable” is doing rather a lot of work.
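To show how short the distance is from “measurable where practicable” to actual measurement, here is a sketch of one possible fairness objective: a demographic parity check against a threshold. The metric, the threshold, and the toy data are all assumptions for illustration; nothing in the standard prescribes this particular measure.

```python
def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive decisions (1 = approved) in a group."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Demographic parity difference: the absolute gap in selection
    rates between two groups of affected persons."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Illustrative objective: keep the selection-rate gap under 5 percentage points.
THRESHOLD = 0.05

group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # toy decisions for group A
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # toy decisions for group B

gap = parity_gap(group_a, group_b)
print(f"gap = {gap:.3f}; objective {'met' if gap <= THRESHOLD else 'missed'}")
```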

The clause also requires that objectives be communicated — which implies that someone in the organisation, beyond the compliance function, should be able to articulate what the AI programme is actually trying to achieve. This is, regrettably, often aspirational.

What Changed — What’s Actually New Here

The structure of Clause 6 will feel familiar. The content will not — or should not, if you’re reading carefully.

The genuinely novel elements are threefold. First, the explicit acknowledgement that AI risks include harms to affected persons, not just organisational risks. Most management system standards are essentially inward-facing: risks to the organisation, objectives of the organisation, improvement of the organisation. ISO 42001 introduces, through Clause 6 and the broader framework, an outward-facing obligation that existing risk programmes are structurally unprepared for.

Second, the requirement that objectives be tied to an AI policy that explicitly addresses ethical and societal dimensions means that AI objectives cannot be purely technical or commercial. You cannot set “minimise latency” as your only AI objective and call it compliant. The policy-objectives link is tighter here than in comparable standards.

Third, the evaluation of actions’ effectiveness is not optional. The standard requires you to plan, upfront, how you will know whether your risk-addressing actions actually worked. This is the part organisations most reliably skip in other standards, and ISO 42001 makes it structurally unavoidable.

Art Meta’s Obligatory Editorial Note

There is something quietly radical buried in Clause 6, beneath the familiar scaffolding of Annex SL. Most risk management programmes ask: what could go wrong? ISO 42001 also asks: what could go right in ways that harm people? — a question that technology organisations are, historically, somewhat reluctant to sit with. The standard does not prescribe how to answer it. It merely insists that you have. This is either admirable restraint or a convenient escape hatch, and the difference will become apparent only at audit time.

Up next: Clause 7 — Support. Resources, competence, awareness, communication, and documented information. The clause that sounds administrative and is, in fact, where most implementations quietly collapse. Do join us.
