Board reporting
Reporting AI risk to the board: a one-page position summary that actually works
What the board actually wants on the AI risk page is the answer to four specific questions. Most AI risk reports answer different questions. Here is the structure that lands, four worked examples by sector, and a template you can lift verbatim.
When the board chair asks for a paper on the organisation’s AI risk position, the question they are asking is not the question most AI risk reports answer. The report that lands is short, structured, and answers four specific questions. The report that does not land — and most do not — is comprehensive, framework-heavy, and answers the question “what are all the things that could go wrong with AI?” rather than the question the board actually wants answered.
This piece sets out the structure of the report that lands, four worked examples by sector, the cadence question, and a template you can lift verbatim. It is the template our fractional officer engagements produce for client boards.
The four questions the board is actually asking
The board’s underlying question, however the request for the paper is phrased, is some combination of the following four:
1. Are we exposed? Where is AI being used in this organisation, by whom, for what, and is the use within our risk appetite? The board needs the inventory at a level it can comprehend — usually three or four categories with named sponsors — and a position on whether the inventory is complete.
2. Is there a regulator question? What rules apply to our AI use, are we aligned, and where are we gapping? The board does not want the framework taxonomy; it wants the position: aligned, aligned with caveats, gapping with a remediation plan, gapping without one.
3. What could go wrong, and what would happen? Three to five specific scenarios with the first-order impact and the controls in place. Not a risk register dump; the scenarios that, if they occurred, the board would be asked to explain.
4. What are we doing about it? The current state of the program, what the next quarter delivers, and what the board is being asked to decide.
The report that lands answers those four questions, in that order, on one page. The four-question structure is recognisable to a board member because it maps to how they already think about every other operational risk — exposure, regulatory position, scenarios, response.
The five elements that belong on the page
The page itself has five elements. We have iterated this template across half a dozen mid-market boards; the elements that survive every iteration are these.
1. The position summary (top of page, one paragraph). A single paragraph answering: where the organisation sits today, what changed since the last reporting period, and what the board is being asked to do. This is the only paragraph the chair will read with certainty. Spend the time on it.
2. The deployment inventory (one short table). Three or four rows. Each row is a category of AI use (not a tool — categories: customer-facing AI, internal productivity AI, decision-support AI, agentic AI). Each row has the named accountable executive, the in-scope frameworks, and a one-word status (in scope and aligned, in scope with gaps, out of scope and confirmed). The tool-level inventory belongs in an appendix; the board sees the categories.
3. The regulatory position (one short paragraph per applicable framework). Two to four short paragraphs, one per framework that applies — APRA CPS 234, CPS 230, the Privacy Act post-2024, ISO 42001 if implementing, EU AI Act if exposed. Each paragraph answers: where do we stand, what are the gaps, what is the remediation timeline.
4. The scenario position (three to five rows). The scenarios that, if they occurred this quarter, the board would be the audience for the response. Each row: a one-sentence scenario, the first-order impact (financial, regulatory, reputational), and the current control posture. This is the closest the report comes to a traditional risk register and it is the part that boards actually engage with.
5. The forward look (one short paragraph and one decision row). What the program delivers in the next quarter, and the specific decisions the board is being asked to make this period. Decisions are the board’s work; if the report does not surface them, it is description rather than governance.
What does not belong on the page: framework taxonomies, control catalogues, vendor inventories, technical architecture, training metrics, NPS scores from AI deployments. Those belong in the appendix or in the operational reporting layer underneath. The page is for the four questions and the five elements; everything else is detail that the board member who wants it can request.
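If the page is produced from a risk system rather than assembled by hand, the five elements map onto a small data structure. The sketch below is illustrative only: the class and field names are ours, not a prescribed schema, and the status vocabulary should match whatever your risk function already uses.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryRow:
    category: str          # a category of use, not a tool, e.g. "Customer-facing AI"
    owner: str             # the named accountable executive
    frameworks: list[str]  # in-scope frameworks for this category
    status: str            # "aligned" / "gaps" / "out of scope and confirmed"

@dataclass
class RegulatoryPosition:
    framework: str         # e.g. "CPS 234"
    status: str            # "aligned" / "aligned with caveats" / "gapping"
    detail: str            # one paragraph: position, gaps, remediation timeline

@dataclass
class ScenarioRow:
    scenario: str          # one sentence
    impact: str            # first-order impact: financial / regulatory / reputational
    controls: str          # current control posture

@dataclass
class PositionSummaryPage:
    position_summary: str                 # element 1: the paragraph the chair will read
    inventory: list[InventoryRow]         # element 2: three or four rows
    regulatory: list[RegulatoryPosition]  # element 3: one entry per framework
    scenarios: list[ScenarioRow]          # element 4: three to five rows
    forward_look: str                     # element 5: next quarter, one paragraph
    decisions: list[tuple[str, str]] = field(default_factory=list)  # (decision, recommendation)
```

A structure like this also makes the one-page discipline checkable: three or four inventory rows, three to five scenarios, one forward-look paragraph.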
Four worked examples by sector
The four sectors we work in most often produce reports that look different in their specifics but share the structure. The worked examples below are stylised; specifics in real reports vary by organisation.
Financial services (APRA-regulated, AUD 1–3 billion in funds under management)
Position summary. AI deployments are concentrated in customer service automation (Microsoft Copilot for SharePoint and Outlook), internal analytics (Claude via the data team), and a pilot agentic deployment in operations (a customer onboarding agent under supervision). The CPS 230 material service provider register has been updated to include the major foundation model vendors as Tier 2. The CPS 234 audit is current with no material findings on AI scope. The Privacy Act post-2024 disclosure language is being drafted for the policy refresh next quarter. The board is asked to approve the proposed risk appetite addendum on the agentic pilot.
Inventory. Customer-facing AI (CRO, in scope and aligned). Internal productivity AI (CTO, in scope and aligned). Decision-support AI (Head of Operations, in scope with gaps). Agentic AI (Head of Operations, in scope with gaps).
Regulatory position. CPS 234 — aligned; current independent audit issued recently, no material findings on AI scope. CPS 230 — aligned with caveats; the model versioning notification clause is being negotiated with two vendors. Privacy Act — gapping; the substantially-automated-decision disclosure work is in flight.
Scenarios.
1. Prompt injection on the customer onboarding agent results in unauthorised account modification. First-order impact: moderate financial, high regulatory. Controls in place: scoped permissions, supervisor review, audit log monitoring.
2. A vendor model update materially changes the agent’s behaviour without notice. First-order impact: moderate financial, low regulatory immediately, high regulatory if the pattern repeats. Controls in place: version pinning where supported, evaluation suite, model-update review process.
3. Personal information in prompts is retained beyond the intended period in the vendor’s logs. First-order impact: moderate regulatory, low financial. Controls in place: zero-retention contracts where available, prompt sanitisation, vendor audit rights.
4. An APRA review identifies the agentic deployment as outside current ICAAP coverage. First-order impact: low financial immediately, moderate regulatory. Control in place: the proposed risk appetite addendum at this meeting.
Forward look. Next quarter delivers the privacy policy refresh, the second tranche of vendor renegotiations, and the move from pilot to limited production for the onboarding agent under expanded supervision. Board decision required: approval of the risk appetite addendum (separate paper attached).
Professional services (ASX 200 supplier, ~200 staff)
Position summary. AI is broadly deployed in document drafting (Copilot, Claude), legal research (vendor-specific tools), and a recent automation pilot in internal IT support. Privacy Act post-2024 obligations are now the lead regulatory consideration; CPS 234 does not apply, but the firm’s clients increasingly require evidence of AI controls equivalent to their own (the third-party-risk inheritance pattern). ISO 42001 readiness work commenced earlier this quarter; the readiness assessment is due in Q4. The board is asked to note the program status; no decisions are sought this period.
The remaining sections — inventory, regulatory position, scenarios, forward look — follow the same five-element structure with the specific scenarios the firm has identified. The recurring scenarios at this scale tend to be: confidential client information processed by AI without an enforceable contractual basis; output errors in client-facing work product; staff-member misuse of consumer-grade AI tools. Each maps to a control set and a remediation plan in the forward look.
Healthcare (private hospital network, ~1,500 staff)
Position summary. AI deployments fall into clinical-adjacent (no clinical decision-making AI; transcription and summarisation only), administrative (a Copilot deployment across corporate functions), and research (university collaborations on de-identified data). My Health Record interactions are out of scope. The most material exposure is the application of the Privacy Act post-2024 obligations to clinical-adjacent transcription, where personal health information is processed. The board is asked to approve the proposed personal health information handling addendum to the AI policy.
The recurring scenarios at this scale tend to be: clinical transcription output reaching the wrong patient record; a vendor’s training use of de-identified data enabling personal-information inference; a staff member using AI to assist clinical decision-making outside the policy. The healthcare context adds the Notifiable Data Breaches scheme as a hard floor on response times for any scenario that crystallises.
Government supply chain (technology vendor to federal government, ~80 staff)
Position summary. AI use is internal only (Copilot, Claude for engineering and document work). The material risk is contractual — the federal contracting baselines now require evidence of AI governance equivalent to the IRAP/PSPF posture. The current ISM-aligned controls cover most, but not all, of the AI-specific obligations the agency is now embedding in supplier contracts. ISO 42001 alignment work is underway to satisfy the contractual requirement at the next contract refresh. The board is asked to approve the additional spend on the ISO 42001 work in advance of the contract negotiation.
The recurring scenarios at this scale tend to be: agency data processed by AI without prior agency approval; contractor staff using personal-account AI tools on agency work; an audit finding from the agency surfacing a control gap that would put the contract at risk.
The pattern across all four: the report is short, sector-specific in its scenarios but identical in its structure, and ends with a decision the board is being asked to make. Reports without a decision row drift; reports with one focus the conversation.
Cadence and triggers
The default cadence for the AI risk position summary is quarterly, presented to the audit and risk committee, with a half-yearly version that goes to the full board.
Out-of-cadence triggers — the report is updated and re-presented within ten business days when:
- A material AI incident occurs (the definition of material should be in the AI policy)
- A regulator issues guidance or enforcement action that materially shifts the position
- A material AI deployment is approved or comes into production
- A material change is made to a current AI deployment (model version change, scope expansion, vendor switch)
- The fair-and-reasonable test or other tranche 2 Privacy Act provisions commence
The trigger list is the discipline. Without it, the report becomes calendar-driven and the board is briefed on stale information; with it, the board has the position when they need it.
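For teams that log governance events, the trigger check is small enough to automate. The sketch below is illustrative, not a prescribed mechanism: the event names are ours, “material” takes its definition from the AI policy, and a real implementation would consult a public-holiday calendar as well as weekends.

```python
from datetime import date, timedelta

# Illustrative event names for the five triggers above; a real register
# would carry the AI policy's definition of "material" with each event.
OUT_OF_CADENCE_TRIGGERS = {
    "material_ai_incident",
    "regulator_guidance_or_enforcement",
    "material_deployment_approved_or_live",
    "material_change_to_deployment",
    "privacy_act_tranche_2_commencement",
}

def add_business_days(start: date, days: int) -> date:
    """Walk forward `days` business days, skipping weekends (not public holidays)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def re_presentation_deadline(event_type: str, occurred: date) -> date | None:
    """Date the updated report is due to the committee, or None if no trigger fired."""
    if event_type in OUT_OF_CADENCE_TRIGGERS:
        return add_business_days(occurred, 10)
    return None
```

Ten business days from a Monday event lands exactly two calendar weeks later; computing the deadline from the event, rather than waiting for the next committee date, is the point of the trigger list.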
The template you can lift verbatim
The structure below is the template we deploy. It is intentionally stripped of organisation-specific content; you can lift it as-is, adjust the framework references to match your context, and have a defensible report shape on the first iteration.
ORGANISATION X
AI RISK POSITION SUMMARY — [PERIOD]
POSITION SUMMARY
[One paragraph: where we stand, what changed since the last report,
and what the board is being asked to do.]
DEPLOYMENT INVENTORY
| Category | Owner | Frameworks | Status |
| ------------------------- | ------- | ------------ | ----------------------- |
| Customer-facing AI | [name] | [frameworks] | [aligned / gaps / out] |
| Internal productivity AI | [name] | [frameworks] | [aligned / gaps / out] |
| Decision-support AI | [name] | [frameworks] | [aligned / gaps / out] |
| Agentic AI | [name] | [frameworks] | [aligned / gaps / out] |
REGULATORY POSITION
[Framework 1] — [aligned / aligned with caveats / gapping]
[One paragraph on the position, gaps, and remediation timeline.]
[Framework 2] — [...]
SCENARIO POSITION
| Scenario | First-order impact | Controls in place |
| ------------- | --------------------------- | ----------------- |
| [Scenario 1] | [financial/reg/rep level] | [controls] |
| [Scenario 2] | [...] | [...] |
| [Scenario 3] | [...] | [...] |
| [Scenario 4] | [...] | [...] |
| [Scenario 5] | [...] | [...] |
FORWARD LOOK
[One paragraph: what the program delivers in the next quarter.]
BOARD DECISIONS REQUESTED
| Decision | Recommendation |
| ------------- | -------------------- |
| [Decision 1] | [recommended action] |
PRESENTED BY: [name]
PERIOD: [date range]
NEXT REVIEW: [scheduled date or trigger]
The template fits on one page when populated. If it spills onto a second page, the first thing to cut is the deployment inventory (move tool-level detail to the appendix), then the regulatory position (compress to status only), then the scenarios (cap at three for board, five for risk committee).
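If the page is generated from a structured source, a small renderer keeps the shape identical across periods. A minimal sketch, assuming the hypothetical PositionSummaryPage class from the earlier sketch is in scope; the function name and parameters are ours, not a prescribed API.

```python
# Renders a populated PositionSummaryPage to the template above.
def render(page: PositionSummaryPage, org: str, period: str,
           presented_by: str, next_review: str) -> str:
    out = [
        org.upper(),
        f"AI RISK POSITION SUMMARY — {period}",
        "",
        "POSITION SUMMARY",
        page.position_summary,
        "",
        "DEPLOYMENT INVENTORY",
        "| Category | Owner | Frameworks | Status |",
        "| --- | --- | --- | --- |",
    ]
    for row in page.inventory:
        out.append(f"| {row.category} | {row.owner} | "
                   f"{', '.join(row.frameworks)} | {row.status} |")
    out += ["", "REGULATORY POSITION"]
    for pos in page.regulatory:
        out += [f"{pos.framework} — {pos.status}", pos.detail]
    out += [
        "",
        "SCENARIO POSITION",
        "| Scenario | First-order impact | Controls in place |",
        "| --- | --- | --- |",
    ]
    for s in page.scenarios:
        out.append(f"| {s.scenario} | {s.impact} | {s.controls} |")
    out += ["", "FORWARD LOOK", page.forward_look,
            "", "BOARD DECISIONS REQUESTED",
            "| Decision | Recommendation |", "| --- | --- |"]
    for decision, recommendation in page.decisions:
        out.append(f"| {decision} | {recommendation} |")
    out += ["", f"PRESENTED BY: {presented_by}",
            f"PERIOD: {period}", f"NEXT REVIEW: {next_review}"]
    return "\n".join(out)
```

Generating the page this way also makes the cut order mechanical: trim inventory detail, blank the regulatory detail paragraphs, cap the scenarios list at three or five.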
The longer-form supporting material — the AI policy, the risk register, the framework gap analyses, the vendor list — sits in a separate pack that is available to board members who want it but not pre-read.
What the report is not
The report is not the AI strategy. The strategy is a separate document, presented annually, that the report tracks against. The report is also not the operational risk register; the register is the source-of-truth that the report distils from. And the report is not a substitute for the legal advice that some AI scenarios require — the report flags the regulatory position; the legal advice underneath sits with the general counsel.
The board reports we see fail usually fail in one of three ways: too long (twenty pages of frameworks), too vague (no scenarios, no decisions, no position), or too late (presented quarterly with no out-of-cadence triggers, so the board hears about an incident from the press). The four-question structure, the one-page constraint, and the trigger list solve all three.
When you might want help
The report is the deliverable; the work that produces a defensible report is the operating program underneath. For organisations early in the AI governance build-out, the posture assessment produces the inventory, the framework gap analysis, and the first iteration of the report. For organisations with the program built but the reporting layer not yet operating, the fractional officer engagement is the named individual producing the report and presenting it to the board on the defined cadence. The Digital Employees Runbook covers the longer-form board reporting templates for organisations doing the build-out themselves, including the agentic-specific content that this template treats as one row.
The free readiness checklist includes the board reporting line as one of the eighteen checks. If the answer is “we don’t have a position summary yet” — and on first ask, for most organisations, it is — the template above is the one-evening starting point.
The report that lands is short, structured, and ends with a decision. The work that makes it possible is the program that runs underneath it. The two are complementary; without the program the report is a fiction, and without the report the program is invisible to the board.