APRA CPS 230
Mapping APRA CPS 230 to your AI tooling: a practical checklist
Translating CPS 230 material service obligations to Microsoft 365 Copilot, ChatGPT Enterprise, and Claude deployments — what changes when an AI vendor becomes a material service provider.
APRA’s Prudential Standard CPS 230 Operational Risk Management came into force on 1 July 2025. Eighteen months in, most regulated entities have a register of material service providers, a tolerance statement for their critical operations, and a scenario testing plan. What most do not have — yet — is a coherent answer for AI tooling.
This is the practical checklist we use when an APRA-regulated client asks: "do my AI tools fall under CPS 230, and if so, what do I need to do?"
When does an AI tool become a material service provider?
The standard’s threshold is whether disruption to the service would have a material impact on the entity’s ability to perform a critical operation, meet its obligations to APRA, or maintain financial soundness (CPS 230 paragraph 49). For AI tooling specifically, the questions worth running through:
- Does the tool process customer data, advice, or transactions? A Copilot deployment summarising customer correspondence into CRM notes meets the threshold the moment those notes feed underwriting or claims decisions.
- Is the tool embedded in a critical operation’s workflow? ChatGPT Enterprise used by the contact centre to draft responses is materially different from ChatGPT Enterprise used by marketing.
- What is the failover position? If staff cannot perform the work without the tool, you have a dependency. If they can fall back to manual processes within tolerance, you have a productivity tool.
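The triage above can be sketched as a simple decision rule. This is a minimal sketch, not anything from the standard itself — the class, field, and function names are all my own:

```python
from dataclasses import dataclass

@dataclass
class AiToolProfile:
    """Illustrative profile of one AI tool deployment (all names hypothetical)."""
    name: str
    processes_customer_data: bool        # feeds advice, underwriting, claims, transactions
    embedded_in_critical_operation: bool
    manual_fallback_within_tolerance: bool

def is_material_service_provider(tool: AiToolProfile) -> bool:
    """Rough CPS 230 triage mirroring the three questions in the text."""
    # Customer data flowing into decisions meets the threshold outright.
    if tool.processes_customer_data:
        return True
    # Embedded in a critical operation with no in-tolerance manual fallback:
    # that is a dependency, not a productivity tool.
    if tool.embedded_in_critical_operation and not tool.manual_fallback_within_tolerance:
        return True
    return False

copilot_claims = AiToolProfile("Copilot in claims", True, True, False)
chatgpt_marketing = AiToolProfile("ChatGPT in marketing", False, False, True)
```

The point of writing it down this bluntly is that the register review becomes a loop over deployments rather than a debate per tool.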
Most enterprise AI deployments end up in the material service provider category once they are genuinely embedded in day-to-day work. Few of them are registered there.
The seven things CPS 230 requires for a material AI service provider
- Due diligence before engagement. A documented assessment of the provider’s financial standing, operational capability, and ability to meet your CPS 230 obligations. For AI providers, this extends to model provenance, training data position, and sub-processor disclosure.
- A written agreement. Not just the click-through enterprise terms. CPS 230 paragraph 53 lists the minimum content — including audit rights, sub-contracting controls, and termination provisions.
- Ongoing monitoring. Regular review of performance, control effectiveness, and any material changes to the service. SOC 2 reports alone are not sufficient if your control objectives are AI-specific.
- Contingency arrangements. A documented position on what happens if the AI service is unavailable for the duration of your tolerance. For a one-hour tolerance, this is straightforward. For a four-hour tolerance, the answer is rarely "we will use the tool's redundancy" — that is the provider's contingency, not yours.
- Notification of material changes. Including changes to the underlying model, the data residency, or the sub-processor list. Most enterprise AI agreements have weak notification clauses; this is often where contract negotiation is required.
- Inclusion in your scenario testing. At least annually. The test should be plausible and severe — "the model provider has a regional outage during peak" is more useful than "the API key was rotated incorrectly".
- Board-level oversight of the arrangement. Reflected in the Operational Risk Management Framework, with material concentrations reported to the board.
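One way to make the seven obligations auditable is to record them per provider and compute the gaps mechanically. A minimal sketch, with field names of my own invention rather than APRA's wording:

```python
from dataclasses import dataclass, fields

@dataclass
class MaterialAiProvider:
    # One flag per obligation above; True means evidence exists on file.
    name: str
    due_diligence_documented: bool
    agreement_meets_para_53: bool
    ongoing_monitoring_in_place: bool
    contingency_within_tolerance: bool
    change_notification_clause: bool
    in_scenario_testing: bool
    board_oversight_reported: bool

def gaps(provider: MaterialAiProvider) -> list[str]:
    """Names of obligations not yet evidenced for this provider."""
    return [f.name for f in fields(provider)
            if f.name != "name" and not getattr(provider, f.name)]
```

A register in this shape turns the quarterly review into a diff rather than a re-read of seven contracts.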
What this looks like for the three common deployments
Microsoft 365 Copilot
Copilot is, for CPS 230 purposes, a feature of an existing material service (Microsoft 365). The base agreement and DPA cover most of the requirements, but the AI-specific elements — the Azure OpenAI Service backend, the data boundary commitments, the prompt and response logging position — need to be reflected in your control objectives. Microsoft’s Customer Copyright Commitment is a contractual position, not a technical control; treat it accordingly.
ChatGPT Enterprise
OpenAI’s enterprise agreement is improving but still requires negotiation for CPS 230 compliance — specifically around audit rights, sub-processor notification, and the data residency position outside the United States and Europe. Most Australian deployments end up using the API via Azure OpenAI Service for CPS 230 reasons, even when the user-facing tool is ChatGPT.
Claude Enterprise
Anthropic’s enterprise agreement and the AWS Bedrock deployment path generally line up well with CPS 230 obligations, particularly for data residency in ap-southeast-2. The gap that remains is around model versioning — when Claude is upgraded, what is the notice period, and what does your testing requirement look like?
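On the versioning point, one mitigation is to pin the approved model identifier in code and fail closed when the provider's default moves, so an unannounced upgrade cannot flow into production ahead of your testing gate. A sketch — the model ID shown is a Bedrock-style example, and the register name is hypothetical:

```python
# Model versions that have been through the testing gate (IDs illustrative).
APPROVED_MODEL_IDS = {
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

def resolve_model(requested_id: str) -> str:
    """Return the model ID only if it is on the approved register;
    otherwise raise, forcing the upgrade through change management."""
    if requested_id not in APPROVED_MODEL_IDS:
        raise ValueError(f"Model {requested_id!r} is not on the approved register")
    return requested_id
```

The same pattern works for any provider that exposes dated model versions rather than a floating alias.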
A short list of things to do this quarter
- Refresh the material service provider register with a deliberate review of AI tooling. Most registers were built before the AI tools were procured.
- Review the contractual position for each AI tool against the CPS 230 paragraph 53 minimums. Document the gaps and the mitigation position.
- Add at least one AI-specific scenario to the next operational risk scenario test. A suggestion: "the primary model provider experiences a four-hour regional outage."
- Bring the AI tooling concentration position to the next board operational risk report. If you are running three workflows on a single foundation model, that is a concentration the board should know about.
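The concentration position in the last item falls straight out of the register once each workflow records the foundation model it depends on. A sketch, with illustrative workflow and model names:

```python
from collections import Counter

# Workflow -> foundation model it depends on (illustrative register extract).
WORKFLOW_MODELS = {
    "claims-summarisation": "gpt-4o via Azure OpenAI",
    "contact-centre-drafting": "gpt-4o via Azure OpenAI",
    "underwriting-notes": "gpt-4o via Azure OpenAI",
    "marketing-copy": "claude via Bedrock",
}

def concentrations(workflow_models: dict[str, str], threshold: int = 3) -> dict[str, int]:
    """Foundation models carrying `threshold` or more workflows —
    the single-provider exposures the board report should surface."""
    counts = Counter(workflow_models.values())
    return {model: n for model, n in counts.items() if n >= threshold}
```

In the example register, three workflows share one model, which is exactly the situation the board report should name.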
CPS 230 is not exotic when applied to AI tooling. It is the same operational risk discipline applied to a vendor category that grew up faster than most procurement processes did. The work is to catch the inventory up to where the standard already says it should be.
If the inventory or the scenario testing is where you are stuck, the posture assessment is a sensible next step — it produces both the inventory and the gap analysis as a fixed-fee deliverable.
Related pieces
Platform engineering · Risk
Policy-as-code as the control plane your auditors will actually read
Half the controls in a typical APRA submission are statements about what should happen. The other half could be expressed as policy code that runs in the pipeline and produces the evidence automatically. The gap between the two is most of the audit conversation.
28 February 2026
Incident response
Ransomware response, CPS 230, and the 24-hour decision
The technical incident response is the easier half. The harder half is the decision your executive will be asked to make at the 6-hour mark and again at the 24-hour mark, and whether your organisation has actually decided how to make it.
29 October 2025
Third-party risk
Third-party risk after the supply-chain attack era
Most third-party risk programs in mid-market financial services are questionnaire factories. They produce paperwork; they do not produce risk reduction. After several years of supply-chain incidents, the realistic position has changed — here's what actually works.
26 August 2025