Regulatory horizon
The EU AI Act has reach. Australian firms should map exposure now.
The Act's territorial scope is broader than most Australian general counsel offices have appreciated. Two questions decide whether your AI deployment is in scope, and if it is, the documentation burden is non-trivial.
The EU AI Act applies to providers and deployers of AI systems established in the European Union, which is the framing most Australian firms initially read and dismiss. The provision most general counsel skip is Article 2(1)(c), which extends the Act to providers and deployers established outside the Union where the output produced by the AI system is used in the Union. That is the operative provision for Australian firms, and the threshold for “used in the Union” is lower than many assume: it does not require an EU establishment or a commercial relationship with EU customers.
The two questions that decide exposure for an Australian organisation are: does any output of your AI system end up acting on, communicating with, or being read by a person in the European Union; and is the AI system itself classified as high-risk under Annex III. If the answer to the first is yes, you are within territorial scope. If the answer to the second is yes, the documentation and conformity obligations apply in full. Even if the system is not high-risk, the transparency obligations of Article 50 still apply (this is the “you must disclose AI use” set), and the general-purpose AI obligations of Chapter V fall on providers of foundation models, with additional duties above the systemic-risk capability threshold.
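The two-question test can be sketched as a first-pass triage routine. Everything below is an illustrative assumption, not a legal tool: the field names, the function, and the abbreviated stand-in for the Annex III categories are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Abbreviated, illustrative stand-in for the Annex III high-risk areas;
# the real annex is longer and more precisely drawn.
ANNEX_III_AREAS = {
    "biometrics", "credit_decisioning", "employment",
    "essential_services", "education", "law_enforcement",
}

@dataclass
class AISystem:
    name: str
    output_used_in_eu: bool          # Q1: does any output reach a person in the EU?
    use_area: Optional[str] = None   # Q2: nearest Annex III area, if any

def scope_triage(system: AISystem) -> dict:
    """Rough first pass mirroring the two questions in the text."""
    in_scope = system.output_used_in_eu
    high_risk = in_scope and system.use_area in ANNEX_III_AREAS
    return {
        "territorial_scope": in_scope,
        "transparency_obligations": in_scope,   # attach even if not high-risk
        "full_conformity_assessment": high_risk,
    }
```

A fraud-screening chatbot serving EU visitors, for instance, would come back with all three flags set; a purely domestic batch model with no EU-facing output would come back clean.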
The exposure scenarios most relevant to Australian financial services and regulated mid-caps are narrower than feared but more concrete than usually appreciated.
EU customer-facing chatbots. If an Australian bank, insurer, or telco offers services to EU residents — even peripherally, even as part of an inbound visitor flow — and uses an AI system to respond to those customers, the system is in scope. Transparency obligations apply at minimum. If the chatbot is used for fraud screening, credit decisioning, or any of the Annex III categories, full conformity assessment applies.
Cross-border employee tooling. Australian firms with EU offices, even small ones, are deploying AI tools that process data and produce output consumed by EU staff. M365 Copilot, ChatGPT Enterprise, internal agents — all in scope as deployed. If used for hiring, performance management, or any HR Annex III category, the high-risk obligations apply.
API exposure to EU integrations. Australian SaaS firms whose EU customers use the AI features of the platform are providers under the Act. The customer-facing transparency disclosure, the technical documentation under Article 11, and the post-market monitoring under Article 72 all apply. EU customers will increasingly require evidence of compliance as part of procurement, regardless of how Australian regulators read the Act.
The practical compliance posture for an Australian firm with any plausible EU exposure has three layers. First, an inventory of AI systems mapped against the Act’s risk classifications, with a default conservative position (assume in scope unless proven otherwise). Second, transparency disclosures wired into the relevant user touchpoints, which is mostly straightforward and can be reused across jurisdictions. Third, a documentation pack for any high-risk system covering the technical documentation requirements of Article 11, the risk management system of Article 9, and the data governance requirements of Article 10. This pack overlaps materially with what the NIST AI RMF Govern function requires and what ISO 42001 management-system documentation produces, so for organisations that have done either, the marginal effort is moderate. For organisations that have done neither, the right starting point is ISO 42001 — it gives you the management system shell that the Act’s compliance evidence sits within.
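The conservative default in the first layer can be sketched as a classification pass over the inventory. The field names and category labels below are assumptions for illustration, not a prescribed schema:

```python
# Conservative-default inventory classifier: a system is presumed in scope
# unless someone has positively evidenced otherwise. All field names and
# example entries are hypothetical.
def classify_entry(entry: dict) -> str:
    if entry.get("evidenced_out_of_scope"):      # requires proof, not a guess
        return "out_of_scope"
    if entry.get("annex_iii_category"):
        return "high_risk"                       # full documentation pack
    return "in_scope_minimal"                    # transparency duties at least

inventory = [
    {"name": "eu_support_chatbot", "annex_iii_category": None},
    {"name": "hr_screening_agent", "annex_iii_category": "employment"},
    {"name": "au_only_batch_model", "evidenced_out_of_scope": True},
]
triaged = {e["name"]: classify_entry(e) for e in inventory}
```

The point of the shape is the default: the absence of evidence lands a system in scope, which is the posture the paragraph above recommends.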
The broader point is that Australian firms face a regulatory horizon regardless of EU exposure. The Department of Industry’s Voluntary AI Safety Standard is the reference for AU-specific expectations, and APRA’s CPS 230 brings AI-adjacent obligations under the operational risk umbrella. The frameworks substantially align — risk classification, governance, monitoring, incident reporting — so building the EU AI Act compliance stack is not strictly EU work. It is the same stack rebadged. The cost of doing it once and applying it to all jurisdictions is materially lower than the cost of treating each as a separate program.
For boards, the posture question is whether the firm intends to retreat from EU exposure (which is increasingly costly given customer expectations), to comply (which is achievable but has real documentation cost), or to accept the regulatory risk (which is hard to defend as a deliberate position once the Act is fully in force in 2027). Most firms we work with are in option two and have not yet realised it. The work is doable; the constraint is calendar time, and the calendar started running.