

The Privacy Act reforms changed the AI compliance baseline. Most organisations have not updated.

The Privacy and Other Legislation Amendment Act 2024 introduced a statutory tort, expanded OAIC enforcement powers, and put automated decision-making into the legislation for the first time. The AI deployments most Australian organisations are running now sit under privacy obligations they were not designed for.

Mathew Sayed · 11 min read

The Privacy and Other Legislation Amendment Act 2024 was passed in late 2024 — the first substantive Privacy Act reform in over a decade. The amendments are commonly described as “tranche 1” of the reform package, with tranche 2 still in active consultation at the time of writing. The headline changes are real and material; the second-order consequences for AI deployments are what most Australian organisations have not yet absorbed.

This post is the practical version. What the 2024 amendments actually changed, where AI deployments now sit under the Australian Privacy Principles, what the OAIC has signalled through its 2024–2025 guidance, what tranche 2 likely brings, and the compliance path for an organisation with AI tooling already deployed under the prior baseline.

What the 2024 amendments changed (the parts that touch AI)

The amendment package is broad and we will not cover all of it. The provisions that matter most for AI deployments are these.

Statutory tort for serious invasions of privacy. The amending Act creates a new cause of action for individuals where their privacy has been seriously invaded, intentionally or recklessly, in circumstances where they had a reasonable expectation of privacy. This is a private right of action: individuals can bring proceedings themselves rather than waiting for the OAIC to act. Damages are capped but not trivial. The relevance for AI: deployments that process personal information at scale, particularly where the AI inference produces a privacy-affecting decision the individual did not anticipate, become potential subject matter. The litigation risk is no longer purely regulatory.

Strengthened OAIC enforcement powers. The amendments expanded the OAIC’s powers to issue infringement notices and undertake compliance investigations. The threshold for the OAIC to require an organisation to undertake a privacy impact assessment is lower; the powers to compel evidence are broader; and the civil penalty regime is now tiered, with new mid-tier penalties and infringement notices sitting beneath the existing maximum for serious interferences with privacy. AI deployments that were tolerable under the prior enforcement posture are not necessarily tolerable under the current one.

Automated decision-making provisions. The Act introduces transparency obligations for organisations that use personal information in substantially automated decisions affecting an individual. Privacy policies must disclose the kinds of personal information used, the kinds of decisions made, and the role of automation. The provisions commence on a delayed schedule — most organisations have until late 2026 to comply — but the obligation is concrete and the disclosure will need to be specific. “We use automated systems to enhance our service” is not the level of disclosure the OAIC has signalled it expects.

Children’s Online Privacy Code. The framework for a binding code on the handling of children’s personal information was created. The code itself is being developed. AI deployments that touch services accessed by under-18s — which is more services than the average organisation initially recognises — will fall under the code when it is in force.

Information sharing for emergencies and serious cyber incidents. The amendments expanded the lawful bases for information sharing in incident response. This is largely positive for organisations responding to cyber incidents but has implications for AI-assisted incident response (the AI tool inherits the lawful basis of the operator).

What the amendments did not do is also important. They did not introduce a fair-and-reasonable test (deferred to tranche 2). They did not remove the small business exemption (deferred to tranche 2). They did not introduce a comprehensive AI-specific provision in the way the EU AI Act does — the AI controls in the Privacy Act remain general principles applied to a specific technology.

How AI deployments now sit under the Australian Privacy Principles

The Australian Privacy Principles (APPs) were not rewritten for AI; the AI use cases are read against the existing APPs with the new disclosure and enforcement context attached. The four APPs that matter most for AI deployments are 1, 6, 11, and 12.

APP 1 — Open and transparent management. APP 1 requires you to publish a privacy policy that is up to date, accessible, and accurate about how you handle personal information. Post-2024, the policy must disclose substantially automated decisions affecting individuals (the new transparency obligation). For AI deployments this is a structural change — the privacy policy now has to map the AI-touching workflows, identify which involve substantially automated decisions, and disclose them with enough specificity that the individual can understand the role of automation.

The error most organisations make is updating the policy at the level of “we use AI” without specifying which decisions, what personal information feeds them, or how the individual can respond. The OAIC’s 2024 guidance signalled that this level of disclosure is unlikely to satisfy the obligation when it commences.
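
For illustration, here is a minimal sketch of the kind of register entry that could sit behind a sufficiently specific disclosure. The schema and field names are our assumptions, not anything the Act or the OAIC prescribes:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionDisclosure:
    """One entry in a hypothetical disclosure register backing the privacy policy."""
    decision_type: str                # the kind of decision made about individuals
    personal_information: list[str]   # kinds of personal information feeding it
    automation_role: str              # fully automated, or where the human sits
    review_channel: str               # how an individual can contest or respond

# Illustrative entry, written at the level of specificity the OAIC has signalled.
example = AutomatedDecisionDisclosure(
    decision_type="eligibility for expedited claims processing",
    personal_information=["claims history", "policy details", "declared income"],
    automation_role="substantially automated; humans review declined cases only",
    review_channel="written review request to the privacy officer",
)
```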

APP 6 — Use or disclosure. APP 6 limits the use of personal information collected for one purpose to that purpose, related purposes the individual would reasonably expect, or where another exception applies. AI deployments commonly use personal information across purpose boundaries — customer service data is summarised for sales analytics, support conversations train improvement models, contact records feed enrichment workflows. Each of these is potentially a use beyond the original purpose, and the “reasonably expect” exception is narrower for AI uses than most operators initially assume. The OAIC has been explicit that secondary use of personal information for AI training is rarely covered by the reasonable expectation exception.
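
A sketch of how that boundary can be enforced in code, assuming records are tagged with their collection purpose at ingestion. The permitted-use map is a policy choice invented for illustration; APP 6 prescribes the test, not the mechanism:

```python
# Purpose gate: block secondary AI uses unless the collection purpose permits
# them or specific consent is on record. Record shape is an assumption.

PERMITTED_SECONDARY_USES = {
    "customer_support": {"support_quality_review"},  # reasonably expected
    "sales_enquiry": {"sales_analytics"},
}

def may_use(record: dict, proposed_use: str) -> bool:
    if proposed_use == record["collection_purpose"]:
        return True  # primary purpose
    if proposed_use in PERMITTED_SECONDARY_USES.get(record["collection_purpose"], set()):
        return True  # related purpose the individual would reasonably expect
    return proposed_use in record.get("consented_uses", [])  # explicit consent

record = {"collection_purpose": "customer_support", "consented_uses": []}
assert may_use(record, "customer_support")
assert not may_use(record, "model_training")  # secondary use, no consent: blocked
```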

APP 11 — Security and retention. APP 11 requires you to take reasonable steps to protect personal information from misuse, interference, loss, unauthorised access, modification, or disclosure. AI deployments expand the security surface in non-obvious ways — the personal information your CRM holds is now also held in your retrieval-augmented generation index, your fine-tuning dataset, and the audit log of every inference that referenced it. APP 11 applies to all of those copies. The retention obligation under APP 11.2 (destroy or de-identify personal information no longer needed) becomes harder to satisfy when the same record exists in a vector store, a model fine-tune, and a long-running conversation history.
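
One way to keep APP 11.2 satisfiable is to register every derived copy at creation time, so destruction can sweep all of them. A minimal sketch; the store names and record IDs are hypothetical:

```python
# Copy registry: every derived copy of a record (vector index entry,
# fine-tune dataset row, inference log) is registered when created, so
# destruction or de-identification can reach all of them.
from collections import defaultdict

copy_registry: dict[str, list[tuple[str, str]]] = defaultdict(list)

def register_copy(record_id: str, store: str, location: str) -> None:
    copy_registry[record_id].append((store, location))

def destroy(record_id: str) -> list[tuple[str, str]]:
    """Return every copy that must be deleted or de-identified for this record."""
    return copy_registry.pop(record_id, [])

register_copy("cust-4411-email", "crm", "contacts/4411")
register_copy("cust-4411-email", "vector_store", "rag-index/chunk-88f2")
register_copy("cust-4411-email", "inference_log", "2025-06-02/req-9d31")

for store, location in destroy("cust-4411-email"):
    print(f"delete from {store}: {location}")  # each store needs its own deletion path
```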

APP 12 — Access. The individual’s right to access their personal information extends to the personal information held in AI-adjacent systems. If your customer’s email lives in your support system and a derived embedding lives in your vector store, both are within scope of an access request. Most organisations have not yet thought through how to fulfil an APP 12 request against their AI infrastructure. The vector store is not searchable by customer ID in the way the support system is.
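
The fix is architectural: tag each chunk with the data subject's identifier at ingestion, so an access request becomes a metadata lookup rather than a semantic search. A sketch against an in-memory stand-in; production vector stores generally offer metadata filters that serve the same role:

```python
# In-memory stand-in for a vector store; the metadata field is the point.
vector_store: list[dict] = []

def ingest(text: str, embedding: list[float], customer_id: str) -> None:
    vector_store.append({
        "text": text,
        "embedding": embedding,
        "metadata": {"customer_id": customer_id},
    })

def access_request(customer_id: str) -> list[str]:
    """Everything held about this individual in the index, found by ID, not similarity."""
    return [c["text"] for c in vector_store
            if c["metadata"]["customer_id"] == customer_id]

ingest("Ticket #812: billing dispute", [0.1, 0.7], customer_id="cust-4411")
print(access_request("cust-4411"))
```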

OAIC AI guidance: what they have signalled

The OAIC has issued guidance through 2024–2025 that materially shapes the practical compliance bar. The two pieces that matter most are the guidance on commercially available AI products and the generative AI guidance, both of which the OAIC has indicated will be revised as the post-2024 framework beds in.

The OAIC’s guidance on commercially available AI products is explicit that organisations using commercial AI products (ChatGPT, Claude, Copilot, Gemini, and the rest) remain responsible for the personal information they place into those products. The vendor’s privacy commitments do not transfer the obligation. The “data does not leave the tenant” representations from major vendors satisfy some but not all APP obligations — APP 8 (cross-border disclosure) and APP 11.2 (retention and de-identification) are commonly under-met. The guidance recommends a privacy impact assessment before deployment, regardless of organisational size.

The generative AI guidance is more pointed. It covers training data lawfulness, output accuracy, and the de-identification question — the OAIC’s view is that, in many cases, personal information processed by a generative AI system retains its personal-information character even after generation. The implication for organisations using third-party AI services to process personal information is that the de-identification defence — the “we anonymise before sending it to the AI” pattern — is narrower than typically argued.

The OAIC’s enforcement priorities, as signalled in their 2024–2025 corporate plan and subsequent commentary, include AI use of personal information without adequate privacy impact assessment, secondary use of personal information for AI training without consent, and inadequate disclosure of automated decision-making. Organisations operating in any of those zones are not facing a hypothetical risk.

What tranche 2 likely brings

Tranche 2 of the privacy reforms is in consultation at the time of writing. The provisions most likely to land affect AI deployments at a structural level.

Fair and reasonable test. A new overarching principle that personal information handling must be fair and reasonable in the circumstances. The fair-and-reasonable test would apply on top of the APPs — even where an APP is technically satisfied, the handling could still fail the fair-and-reasonable test. AI deployments that rely on technically-permitted secondary use are the obvious exposure.

Removal of the small business exemption. The current exemption for organisations under $3 million annual turnover may be removed or significantly narrowed. The compliance population for the Privacy Act would expand by an order of magnitude. Many small AI deployments — deployments that have grown larger than the deploying organisation expected — would fall into scope.

Expansion of the personal information definition. The definition could be aligned more closely with international peers (GDPR, UK Data Protection Act). For AI deployments this matters because the broader definition tends to capture inferred information and behavioural data more clearly, which is most of what AI systems generate.

Direct right of action. A statutory right for individuals to take action directly against organisations for serious privacy interferences (beyond the new tort) was canvassed in the consultation. If introduced, the litigation surface for AI deployments expands further.

The timing is uncertain — tranche 2 is unlikely to commence before late 2026 at the earliest, and the political timetable means the package may be modified or staged. The prudent move for organisations with AI deployments is to design now to a posture that survives tranche 2 rather than retrofitting later.

Where this intersects with APRA, ISO 42001, and the EU AI Act

The Privacy Act reforms are the second axis on which AI deployments are now governed in Australia. The first is the APRA prudential standards for regulated entities. The intersection points matter operationally because the same AI deployment is being assessed against multiple frameworks with different control vocabulary.

For an APRA-regulated entity, the Privacy Act and CPS 230 / CPS 234 sit alongside each other — the Privacy Act governs the personal information dimension, CPS 230 governs the operational risk dimension, CPS 234 governs the information security dimension. A single deployment is in scope of all three. The good news is that the underlying control set substantially overlaps; the bad news is that organisations frequently produce three different control documents that drift apart over time.

For an ISO 42001-aligned organisation, the Privacy Act obligations are typically captured in the Annex A controls on data management and transparency, but the ISO standard does not enforce them at the level the Privacy Act does. ISO is the management system; the Privacy Act is the legal floor.

For organisations also exposed to the EU AI Act, the Privacy Act and the AI Act differ in shape — the Privacy Act is principles-based and applies to personal information handling, the AI Act is risk-classification-based and applies to AI systems by category. A single deployment can be in scope of both, which is operationally common for organisations with EU-resident customers or staff. The control deduplication question is real and worth doing once.

The practical compliance path

For organisations with AI tooling already deployed, the practical compliance path under the post-2024 baseline is roughly the following.

Inventory the AI deployments and the personal information they touch. Most organisations cannot answer this question in writing. The first step is the discovery exercise — what AI tools are in use, what personal information do they process, what is the lawful basis, where is the data resident, and who is accountable. Our shadow AI discovery post covers the methodology for the first half; data flow mapping covers the second.
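
A sketch of the inventory record, capturing the discovery questions above. The schema is illustrative rather than prescribed; capture it in whatever register you already use:

```python
from dataclasses import dataclass

@dataclass
class AIDeploymentRecord:
    tool: str                         # e.g. "internal RAG assistant"
    personal_information: list[str]   # categories of personal information processed
    lawful_basis: str                 # primary purpose, reasonable expectation, or consent
    data_residency: str               # where the data is processed and stored
    accountable_owner: str            # a named individual, not a team
    pia_completed: bool = False

inventory = [
    AIDeploymentRecord(
        tool="support-ticket summariser",
        personal_information=["contact details", "support conversation content"],
        lawful_basis="primary purpose (customer support)",
        data_residency="vendor tenant, AU region",
        accountable_owner="Head of Customer Operations",
    ),
]
```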

Privacy impact assessment for material deployments. The OAIC’s guidance is explicit that PIA is the expected control for AI deployments processing personal information. The PIA should cover the lawful basis under APP 6, the security posture under APP 11, the retention design under APP 11.2, the access response approach under APP 12, and the disclosure under APP 1. The structure of an AI-focused PIA is covered in the Digital Employees Runbook.
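
A PIA coverage check can be as simple as a mapping from each APP to a question that must have a documented answer. The questions below paraphrase the paragraph above, not the OAIC’s template:

```python
PIA_COVERAGE = {
    "APP 1":    "Is the automated decision disclosed in the privacy policy?",
    "APP 6":    "What is the lawful basis for each use of personal information?",
    "APP 11":   "How is personal information secured across every copy?",
    "APP 11.2": "When and how are copies destroyed or de-identified?",
    "APP 12":   "How would an access request be fulfilled against this system?",
}

def pia_gaps(answers: dict[str, str]) -> list[str]:
    """Return the APPs that still lack a documented answer."""
    return [app for app in PIA_COVERAGE if not answers.get(app)]

print(pia_gaps({"APP 1": "Disclosed in policy s4.2", "APP 6": "Primary purpose"}))
# ['APP 11', 'APP 11.2', 'APP 12']
```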

Update the privacy policy for the automated decision-making disclosure. The disclosure obligation commences in late 2026, but the policy revision is a useful forcing function for the inventory and PIA work. Aim for specific, decision-level disclosure rather than “we use AI” generality.

Vendor contracts. Existing vendor agreements are unlikely to address the post-2024 obligations cleanly. The clauses that matter most are: lawful basis for the vendor’s use of input data, retention and de-identification commitments, sub-processor controls, model versioning notification, audit rights, and assistance with access requests. Renegotiation at renewal is the cheapest path; mid-term amendments are achievable for material deployments.

Internal training. The 2024 amendments are not yet broadly understood inside organisations. The privacy team is usually current; the AI deployment teams (engineering, product, customer service) typically are not. A short briefing — what’s changed, what they need to do differently — closes most of the operational gap.

Board reporting on privacy and AI exposure. The board’s oversight obligation under the strengthened OAIC framework includes the AI-personal-information surface. The privacy-and-AI risk position should appear in the operational risk reporting at least half-yearly. The structure of this report is covered in our companion piece on board reporting for AI risk.

What this looks like in twelve months

An Australian organisation that takes the post-2024 baseline seriously has, by twelve months in:

  • A current inventory of AI deployments and the personal information each touches
  • A PIA on file for each material deployment, with the OAIC’s 2024 guidance reflected in the analysis
  • A privacy policy that meets the automated decision-making disclosure obligation, drafted to a level of specificity that survives regulatory review
  • Vendor contracts that cover the privacy obligations explicitly
  • Internal procedures for handling APP 12 access requests against AI infrastructure
  • Board oversight of the privacy-and-AI position with a defined cadence

If reading this surfaced gaps, the free readiness checklist is the one-page diagnostic and the runbook is the operating manual that closes them.

The Privacy Act is not the only governance vector for AI in Australia, but it is the one with the broadest reach — every organisation processing personal information is in scope, and the AI deployments most organisations are running already touch personal information. The compliance baseline has moved. The operational gap is, for most, larger than the privacy team initially estimated.

Get started

Bring AI risk under board oversight in two weeks.

A thirty-minute discovery call costs nothing. We confirm fit, scope, and timing, then issue a fixed-fee statement of work within two business days.