Inline Code

Email security

AI-assisted phishing: what's actually new

The volume of AI-assisted phishing has gone up; the success rate per attempt has not changed as much as the headlines suggest. The substantive change is the resource asymmetry — and what it means for your defensive program.

Mathew Sayed
5 min read

The reporting on AI-assisted phishing leans toward the catastrophic — AI makes every phisher fluent, every spoof perfect, every social engineering attack indistinguishable from legitimate communication. The reality, in the engagements where we’ve reviewed the actual attack patterns reaching mid-market clients in 2025, is more mundane and arguably more concerning for a different reason.

The success rate per phishing attempt has not gone up dramatically. The success rate per targeted attack — spear phishing, business email compromise, executive impersonation — has gone up modestly. The substantive change is the resource asymmetry: AI tooling has reduced the cost of producing a competent targeted attack from hours to minutes. The volume of targeted attacks is now the operational concern.

This post is the practical version: what’s actually changing, what to do about it, and what’s hype.

What’s actually new

Three substantive shifts:

1. Native-quality language. Phishing emails written in fluent, contextually appropriate English (or any other language). The “tell” of broken grammar that previously caught a portion of phishing is mostly gone. This affects both broadcast phishing (now uniformly competent) and targeted phishing (now culturally and linguistically calibrated to the recipient).

2. Volume of targeted attacks. Spear phishing previously required research time per target — LinkedIn lookup, organisation chart inference, recent activity research, message tailoring. AI tooling now performs that research and drafts the personalised message in minutes. The economic constraint that limited targeted attacks to high-value targets has weakened. Mid-market staff and middle managers are increasingly subject to the kind of personalisation that was previously reserved for executives.

3. Voice and video quality. Voice cloning is genuinely good now: minutes of source audio produce a convincing impersonation. Real-time video impersonation in meetings is harder but no longer hypothetical. Voice-phishing (“vishing”) scams impersonating executives have demonstrably increased, with documented losses in Australian financial services.

What hasn’t changed:

  • The underlying social engineering patterns (urgency, authority, fear, plausibility).
  • The technical delivery mechanisms (email, SMS, phone, increasingly Teams/Slack).
  • The defences that work (process discipline, out-of-band verification, technical controls).

What this means for your defensive program

The implications for the defensive program are not novel — they are the same defences applied with more rigour.

Email security infrastructure

The baseline that should already be in place:

  • DMARC at p=reject for all owned domains. Most organisations are at p=none or p=quarantine; the gap is well-known but takes effort to close because it requires authenticating all legitimate sending sources first.
  • SPF and DKIM properly configured for all sending sources, including SaaS marketing and HR tools that are commonly forgotten.
  • External sender warnings rendered in the email client for any email from outside the organisation. M365 supports this natively, as does Google Workspace.
  • Look-alike domain monitoring. Continuous monitoring for new domain registrations that resemble your organisation’s domains. Several services do this for low cost.
  • Brand impersonation reporting. Documented procedure for staff and customers to report suspected phishing impersonating the organisation, with rapid takedown action via the relevant providers.
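For concreteness, a DMARC record at enforcement looks like the following DNS TXT record (the domain and reporting address are placeholders; tune the `rua` destination and alignment tags to your own setup):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; adkim=s; aspf=s"
```

The `p=reject` tag is the enforcement policy; `rua` directs aggregate reports to a mailbox you monitor while closing the gap.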

Process discipline for high-impact actions

The actions where AI-assisted impersonation is most dangerous: financial transactions, credential resets, sensitive data transfers, urgent operational decisions. The defence is procedural:

  • Out-of-band verification for any financial transaction or credential reset requested via email or message. Out-of-band means a different channel — not just a different email, but a phone call to a known number or an in-person verification.
  • Two-person authorisation for transactions above defined thresholds. The threshold should be calibrated to organisational risk tolerance.
  • Mandatory cooling-off periods for unusually urgent requests. “I need this done immediately” is, paradoxically, a stronger reason to slow down rather than speed up.
  • Documented escalation paths for unusual requests. Staff who receive a suspicious request know who to contact, and the contact is not via the same channel that delivered the suspicious request.
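The first three controls above are simple enough to encode directly in a payments workflow. A minimal sketch, with illustrative threshold and cooling-off values that you would calibrate to your own risk tolerance:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative values -- calibrate both to organisational risk tolerance.
TWO_PERSON_THRESHOLD = 10_000            # amounts above this need two approvers
URGENT_COOLING_OFF = timedelta(hours=4)  # minimum delay for "urgent" requests


@dataclass
class PaymentRequest:
    amount: float
    requested_at: datetime
    flagged_urgent: bool
    approvers: set = field(default_factory=set)


def may_execute(req: PaymentRequest, now: datetime) -> bool:
    """Apply the procedural controls: two-person rule, then cooling-off."""
    if req.amount > TWO_PERSON_THRESHOLD and len(req.approvers) < 2:
        return False  # a second, independent authorisation is still outstanding
    if req.flagged_urgent and now - req.requested_at < URGENT_COOLING_OFF:
        return False  # urgency triggers a delay, not a fast-track
    return True
```

The point of the sketch is that urgency *adds* a gate rather than bypassing one, which is exactly the inversion the attacker's script relies on you not having.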

Awareness training that’s actually effective

Phishing awareness training has a poor track record overall. The training that does work shares characteristics:

  • Realistic, frequent, low-friction simulations rather than annual lectures. Staff encounter simulated phishing roughly monthly, with immediate feedback when they click.
  • Calibrated to current attack patterns. Simulations reflect what attackers are actually doing this quarter — including AI-quality content — not generic templates.
  • No punishment for clicking. Punishment causes hiding, not learning. Click-through is data; the data drives further calibration.
  • A positive feedback loop for reporting. Staff who report suspicious emails are acknowledged and told whether the report was a true positive. This builds the reporting habit, which is the highest-leverage human behaviour in your defensive program.
  • Targeted training for high-risk roles. Finance, executives, IT administrators, and HR are higher-value targets. They get more frequent and more sophisticated training.

The training is necessary but it is not the primary defence. The primary defence is the technical and procedural infrastructure.

Authentication infrastructure for impersonation-sensitive workflows

Where impersonation has a credible attack path, technical authentication helps:

  • Code phrases or shared secrets for high-value voice transactions, particularly between executives and finance.
  • Video verification for unusual instructions, with awareness that real-time video impersonation is harder than voice but no longer impossible.
  • In-product approval flows that move sensitive actions out of email. Bank transfer requests inside the banking platform with proper approval; HR changes inside the HR system; not via email.
  • Cryptographic signing for organisation-wide announcements where impersonation has high value. (S/MIME or similar.)
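If a code phrase is checked by a system rather than a person, store and compare it carefully. A minimal sketch (the phrase is hypothetical; a production system would use a salted KDF such as `hashlib.scrypt` rather than a bare hash):

```python
import hashlib
import hmac


def verify_code_phrase(candidate: str, stored_hash: bytes) -> bool:
    """Compare a supplied code phrase to a stored hash in constant time."""
    digest = hashlib.sha256(candidate.strip().lower().encode()).digest()
    # hmac.compare_digest avoids leaking match position via timing.
    return hmac.compare_digest(digest, stored_hash)


# Enrolment: store only the hash, never the phrase itself.
stored = hashlib.sha256("osprey brunch".encode()).digest()
```

`hmac.compare_digest` is the relevant detail: a naive `==` comparison can leak how much of the secret matched through response timing.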

What’s hype

Worth pushing back on:

  • “AI-generated phishing is undetectable.” It’s better, not undetectable. The detection still works at the technical layer (SPF/DKIM/DMARC misconfiguration on attacker domains, suspicious link destinations, sender reputation) — these are content-independent. The technical infrastructure does not become useless because the content is fluent.

  • “You need AI-detection-AI.” AI-based email security adds value but it’s not a category-changer. The fundamentals — DMARC, sender authentication, sandbox detonation, link rewriting — do most of the work. AI augments rather than replaces.

  • “This requires a budget overhaul.” For most mid-market organisations, the AI-assisted phishing era is best addressed by closing the existing-baseline gaps that pre-dated AI: DMARC at reject, MFA on every account, process discipline on financial transactions. If those aren’t done, AI tooling is not the limiting factor in your defensive posture.

A practical first move

This week, check the DMARC posture on your owned domains. Use a public DMARC checker or query the _dmarc.<your-domain> TXT record. If it’s p=none or absent, your domains are spoofable. The path to p=reject is well-documented and a few weeks of work for most organisations. Closing that gap delivers more practical defence against AI-assisted phishing than any AI-based email security addition.

For organisations doing this as part of broader email security work, the Security Posture Assessment reviews the full email security configuration alongside the broader infosec posture.

Get started

Bring AI risk under board oversight in two weeks.

A thirty-minute discovery call costs nothing. We confirm fit, scope, and timing, then issue a fixed-fee statement of work within two business days.