
AI-Powered Phishing Attacks in 2026: What Has Changed

Max, Technical Director · 12 March 2026

The AI Phishing Explosion

Phishing has always been the most common initial attack vector, accounting for 36% of all data breaches according to Verizon's 2024 DBIR. What has changed is quality and scale. Research from SlashNext found a 1,265% increase in AI-generated phishing emails between 2023 and 2025. Harvard and MIT researchers demonstrated that LLM-generated phishing emails achieve a 78% open rate compared to 36% for human-crafted equivalents. The grammatical errors, awkward phrasing, and cultural mismatches that used to betray phishing attempts have been eliminated by AI.

Deepfake Voice and Video Attacks

In February 2024, a finance worker at Arup in Hong Kong paid out $25 million after a video call with what appeared to be the company's CFO and several colleagues — all deepfakes. In the UK, the NCSC warned in January 2025 that deepfake voice phishing (vishing) attacks had tripled year-on-year. Attackers clone a CEO's voice from earnings calls, investor presentations, or LinkedIn videos, then call finance teams requesting urgent transfers. Existing email security filters cannot protect against voice-based social engineering.

Automated Spear-Phishing at Scale

Traditional spear-phishing was expensive — attackers had to manually research each target. AI has eliminated that bottleneck. Tools can now scrape LinkedIn profiles, company websites, and press releases to generate personalised emails referencing real projects, colleagues, and events. A single operator can generate thousands of unique, highly targeted spear-phishing emails per day. Microsoft's Digital Defense Report 2024 recorded 4,000 password attacks per second, with AI-assisted phishing contributing to a 40% increase in credential theft campaigns.

How to Defend Against AI Phishing

Traditional secure email gateways rely on known signatures and sender reputation. These are insufficient against novel, AI-generated content sent from compromised legitimate accounts. You need AI-powered defence to match AI-powered attacks. Coro's email security uses machine learning to analyse writing patterns, context, and behavioural anomalies rather than relying solely on signatures. Multi-factor authentication remains critical: even when credentials are compromised, MFA stops the account takeover. Security awareness training must also be updated to cover deepfake voice and video scenarios.

  • Deploy AI-powered email security that analyses behavioural patterns
  • Enforce MFA on all accounts — phishing-resistant FIDO2 keys where possible
  • Train staff on deepfake voice and video recognition
  • Implement out-of-band verification for financial transactions
  • Use ADX to prevent data exfiltration even if phishing succeeds
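To make the "behavioural patterns" idea concrete, here is a deliberately simplified sketch of rule-based risk scoring for inbound mail. This is illustrative only, not Coro's detection logic: the `score_email` helper and its keyword lists are hypothetical, and a production system would use trained models over far richer signals (writing-style baselines, sending history, organisational context).

```python
# Illustrative only: a toy rule-based anomaly scorer for inbound email.
# Real AI-powered email security uses trained models, not keyword lists.

URGENCY_TERMS = {"urgent", "immediately", "asap", "wire", "confidential"}
FINANCE_TERMS = {"invoice", "payment", "transfer", "bank details"}

def score_email(sender_domain: str, known_domains: set[str], body: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    text = body.lower()
    score = 0
    if sender_domain not in known_domains:
        score += 2  # first-contact or lookalike domain
    if any(t in text for t in URGENCY_TERMS):
        score += 1  # pressure tactics typical of BEC lures
    if any(t in text for t in FINANCE_TERMS):
        score += 1  # payment-change / transfer request
    if "keep this between us" in text or "do not call" in text:
        score += 2  # attempts to defeat out-of-band verification
    return score

if __name__ == "__main__":
    risky = score_email(
        "arup-payments.example",
        {"arup.com"},
        "URGENT: wire transfer needed immediately. Keep this between us.",
    )
    print(risky)  # → 6: hold for out-of-band verification
```

Note what the last rule captures: AI-written lures increasingly pre-empt the victim's safeguards ("don't call to confirm"), which is exactly why out-of-band verification for financial transactions must be mandatory policy, not optional habit.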

Frequently Asked Questions

How effective is AI-generated phishing?

Research from Harvard and MIT showed that LLM-generated phishing emails achieve a 78% open rate, compared to 36% for human-crafted equivalents. The emails are grammatically perfect, culturally appropriate, and highly personalised.

What is deepfake phishing?

Deepfake phishing uses AI-generated voice or video to impersonate trusted individuals. Attackers clone voices from public recordings and use them in phone calls or video conferences to authorise fraudulent transactions. Arup lost $25 million to a deepfake video call in 2024.

Can email filters stop AI phishing?

Traditional email filters that rely on signatures, known malicious domains, and sender reputation struggle against AI phishing. AI-generated emails use novel content and often come from compromised legitimate accounts. You need AI-powered email security that analyses writing patterns and behavioural context.

Tags: phishing, AI threats, deepfake, social engineering, spear phishing

Want to discuss this with our team?

Book a free 20-minute call with David or Max.