For years, one of the most reliable ways to spot a phishing email was to look for poor grammar, awkward phrasing, and obvious spelling mistakes. Security awareness programmes taught employees to treat badly written emails as red flags, and for good reason — many phishing campaigns originated from attackers who were not native speakers of the language they were writing in. That advice is now dangerously outdated.
TL;DR — Key Takeaways
- Criminals are exploiting large language models such as ChatGPT to craft convincing phishing emails at scale
- Flawless spelling and grammar are no longer reliable indicators that an email is legitimate
- AI lets attackers personalise spear phishing campaigns at a scale that was previously impossible
- Defences must shift from spotting language errors to verifying unusual requests through independent channels
Visual Overview
```mermaid
flowchart LR
A["Attacker Uses ChatGPT"] --> B["Generates Phishing Email"]
B --> C["Bypasses Spam Filters"]
C --> D["Victim Clicks Link"]
D --> E["Credentials Harvested"]
E --> F["Account Compromised"]
```
The rise of large language models (LLMs) such as ChatGPT, Claude, and their open-source equivalents has handed criminals a powerful new tool. These AI systems can generate polished, grammatically perfect text in any language, on any topic, in any tone — and they can do it in seconds. The result is a new generation of phishing emails that are virtually indistinguishable from legitimate business communications, and they are being deployed at a scale and speed that was previously impossible.
The End of the Badly Written Phishing Email
Traditional phishing emails often contained telltale errors that trained employees could spot. Misspelled company names, unusual sentence structures, inconsistent formatting, and generic greetings all served as warning signs. These imperfections existed because many phishing operations were run by individuals or groups who lacked fluency in the target language or who were working at such volume that quality control was impossible.
Large language models eliminate these problems entirely. An attacker can prompt an AI to write a professional email in perfect British English, American English, or any other variant. They can specify the tone — formal, friendly, urgent, or apologetic — and the AI will produce text that reads exactly like a message from a colleague, a supplier, or a bank. The spelling will be flawless, the grammar will be impeccable, and the formatting will match the conventions of professional correspondence.
This means that the traditional advice to look for language errors as a primary indicator of phishing is no longer sufficient. Organisations that rely on this approach alone are leaving their employees exposed to a threat that has evolved beyond their training.
How Attackers Use AI to Personalise at Scale
Perhaps the most significant advantage that LLMs give to attackers is the ability to create highly personalised phishing emails at massive scale. Previously, spear phishing — targeted attacks customised for a specific individual — required significant manual effort. An attacker would need to research the target, understand their role, identify their colleagues and business relationships, and then craft a bespoke email. This limited the number of spear phishing attempts any single attacker could launch.
With AI, this process can be largely automated. Attackers can feed publicly available information about a target — their LinkedIn profile, company website, recent social media posts, or industry news — into a language model and instruct it to generate a personalised email that references specific details of the target's professional life. The AI can produce dozens or hundreds of these customised messages per hour, each one tailored to a different individual.
Contextual Awareness That Defeats Suspicion
The personalisation goes beyond simply inserting a name. AI-generated phishing emails can reference genuine business events, industry trends, or recent company announcements to create a context that makes the email feel timely and relevant. For example, an attacker targeting an accountancy firm during tax season might use AI to generate emails that reference specific regulatory changes, upcoming filing deadlines, or common software tools used in the profession. The resulting message would read like a perfectly normal communication from a peer or service provider.
This level of contextual awareness makes it exceptionally difficult for recipients to distinguish malicious emails from legitimate ones based on content alone. The AI-powered phishing threat landscape has fundamentally shifted the balance of advantage towards attackers.
Why Traditional Email Detection Struggles
Many email security systems rely on pattern matching and known indicators of compromise to detect phishing. They scan for suspicious links, known malicious domains, blacklisted sender addresses, and common phishing templates. Some more advanced systems use natural language processing to identify language patterns associated with phishing attempts.
AI-generated phishing emails present a significant challenge to all of these approaches. Because each email is freshly generated, it will not match any known templates. The language is natural and varied, making it difficult for NLP-based detectors to flag it as suspicious. And when the attacker combines AI-generated text with a legitimate-looking sender domain (via email spoofing or compromised accounts), the email may pass through multiple layers of technical defence without triggering a single alert.
Polymorphic Phishing Campaigns
Attackers can use LLMs to create what security researchers call polymorphic phishing campaigns — campaigns where every email is slightly different in wording, structure, and style, but all carry the same malicious payload. This variation makes it extremely difficult for signature-based detection systems to identify and block the campaign, because there is no single template or pattern to match against. Each email is, in effect, unique.
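The weakness of signature matching against polymorphic campaigns can be sketched in a few lines. The snippet below (a toy illustration, with invented email text and a hypothetical URL) shows two reworded variants of the same lure: a hash-based signature catches only the known variant, while the payload-level indicator, the embedded URL, stays constant across both.

```python
import hashlib
import re

# Two AI-generated variants of the same phishing lure: different wording,
# identical malicious payload (a hypothetical URL for illustration).
variant_a = (
    "Hi Sam, your invoice is overdue. Please review it today at "
    "https://invoice-portal.example.com/pay to avoid a late fee."
)
variant_b = (
    "Dear Sam, we noticed an unpaid invoice on your account. Settle it at "
    "https://invoice-portal.example.com/pay at your earliest convenience."
)

# A signature-based filter: block emails whose hash matches a known template.
known_signatures = {hashlib.sha256(variant_a.encode()).hexdigest()}

def matches_signature(text: str) -> bool:
    return hashlib.sha256(text.encode()).hexdigest() in known_signatures

def extract_urls(text: str) -> set[str]:
    return set(re.findall(r"https?://\S+", text))

# The known template is caught, but the reworded variant slips past...
print(matches_signature(variant_a))  # True
print(matches_signature(variant_b))  # False

# ...while the embedded URL is identical in both, so payload-level
# indicators remain a viable detection signal.
print(extract_urls(variant_a) == extract_urls(variant_b))  # True
```

This is why modern filters increasingly inspect payload indicators (URLs, attachments, sender infrastructure) rather than the wording of the message itself.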
The Multi-Language Advantage
Before AI, language was a significant barrier for many phishing operations. An attacker based in one country who wanted to target businesses in another would need to either learn the target language to a professional standard or recruit native speakers to write the phishing content. This limited the geographic reach of many campaigns.
Large language models are fluent in dozens of languages and can switch between them effortlessly. An attacker can generate phishing emails in German, French, Japanese, Portuguese, or any other major language with the same ease and quality as English. This means that businesses in non-English-speaking countries, which may have previously felt somewhat insulated from phishing attacks, are now equally vulnerable.
For multinational small businesses that communicate in multiple languages, this is particularly concerning. An attacker can target different employees in different languages, matching the communication norms of each office or region.
Adapting Your Training Programme for the AI Era
The emergence of AI-generated phishing demands a fundamental shift in how organisations train their employees. Traditional training that focuses primarily on spotting language errors needs to be updated to address the new reality. Here is how to adapt your approach:
Teach Behavioural Indicators Instead of Language Errors
Train employees to focus on the behaviour being requested rather than the quality of the writing. Regardless of how well an email is written, certain requests should always trigger suspicion:
- Urgency and time pressure: Any email that insists on immediate action or warns of consequences for delay should be treated with caution, regardless of how professionally it is written.
- Requests to bypass normal procedures: Legitimate business communications rarely ask recipients to skip approval steps, ignore verification protocols, or keep a transaction confidential.
- Unusual payment instructions: Any request to change bank details, pay a new supplier, or make a payment through an unfamiliar channel warrants independent verification.
- Requests for credentials or sensitive data: No legitimate service will ask for passwords, authentication codes, or sensitive personal information via email.
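The behavioural cues above lend themselves to simple rule-based screening. The sketch below is a toy illustration, not a production filter: the phrase lists and the sample message are invented for the example, and a real system would use far richer signals.

```python
# Toy behavioural screen: flag an email body against the four cue
# categories described above. Phrase lists are invented for illustration.
BEHAVIOURAL_CUES = {
    "urgency": ["immediately", "within 24 hours", "final notice", "act now"],
    "bypass_procedure": ["keep this confidential", "skip the usual approval"],
    "payment_change": ["new bank details", "updated account number"],
    "credential_request": ["confirm your password", "verification code"],
}

def flag_behaviours(email_body: str) -> list[str]:
    """Return the names of behavioural cues found in the email body."""
    body = email_body.lower()
    return [
        cue
        for cue, phrases in BEHAVIOURAL_CUES.items()
        if any(phrase in body for phrase in phrases)
    ]

message = (
    "Please act now: our bank has changed, so use the new bank details "
    "attached and keep this confidential until the transfer completes."
)
print(flag_behaviours(message))
# ['urgency', 'bypass_procedure', 'payment_change']
```

Note that a perfectly written email still trips every rule here, which is the point: the request itself, not the prose quality, is what gives the attack away.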
Emphasise Verification Over Detection
Since employees can no longer reliably detect phishing by reading the email alone, organisations should shift their emphasis from detection to verification. This means establishing clear, simple procedures for verifying any email that requests a sensitive action — such as picking up the phone and calling the supposed sender on a known number, or checking with a colleague before clicking a link.
Update Phishing Simulations
If your organisation runs phishing simulations, make sure the test emails reflect the current threat landscape. Simulations that rely on poorly written emails with obvious red flags will give your team a false sense of confidence. Modern simulations should include well-crafted, personalised messages that mirror the quality of AI-generated phishing attempts.
Technical Defences That Still Matter
While training is essential, technical controls remain a critical layer of defence. Several measures can help reduce the risk of AI-generated phishing reaching your employees:
- Email authentication protocols: Implementing DMARC, SPF, and DKIM helps prevent attackers from spoofing your domain and makes it harder for spoofed emails to reach your inbox.
- Advanced threat protection: Modern email security platforms use machine learning to analyse email metadata, sender behaviour, and link destinations in addition to content. These systems are more effective against AI-generated text than traditional pattern-matching filters.
- Link isolation and sandboxing: Technologies that rewrite URLs in emails and open them in isolated environments can prevent employees from reaching malicious websites, even if they click a link in a convincing phishing email.
- Multi-factor authentication (MFA): Even if an employee's credentials are compromised through a phishing attack, MFA provides an additional barrier that can prevent the attacker from accessing accounts.
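To make the first bullet concrete: SPF and DMARC policies are published as DNS TXT records on the sending domain. A minimal sketch is shown below; the domain, mail provider, and reporting address are placeholders, and a real deployment would choose its own policy (`p=none`, `quarantine`, or `reject`) after a monitoring period.

```
; SPF: authorise only the listed provider to send mail for example.com
example.com.         IN TXT "v=spf1 include:_spf.example-mail-provider.com -all"

; DMARC: quarantine messages that fail SPF/DKIM alignment, and send
; aggregate reports to the given mailbox
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Publishing these records does not stop AI-generated text, but it does stop attackers from sending that text from your own domain.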
Looking Ahead: Preparing for the Next Wave
The use of AI in phishing is not a passing trend — it is a permanent shift in the threat landscape. As language models continue to improve, the quality and sophistication of AI-generated phishing will only increase. Future developments may include AI systems that can engage in multi-turn email conversations, adapting their approach based on the recipient's responses, or that can automatically generate convincing phishing websites to accompany their emails.
For small businesses, the implications are clear. Security awareness training must evolve to match the sophistication of the threats employees face. The focus must shift from spotting obvious errors to questioning unusual requests, verifying identities through independent channels, and building a culture where healthy scepticism is valued and rewarded. The days when a misspelled word was your best defence are over — but with the right training and procedures in place, your team can still stay one step ahead of the attackers.