Cybercriminals Use WormGPT to Breach Email Security

K.C. Sabreena Basheer | Last Updated: 19 Jul, 2023
4 min read

The ever-evolving landscape of cybercrime has given rise to new and dangerous tools. Generative AI tools, including OpenAI's ChatGPT and the notorious cybercrime tool WormGPT, are emerging as potent weapons in Business Email Compromise (BEC) attacks. These sophisticated AI models enable cybercriminals to craft highly convincing and personalized phishing emails, increasing the success rate of their malicious endeavors. This article delves into the mechanics of these attacks, explores the inherent risks of AI-driven phishing, and examines the unique advantages of generative AI in facilitating cybercrime.

Also read: Chinese Hack Microsoft Cloud, Goes Undetected for Over a Month

AI-Driven BEC Attacks: The New Threat on the Horizon

The proliferation of artificial intelligence (AI) technologies, particularly OpenAI’s ChatGPT, has opened up new avenues for cybercriminals to exploit. ChatGPT, a powerful AI model, can generate human-like text based on given inputs. This capacity allows malicious actors to automate the creation of deceptive emails personalized to the recipient, thereby enhancing the likelihood of a successful attack.

Also Read: Top 10 AI Email Automation Tools to Use in 2023

Unveiling Real Cases: The Power of Generative AI in Cybercrime Forums

In recent discussions on cybercrime forums, cybercriminals have showcased the potential of harnessing generative AI to refine phishing emails. One method involves composing the email in the attacker’s native language, translating it, and feeding it into ChatGPT to enhance its sophistication and formality. This tactic empowers attackers to fabricate persuasive emails, even if they lack fluency in a particular language.

Also Read: AI Discriminates Against Non-Native English Speakers

“Jailbreaking” AI: Manipulating Interfaces for Malicious Intent

An unsettling trend on cybercrime forums involves the distribution of “jailbreaks” for AI interfaces like ChatGPT. These specialized prompts manipulate the AI into generating output that may disclose sensitive information, produce inappropriate content, or execute harmful code. The growing popularity of such practices highlights the challenges in maintaining AI security against determined cybercriminals.

Also Read: PoisonGPT: Hugging Face LLM Spreads Fake News

Enter WormGPT: The Blackhat Alternative to GPT Models

WormGPT, a recently discovered AI module, has emerged as a malicious alternative to GPT models, designed explicitly for nefarious activities. Built on the GPT-J language model, released in 2021, WormGPT boasts features like unlimited character support, chat memory retention, and code formatting capabilities.

Also Read: ChatGPT Investigated by the Federal Trade Commission for Potential Harm

Unveiling WormGPT’s Dark Potential: The Experiment

Testing WormGPT's capabilities in BEC attacks revealed alarming results. The AI model generated an email that was highly persuasive and strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks. Unlike ChatGPT, WormGPT operates without ethical boundaries or limitations, posing a significant threat even in the hands of novice cybercriminals.

Also Read: Criminals Using AI to Impersonate Loved Ones

Advantages of Generative AI in BEC Attacks

Generative AI confers several advantages to cybercriminals in executing BEC attacks:

  • Exceptional Grammar: AI-generated emails possess impeccable grammar, reducing the likelihood of being flagged as suspicious.
  • Lowered Entry Threshold: The accessibility of generative AI democratizes sophisticated BEC attacks, enabling even less skilled attackers to employ these powerful tools.

Preventative Strategies: Safeguarding Against AI-Driven BEC Attacks

To combat the rising threat of AI-driven BEC attacks, organizations can implement the following strategies:

  • BEC-Specific Training: Companies should develop comprehensive, regularly updated training programs to counter BEC attacks, emphasizing how attackers use AI to augment their tactics. This training should be an integral part of employee professional development.
  • Enhanced Email Verification Measures: Strict email verification processes should be enforced, automatically flagging emails impersonating internal executives or vendors and identifying keywords associated with BEC attacks.
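To make the second strategy concrete, here is a minimal sketch of the kind of rule-based check an email verification layer might apply. Everything in it is a hypothetical illustration: the keyword list, the protected executive names, and the `example.com` domain are all assumed placeholders, and a production system would combine such rules with SPF/DKIM/DMARC validation and ML-based filtering rather than rely on them alone.

```python
from email.utils import parseaddr

# Illustrative keyword list associated with BEC lures (assumption, not exhaustive)
BEC_KEYWORDS = {"wire transfer", "urgent payment", "invoice attached",
                "change of bank details", "gift cards"}

# Display names of internal executives to guard against impersonation
# (assumed to come from a company directory)
PROTECTED_NAMES = {"jane doe", "john smith"}
INTERNAL_DOMAIN = "example.com"  # hypothetical internal domain

def flag_bec(from_header: str, subject: str, body: str) -> list[str]:
    """Return a list of reasons this email looks like a possible BEC attempt."""
    reasons = []
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # Rule 1: an executive's display name paired with an external sending domain
    if display_name.strip().lower() in PROTECTED_NAMES and domain != INTERNAL_DOMAIN:
        reasons.append(f"display name '{display_name}' matches an executive, "
                       f"but sender domain is '{domain}'")

    # Rule 2: BEC-associated keywords in the subject or body
    text = f"{subject} {body}".lower()
    for kw in BEC_KEYWORDS:
        if kw in text:
            reasons.append(f"BEC keyword found: '{kw}'")
    return reasons

reasons = flag_bec("Jane Doe <jane.doe@gmail.com>",
                   "Urgent payment needed",
                   "Please process this wire transfer today.")
```

Running the sketch on the sample message above flags both the display-name impersonation and the lure keywords, which is exactly the class of signal the article suggests automated verification should surface for review.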

Also Read: 6 Steps to Protect Your Privacy While Using Generative AI Tools

Companies are urged to update their cyber security measures to stay safe from AI-powered cyber attacks.

Our Say

Generative AI, while revolutionary, has also opened new doors for cybercriminals to exploit. WormGPT’s emergence as a malicious AI tool exemplifies the growing need for robust security measures against AI-driven cybercrime. Organizations must stay vigilant and continuously adapt to evolving threats to protect themselves and their employees from the dangers of AI-driven BEC attacks.

Sabreena Basheer is an architect-turned-writer who's passionate about documenting anything that interests her. She's currently exploring the world of AI and Data Science as a Content Manager at Analytics Vidhya.
