Apurva Venkat
Special Correspondent

WormGPT: a generative AI tool to compromise business emails

News
Jul 17, 2023 | 4 mins
Email Security | Generative AI

WormGPT presents itself as a black-hat alternative to GPT models, designed specifically for malicious activities, according to SlashNext.

Credit: Tima Miroshnichenko

Malicious actors are now creating custom generative AI tools similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, they are also advertising them to fellow bad actors, according to a blog post by anti-phishing company SlashNext.

SlashNext gained access to a tool known as WormGPT through a prominent online forum that’s often associated with cybercrime.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” SlashNext said. 

WormGPT is an AI module based on GPT-J, an open-source large language model developed by EleutherAI in 2021. Its features include unlimited character support, chat memory retention, and code formatting capabilities.
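
GPT-J's weights are publicly available, which underscores how low the barrier to entry is. As a purely illustrative sketch, assuming the EleutherAI/gpt-j-6B checkpoint on Hugging Face and the transformers library (SlashNext's report does not detail WormGPT's actual setup), loading and prompting the base model takes only a few lines of Python:

    # Illustrative only: loads the public GPT-J checkpoint that WormGPT is
    # reportedly derived from. The checkpoint name and library are assumptions;
    # nothing here reflects WormGPT's actual configuration or training data.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")  # roughly 24GB of weights

    prompt = "Write a short note reminding a colleague about tomorrow's meeting."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Unlike hosted services such as ChatGPT, a self-hosted open model enforces no usage policy, which is what makes it attractive as a base for tools like WormGPT.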

WormGPT used in business email compromise attacks

Cybercriminals use generative AI to automate the creation of compelling fake emails personalized to the recipient, increasing the attack's chances of success, according to SlashNext.

“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data,” SlashNext said.

The developer of WormGPT described it as the “biggest enemy of the well-known ChatGPT” that “lets you do all sorts of illegal stuff.”

ChatGPT, the interactive chatbot developed by OpenAI, incorporates a number of safeguards designed to prevent it from encouraging or facilitating dangerous or illegal activities. This makes it less useful to cybercriminals, although with careful prompt design some of the safeguards can be overcome.

SlashNext tested WormGPT by using it to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” SlashNext said. 

Benefits of using generative AI for BEC attacks

The use of generative AI democratizes the execution of sophisticated BEC attacks, according to SlashNext, allowing even attackers with limited skills to use the technology and making it accessible to a broader spectrum of cybercriminals.

Generative AI can also create emails without grammar errors, making them seem legitimate and reducing the likelihood of being flagged as suspicious.

In one of the advertisements observed by SlashNext on a forum, attackers recommended composing an email in one’s native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality.

“This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks,” SlashNext said. 

Jailbreaks for sale

Along with the development of dedicated generative AI tools for use in BEC attacks, SlashNext has also observed cybercriminals offering "jailbreaks" for interfaces like ChatGPT. These specialized prompts enable users to disable the safeguards placed on mainstream generative AI tools by their developers.

Last month, cybersecurity experts demonstrated the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.

Google’s generative AI tool, Bard, could be an easier target than ChatGPT for jailbreakers. Earlier this week, Check Point researchers said that Bard’s anti-abuse restrictions in the realm of cybersecurity are significantly weaker than those of ChatGPT, making it easier to use Bard to generate malicious content.

Earlier, Mackenzie Jackson, developer advocate at cybersecurity company GitGuardian, told CSO Online that the malware that ChatGPT can be tricked into producing is far from ground-breaking. However, Jackson said, as the models improve and consume more sample data, and as different products come onto the market, AI may end up creating malware that can only be detected by other, defensive AI systems.
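
On the defensive side, even simple machine-learning classifiers hint at what AI-versus-AI detection might look like. The sketch below is a toy example with invented training data, using scikit-learn to flag payment-pressure emails; production email-security systems rely on far richer models and signals:

    # Toy illustration of AI-assisted BEC detection; the training emails and
    # labels are invented for demonstration, not drawn from any real dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Urgent: wire the attached invoice today, the CEO has approved it",  # suspicious
        "Please update the vendor bank details before processing payment",   # suspicious
        "Team lunch is moved to Friday, see you there",                      # benign
        "Attached are the meeting notes from this morning",                  # benign
    ]
    labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(emails, labels)
    print(classifier.predict(["Quick favor: process this urgent invoice for the CEO"]))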

Apurva Venkat
Special Correspondent

Apurva Venkat is principal correspondent for the India editions of CIO, CSO, and Computerworld. She has previously worked at ISMG, IDG India, Bangalore Mirror, and Business Standard, where she reported on developments in technology, businesses, startups, fintech, e-commerce, cybersecurity, civic news, and education.
