WormGPT presents itself as a black-hat alternative to GPT models, designed specifically for malicious activities, according to SlashNext. Credit: Tima Miroshnichenko

Malicious actors are now creating custom generative AI tools similar to ChatGPT, but easier to use for nefarious purposes, and are advertising them to fellow bad actors, according to a blog post by anti-phishing company SlashNext.

SlashNext gained access to a tool known as WormGPT through a prominent online forum that's often associated with cybercrime. "This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities," SlashNext said.

WormGPT is an AI module based on GPT-J, an open-source large language model released in 2021. Its features include unlimited character support, chat memory retention, and code formatting capabilities.

WormGPT used in business email compromise attacks

Cybercriminals use generative AI to automate the creation of compelling fake emails personalized to the recipient, increasing the chances that the attack succeeds, according to SlashNext. "WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data," SlashNext said.

The developer of WormGPT described it as the "biggest enemy of the well-known ChatGPT" that "lets you do all sorts of illegal stuff."

ChatGPT, the interactive chatbot developed by OpenAI, incorporates a number of safeguards designed to prevent it from encouraging or facilitating dangerous or illegal activities. This makes it less useful to cybercriminals, although some of those safeguards can be overcome with careful prompt design.

SlashNext tested WormGPT by using it to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. "The results were unsettling.
WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks," SlashNext said.

Benefits of using generative AI for BEC attacks

The use of generative AI democratizes the execution of sophisticated BEC attacks, according to SlashNext, allowing attackers with limited skills to use the technology and making it accessible to a broader spectrum of cybercriminals. Generative AI can also produce emails free of grammatical errors, making them seem legitimate and reducing the likelihood that they will be flagged as suspicious.

In one advertisement observed by SlashNext on a forum, attackers recommended composing an email in one's native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality. "This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks," SlashNext said.

Jailbreaks for sale

Along with the development of dedicated generative AI tools for use in BEC attacks, SlashNext has also observed cybercriminals offering "jailbreaks" for interfaces like ChatGPT. These specialized prompts enable users to disable the safeguards that developers place on mainstream generative AI tools.

Last month, cybersecurity experts demonstrated the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code designed to evade endpoint detection and response (EDR) systems.

Google's generative AI tool, Bard, could be an easier target than ChatGPT for jailbreakers. Earlier this week, CheckPoint researchers said that Bard's anti-abuse restrictions in the realm of cybersecurity are significantly weaker than those of ChatGPT, making it easier to use Bard to generate malicious content.
Earlier, Mackenzie Jackson, developer advocate at cybersecurity company GitGuardian, told CSO Online that the malware ChatGPT can be tricked into producing is far from groundbreaking. However, Jackson said, as the models improve and consume more sample data, and as different products come onto the market, AI may end up creating malware that can be detected only by other, defensive AI systems.