As AI and LLMs spread across the internet, they have made their way onto darknet forums, where users discuss ways to abuse the new tools available. The surge in dark web activity saw over 3,000 posts regarding illicit uses and versions of ChatGPT. Naturally, this opened doors to new AI-powered tools for a range of malicious activities, from malware creation to fraudulent schemes. Let’s look at the role AI played on the dark web in 2023.
Quick Facts
- Approximately 3,000 dark web discussions related to the misuse of ChatGPT and LLMs were recorded.
- Discussions included creating alternative jailbreaking techniques and the misuse of chatbots.
- The peak of these discussions occurred in March 2023, with ongoing interest from January to December 2023.
- A study found a 1,265% increase in phishing emails since ChatGPT’s release in 2022.
- Tools like WormGPT, XXXGPT, and FraudGPT, marketed as unrestricted alternatives to ChatGPT, saw an increase in discussions.
- OpenAI recently suspended a developer for creating a chatbot that imitated a U.S. Congressman.
- In 2023, 249 offers to distribute and sell malicious prompts were found on the dark web.
- The WormGPT project shut down in August 2023 due to community backlash, but fake advertisements for access continue.
- Competition among language model tools is rising with projects like xxxGPT, WolfGPT, FraudGPT, and DarkBERT.
Security researchers at Kaspersky Digital Footprint Intelligence observed a notable spike in darknet discussions regarding the illicit use of ChatGPT and large language models (LLMs).
ChatGPT is a language processing tool powered by AI technology and owned by OpenAI. The tool allows users to hold human-like conversations with the chatbot. The language model can also assist users with tasks such as composing emails, essays, and code.
Nearly 3,000 dark web discussions related to the use of ChatGPT and large language models (LLMs) for malicious activities were recorded. Other discussions covered the creation of alternative jailbreaking techniques and the misuse of chatbots.
While discussion on forums like Dread peaked in March 2023, ongoing activity from January to December 2023 highlights the sustained interest of malicious actors in exploiting AI-powered technology for illicit use.
Research also noted an increase in discussions on the dark web related to tools like WormGPT, XXXGPT, and FraudGPT. These large language models have fewer restrictions than ChatGPT and are marketed as alternatives.
Kaspersky’s report comes days after OpenAI announced the suspension of one of the platform’s developers. The developer was cited for creating a chatbot that imitated a U.S. Congressman. The organization noted that the act “violated its rules on political campaigning or impersonating individuals without consent.”
In addition, stolen ChatGPT accounts on the dark web remain a hot commodity, with over 3,000 posts advertising the sale of accounts.
How Threat Actors Exploit ChatGPT
ChatGPT’s conversation abilities make it a prime target for malicious actors. Hackers exploit the capacity of the language tool to generate and understand human-like text to deceive users into fraudulent schemes.
Kaspersky’s digital footprint analyst, Alisa Kulishenko, emphasized that “The popularity of AI tools has led to the integration of automated responses from ChatGPT or its equivalents into some cybercriminal forums.”
A string of discussions on the dark web indicates that threat actors are contemplating using the platform for illegal activities such as creating polymorphic malware — code that rewrites itself while maintaining its basic functionality.
Research from Kaspersky noted that discussions on the dark web “frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices, and beyond.”
Security researchers argue that it’s becoming rather common for attackers to use “ChatGPT to develop malicious products or to pursue illegal purposes.” Actions that previously required a team of experts are now solved with a single prompt.
Furthermore, discussion findings on the dark web highlight that cybercriminals are actively sharing information on how to exploit ChatGPT. In 2023, security researchers found 249 offers to distribute and sell prompts on the dark web.
Kulishenko added, “Threat actors tend to share jailbreaks via various dark web channels – special sets of prompts that can unlock additional functionality – and devise ways to exploit legitimate tools, such as those for pen-testing, based on models for malicious purposes.”
In addition, security analysts discovered information on the dark web related to the popular “Do Anything Now” (DAN) jailbreak for ChatGPT. The jailbreak was designed to get around OpenAI’s content moderation policy.
Dark Web Versions of ChatGPT
Projects like WormGPT, XXXGPT, and FraudGPT are gaining a considerable amount of attention. These large language models (LLMs) are marketed as ChatGPT analogs that retain its functionality while removing its restrictions.
The growing attention attracted by these language tools is a potential problem for ChatGPT developers. The WormGPT project shut down in August 2023 due to community backlash.
Despite the project’s closure, fake advertisements offering fee-based access persist. Attackers are not only using this language tool for their own purposes but have also created phishing pages offering fake trial access.
Competition amongst language model tools is increasing with the development of projects such as xxxGPT, WolfGPT, FraudGPT, and DarkBERT. However, experts at Kaspersky believe that WormGPT remains the leader among these projects.
Another concerning factor that Kaspersky identified is the demand for stolen ChatGPT accounts. Researchers discovered an additional 3,000 posts advertising these accounts on dark web forums.
Sellers on the dark web notably advertised premium ChatGPT accounts for sale, with the condition that purchasers keep the account details unchanged and refrain from deleting dialogues, so the account’s legitimate owner would not notice the intrusion.
Researchers highlighted April as the month with the highest number of posts advertising stolen ChatGPT accounts on the dark web, while overall interest in stolen accounts peaked the month before, in March.
The Risk
The significant increase in the sale of ChatGPT accounts on the dark web poses a risk to companies and individuals alike. The most significant threat comes in the form of phishing emails. A research study by cybersecurity vendor SlashNext found a 1,265% increase in phishing emails since ChatGPT’s release in 2022.
Security analyst Kulishenko believes “it’s unlikely that generative AI and chatbots will revolutionize the attack landscape.” However, the research highlights significant interest among threat actors in exploiting this technology for illicit use.
In light of the findings, Kaspersky recommended users “implement reliable endpoint security solutions and dedicated services to combat high-profile attacks and minimize potential consequences.”