The Epoch Times
5 Jul 2023


The Dark Side of AI: Over 100,000 ChatGPT Accounts Stolen and Traded

Criminals are targeting users of the artificial intelligence (AI) chatbot ChatGPT, stealing their accounts and trading them on illegal online criminal marketplaces—with the threat having already affected more than 100,000 individuals worldwide.

Group-IB, a Singapore-based cybersecurity firm, has identified 101,134 devices infected with information-stealing malware that contained saved ChatGPT credentials, according to a June 20 press release. “These compromised credentials were found within the logs of info-stealing malware traded on illicit dark web marketplaces over the past year. … The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale.”

When unsuspecting users interact with AI, the hidden malware captures their data and transfers it to third parties. Hackers can use the collected information to build personas and manipulate data for various fraudulent activities.

Sensitive information, including personal and financial details, should never be disclosed, no matter how conversational the exchange with the AI becomes.

Moreover, this issue is not necessarily a flaw in the AI provider's service; the infection may already reside on the device or within other applications.

Of the more than 100,000 accounts compromised between June 2022 and May 2023, India accounted for 12,632, followed by Pakistan with 9,217, Brazil with 6,531, Vietnam with 4,771, and Egypt with 4,588. The United States ranked sixth with 2,995 compromised ChatGPT credentials.

“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code,” said Dmitry Shestakov, head of threat intelligence at Group-IB.

“Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

The cybersecurity firm’s analysis of criminal underground marketplaces revealed that a majority of ChatGPT accounts were accessed using the malware Raccoon info stealer, which alone was responsible for more than 78,000 of the compromised credentials.

“Info stealers are a type of malware that collects credentials saved in browsers, bank card details, crypto wallet information, cookies, browsing history, and other information from browsers installed on infected computers,” Group-IB said. It then sends all this information to the malware operator.

To minimize the risk of having ChatGPT accounts compromised, Group-IB advised users of the chatbot to regularly update their passwords and implement two-factor authentication (2FA). With 2FA activated, ChatGPT users must enter an additional verification code, usually delivered to their mobile devices, to access the chatbot’s services.

Users can enable 2FA on their ChatGPT account by going to the settings and clicking the “Data controls” option.

An engineering student takes part in a hacking challenge near Paris, on March 16, 2013. (Thomas Samson/AFP via Getty Images)

However, even though 2FA is an excellent security measure, it is not foolproof. As such, if users converse with ChatGPT about sensitive topics such as intimate personal details, financial information, or anything related to work, they should consider clearing all saved conversations.

To do so, users should go to the “Clear Conversations” section on their account and click “Confirm clear conversations.”

Group-IB pointed out that there has been a rise in the number of compromised ChatGPT accounts, mirroring the growing popularity of the chatbot.

In June 2022, there were 74 compromised accounts, per Group-IB. This jumped to 1,134 in November, 11,909 in January, and 22,597 in March.

While ChatGPT opens up a new avenue for hackers to access sensitive information, the chatbot can also help such individuals refine and scale their criminal activities.

In a Dec. 19 blog post, cyber threat intelligence firm Check Point Research (CPR) detailed how ChatGPT and similar AI models can create more hacking threats.

For instance, since ChatGPT aids in generating code, the application lowers the barrier to writing malicious programs, allowing even less-skilled individuals to launch sophisticated cyberattacks.

“Multiple scripts can be generated easily, with slight variations using different wordings. Complicated attack processes can also be automated as well,” it said.

A Jan. 13 post by CPR warned of Russian cybercriminals attempting to bypass ChatGPT’s restrictions in order to use the chatbot for criminal purposes.

“We are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes,” the post said.

“We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient.”