
More than 100,000 stolen ChatGPT logins are up for sale on the dark web

Security researchers have uncovered the stolen credentials of more than 100,000 ChatGPT accounts up for sale on an illicit dark web marketplace.

David Hollingworth
Wed, 21 Jun 2023

Cyber security company Group-IB identified the stolen accounts within the logs of a number of popular info-stealing malware strains, including Raccoon (the most prevalent), Vidar, and RedLine. Info stealers are a common form of malware, capable of harvesting banking details, website logins, browser histories, and more from infected computers.

The compromised ChatGPT accounts have been appearing for sale in ever greater numbers since June 2022, when 74 were posted on the dark web. That number quickly grew into the hundreds over the following months, peaking at 26,802 put up for sale in May 2023 alone. The total through May stands at 101,134 logins.

The issue with compromised ChatGPT accounts is that the platform stores a user’s history of prompts and responses. This could include details of software development, corporate communications, and other internal business processes. Many criminals are also turning to ChatGPT for everything from modifying code to writing phishing and scam messages.


Broken down by region, most accounts come from Asia-Pacific, with India topping the count at 12,632 stolen accounts, followed by Pakistan. The Middle East and Africa ranks next, followed by Europe, Latin America, North America, and the Commonwealth of Independent States making up the remainder.

“Many enterprises are integrating ChatGPT into their operational flow,” said Dmitry Shestakov, Group-IB’s head of threat intelligence, in a blog post. “Employees enter classified correspondences or use the bot to optimise proprietary code.”

“Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Given the risks posed by the generative AI engine, many companies have banned ChatGPT’s use internally, including Verizon, Apple, and Samsung, among others. Samsung, in particular, enacted the ban after its developers were found to be using ChatGPT to fix errors in internal code, which in turn made the proprietary code part of the generative AI’s dataset.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” a Samsung internal memo said in May.

“However, until these measures are prepared, we are temporarily restricting the use of generative AI.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
