Powered by MOMENTUM MEDIA

Meta patches AI chatbot bug capable of leaking user AI prompts and responses

A bug in Meta’s AI chatbot that allowed users to access the prompts and responses of other Meta AI users has been patched.


Sandeep Hodkasia, founder of security and penetration testing firm AppSecure, discovered the bug while researching how Meta AI lets users edit their prompts. Meta AI does this by assigning each prompt and its AI-generated response a unique number.

Speaking with TechCrunch, Hodkasia said that by monitoring his browser’s network traffic while editing a prompt, he found he could change that unique number and receive the prompt and response of a completely different user.

This presents a major security risk, particularly for individuals who may have entered personal information into the chatbot, such as when asking for advice, writing resumes, or performing other common tasks that draw on personal data.


Making matters worse is that Hodkasia said the unique numbers were “easily guessable”, which could provide threat actors an easy way to scrape prompts and responses from users with an automated tool.
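The flaw described above is a textbook insecure direct object reference (IDOR): the server trusts a client-supplied record ID without checking that the requester owns the record, and sequential IDs make enumeration trivial. The following is a minimal illustrative sketch of that pattern and its fix; all names and data here are hypothetical and do not reflect Meta’s actual code or API.

```python
# Hypothetical in-memory store of prompt/response records,
# keyed by an easily guessable sequential ID.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "Draft my resume", "response": "..."},
    1002: {"owner": "bob", "prompt": "Tips for my Japan trip", "response": "..."},
}

def get_prompt_vulnerable(prompt_id, current_user):
    # BUG (IDOR): returns whatever record matches the client-supplied ID,
    # regardless of who is logged in. An attacker can simply increment
    # the ID to scrape other users' conversations.
    return PROMPTS.get(prompt_id)

def get_prompt_patched(prompt_id, current_user):
    # FIX: verify the requester owns the record before returning it.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != current_user:
        return None  # in a real service, raise an authorization error
    return record

# "alice" can read "bob"'s conversation through the vulnerable path...
assert get_prompt_vulnerable(1002, "alice")["owner"] == "bob"
# ...but not through the patched path.
assert get_prompt_patched(1002, "alice") is None
```

Real services typically also replace guessable sequential IDs with long random identifiers, so that enumeration is impractical even if an authorization check is ever missed.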

The data could then potentially be used to blackmail an individual, or be used in combo lists for later sale and misuse, such as in phishing attacks.

Meta told TechCrunch that the bug had been patched, with company spokesperson Ryan Daniels adding that it had “found no evidence of abuse and rewarded the researcher” for discovering the bug.

Hodkasia said he was paid US$10,000 for discovering the bug.

Concerns have also been raised regarding Meta’s standalone AI app, which may be at risk of exposing personal data after the company announced it would train the chatbot on the data of its social media users.

“We’re using our decades of work personalising people’s experiences on our platforms to make Meta AI more personal. You can tell Meta AI to remember certain things about you (like that you love to travel and learn new languages), and it can also pick up important details based on context,” said Meta.

“Your Meta AI assistant also delivers more relevant answers to your questions by drawing on information you’ve already chosen to share on Meta products, like your profile, and content you like or engage with.”

However, as highlighted by RMIT Professor of Business Analytics Kok-Leong Ong, Meta’s use of social media data to feed its AI may present a major risk.

“Meta already has a huge amount of information about its users. Its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements,” said Ong.

“They will need to choose between safeguarding their data versus the experience they get from using the AI agent. Conversely, imposing tight security and privacy settings on Meta may impact the effectiveness of its AI agent.”

Ong also warns that AI powered by social media data could expand the spread of misinformation and harmful content.

“We have already seen Mark Zuckerberg apologise to families whose children were harmed by using social media.

“AI agents working in a social context could heighten a user’s exposure to misinformation and inappropriate content. This could lead to mental health issues and fewer in-person social interactions,” said Ong.

German consumer rights group the Verbraucherzentrale North Rhine-Westphalia (NRW) demanded Meta halt the training and sought a court injunction to prevent it from using the data.

However, the Cologne court declined to grant the injunction.

This is despite privacy regulators in Belgium, France, and the Netherlands having already raised concerns about the new AI and warned users to restrict access to their data, by objecting through Meta’s website, before the company began training under its new privacy policy on 27 May.

While Meta is set to continue the training, it did make some changes, including improved transparency notices and clearer and easier opt-out forms.



Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding for and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.
