Meta’s standalone ChatGPT rival uses your social media data

Meta has launched a standalone chatbot to rival OpenAI’s ChatGPT, allowing users to access the Meta AI app outside of Facebook Messenger and other Meta applications.

Announced at the company’s LlamaCon event earlier this week, Meta AI differentiates itself from other AI chatbots such as ChatGPT and DeepSeek by drawing on the information Meta has accumulated about users through their social media accounts over the years.

“We’re using our decades of work personalising people’s experiences on our platforms to make Meta AI more personal. You can tell Meta AI to remember certain things about you (like that you love to travel and learn new languages), and it can also pick up important details based on context,” said Meta.

“Your Meta AI assistant also delivers more relevant answers to your questions by drawing on information you’ve already chosen to share on Meta products, like your profile, and content you like or engage with.”

However, as highlighted by RMIT Professor of Business Analytics Kok-Leong Ong, Meta’s use of social media data to feed its AI may present a major risk.

“Meta already has a huge amount of information about its users. Its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements,” said Ong.

“They will need to choose between safeguarding their data versus the experience they get from using the AI agent. Conversely, imposing tight security and privacy settings on Meta may impact the effectiveness of its AI agent.”

Ong also warns that AI powered by social media could expand the spread of misinformation and harmful content.

“We have already seen Mark Zuckerberg apologise to families whose children were harmed by using social media.

“AI agents working in a social context could heighten a user’s exposure to misinformation and inappropriate content. This could lead to mental health issues and fewer in-person social interactions,” said Ong.

Alongside amplifying misinformation, viral AI trends on social media, such as the recent Barbie doll trend, may also be intensified by a social media-fed AI.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.