Social media and AI giant Meta has refused to sign the code of practice the EU has drawn up under its AI Act, which would require signatories to meet certain regulatory standards.
The EU Code of Practice for general-purpose AI (GPAI), a voluntary framework published earlier this month and set to take effect on 2 August, sets requirements for makers of general-purpose AI models: they must not train on pirated content, must provide and keep updated documentation for their AI tools, and must honour content owners' requests that their data not be used for AI training.
In addition, the AI Act itself will define “high-risk” use cases for AI, such as facial recognition, biometrics, education, and employment, and will outlaw some “unacceptable risk” use cases, such as behavioural manipulation.
Meta’s chief global affairs officer, Joel Kaplan, has labelled the legislation as “overreach” and said that it would harm the development and progression of the technology.
“Europe is heading down the wrong path on AI. We have carefully reviewed the [GPAI], and Meta won’t be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,” said Kaplan on LinkedIn.
Kaplan added that the legislation “will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them”.
Meta is not alone in fighting the legislation, with Kaplan pointing out that “over 40 of Europe’s largest businesses signed a letter calling for the commission to ‘Stop the Clock’ in its implementation [of the legislation]”.
Microsoft, Alphabet, Mistral AI and more have also been pushing for the EU to delay the legislation, but the commission has said that the set date will remain the same.
The EU has previously taken issue with Meta over its announcement that it would train its standalone AI on the social media data of its users.
“We’re using our decades of work personalising people’s experiences on our platforms to make Meta AI more personal. You can tell Meta AI to remember certain things about you (like that you love to travel and learn new languages), and it can also pick up important details based on context,” Meta said.
“Your Meta AI assistant also delivers more relevant answers to your questions by drawing on information you’ve already chosen to share on Meta products, like your profile, and content you like or engage with.”
However, as highlighted by Kok-Leong Ong, RMIT professor of business analytics, Meta’s use of social media data to feed its AI may present a major risk.
“Meta already has a huge amount of information about its users. Its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements,” Ong said.
“They will need to choose between safeguarding their data versus the experience they get from using the AI agent. Conversely, imposing tight security and privacy settings on Meta may impact the effectiveness of its AI agent.”
Ong also warned that AI powered by social media could expand the spread of misinformation and harmful content.
“We have already seen Mark Zuckerberg apologise to families whose children were harmed by using social media.
“AI agents working in a social context could heighten a user’s exposure to misinformation and inappropriate content. This could lead to mental health issues and fewer in-person social interactions,” Ong said.
German consumer protection group Verbraucherzentrale North Rhine-Westphalia ordered Meta to halt the training and sought a court injunction to prevent it from using the data.
However, the Cologne court declined to grant the injunction.
This is despite privacy regulators from Belgium, France, and the Netherlands having already found issues with the new AI and warned users to object through Meta’s website to restrict access to their data before the training began on 27 May under the company’s new privacy policy.
While Meta is set to continue the training, it has made some changes, including improved transparency notices and clearer, easier opt-out forms.