
EU establishes world-first governance on AI regulation

The European Union has proposed the world’s first governance on the use of artificial intelligence (AI), setting the benchmark for the rest of the world.

Daniel Croft
Tue, 12 Dec 2023

The EU’s Artificial Intelligence Act was discussed and agreed upon last week after 37 hours of discussion, and it adopts a “risk-based approach” to the development and use of the technology.

“The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction,” said co-rapporteur Dragos Tudorache (Renew, Romania).

“The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities.


“It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future.”

The key priorities of the new legislation are to protect core values and structures, such as fundamental human rights, while also encouraging developers to innovate and further the technology, “making Europe a leader in the field”.

AI guidelines vary depending on the risk different tools create. Low-risk, general-purpose AI tools “will have to adhere to transparency requirements as initially proposed by Parliament”, said the EU Parliament in a release.

“These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training,” it said.

Those AI models and tools that present a greater danger (referred to by the EU as high-impact general-purpose AI (GPAI) systems) will require developers to implement additional safeguards, such as conducting model evaluations, assessing and mitigating systemic risks, reporting serious incidents to the EU, reporting on energy efficiency and cyber security, and performing adversarial testing.

Furthermore, high-risk systems will be required to meet “clear obligations”. High-risk AI tools include those with the power to impact democracy and elections.

Developers of these tools will be required to conduct a mandatory fundamental rights impact assessment, among other obligations.

The EU also banned a number of AI tools that it believes present a potential threat to democracy and the rights of EU citizens.

As listed by the EU, these include:

  • “Biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race).
  • “Untargeted scraping of facial images from the internet or CCTV footage to create facial-recognition databases.
  • “Emotion recognition in the workplace and educational institutions.
  • “Social scoring based on social behaviour or personal characteristics.
  • “AI systems that manipulate human behaviour to circumvent their free will.
  • “AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”

The EU outlined a number of law enforcement exemptions that would allow the use of otherwise banned remote biometric identification (RBI) systems.

In cases of the targeted search of a person who has committed, or is suspected of having committed, a serious crime, law enforcement could use "post" (retrospective) RBI.

However, real-time RBI could also be used with limits to location and time in specific scenarios.

These include the targeted search of victims of abduction, trafficking or sexual exploitation, the prevention of a specific and current terrorist threat or the “localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime)”.

Despite the heavy regulations, the EU has also sought to encourage businesses to adopt the technology and develop solutions without “undue pressure from industry giants controlling the value chain”.

For this, the new legislation has encouraged the use of regulatory sandboxes and real-world testing for the development and training of innovative AI solutions before they hit the market.

Those who fail to adhere to the new rules can be fined between €7.5 million (or 1.5 per cent of global turnover) and €35 million (or 7 per cent of global turnover), depending on the severity of the breach and the size of the organisation.

The new regulation is not yet law. The next stage will require the proposed legislation to be formally adopted by both the Parliament and the Council, with the former set to vote on the document in an upcoming meeting.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
