EU AI Act approved by European Parliament: a new era of AI regulation
The European Parliament has approved the AI Act, a comprehensive framework designed to rein in the perceived risks and threats of artificial intelligence.
The AI Act is one of the world’s first comprehensive AI regulatory frameworks. Politicians worldwide are looking to grasp the opportunities offered by the explosion of generative AI while allaying fears about bias, privacy, and a potential existential threat to humanity itself.
EU lawmakers gave the AI Act final approval on Wednesday, with 523 votes in favour, 46 against, and 49 abstentions.
“Europe is now a global standard-setter in AI,” Thierry Breton, the European Commissioner for the Internal Market, wrote on X.
🇪🇺 Democracy: 1️⃣ | Lobby: 0️⃣
I welcome the overwhelming support from European Parliament for our #AIAct —the world’s 1st comprehensive, binding rules for trusted AI.
Europe is NOW a global standard-setter in AI.
We are regulating as little as possible — but as much as needed!
— Thierry Breton (@ThierryBreton) March 13, 2024
What is the AI Act?
The AI Act aims to regulate AI based on its capacity to harm people, with higher-risk applications facing stricter regulations.
This includes banning any applications deemed to pose a “clear risk to fundamental rights.”
Strict regulations will apply to AI systems used in critical infrastructure, education, healthcare, law enforcement, and democratic processes, all of which are deemed “high risk.”
Lawmakers said most services will likely fall into the “low risk” category, facing the lightest regulation. These include use cases such as content recommendation systems or spam filters.
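To picture how such tiering works in practice, the sketch below expresses it as a simple lookup from use case to obligation. It is illustrative only: the tier names, example systems, and obligations are simplified assumptions, not the Act’s legal taxonomy.

```python
# Illustrative only: a simplified mapping of the AI Act's risk tiers to
# example systems and obligations. Tier names, examples, and obligations
# are assumptions for demonstration, not legal definitions from the Act.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["critical infrastructure", "education", "healthcare",
                     "law enforcement", "democratic processes"],
        "obligation": "strict requirements before and after deployment",
    },
    "low": {
        "examples": ["content recommendation", "spam filters"],
        "obligation": "light-touch transparency duties",
    },
}

def obligation_for(use_case: str) -> str:
    """Look up the (illustrative) obligation attached to a use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: assess against the Act's criteria"

print(obligation_for("spam filters"))  # low: light-touch transparency duties
```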
The AI Act has been in development since 2020. Early drafts focused on narrower AI systems that automate specific tasks, such as document scanning.
However, the release of ChatGPT in 2022 and the sudden boom in AI development forced the European Parliament and EU lawmakers to overhaul and accelerate regulatory plans.
The new laws mean that companies developing generative AI models, such as OpenAI and Google, will need to provide detailed summaries of the internet data used to train their systems.
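As a rough illustration of what such a training-data summary could involve, the sketch below aggregates a toy scraped corpus by source domain. The corpus structure and field names are assumptions for demonstration, not anything taken from the Act.

```python
# Illustrative sketch: aggregating a web-scraped training corpus by source
# domain, the kind of high-level summary a disclosure might contain. The
# corpus structure and field names here are assumptions for demonstration.
from collections import Counter
from urllib.parse import urlparse

corpus = [
    {"url": "https://example.com/article-1", "tokens": 1200},
    {"url": "https://example.org/post-a", "tokens": 800},
    {"url": "https://example.com/article-2", "tokens": 400},
]

tokens_by_domain = Counter()
for doc in corpus:
    tokens_by_domain[urlparse(doc["url"]).netloc] += doc["tokens"]

for domain, tokens in tokens_by_domain.most_common():
    print(f"{domain}: {tokens} tokens")
# example.com: 1600 tokens
# example.org: 800 tokens
```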
Any deepfake images, audio, or video generated by AI must be suitably labelled as artificially manipulated.
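One lightweight, hypothetical way to attach such a label is to embed a disclosure field in the file’s own metadata. The sketch below does this for a PNG using Pillow; the key names are illustrative assumptions, and real provenance schemes (such as C2PA) are considerably more involved.

```python
# A minimal sketch of labelling generated media by embedding a disclosure
# field in PNG metadata with Pillow. The key names ("ai_generated",
# "generator") are hypothetical, not a scheme mandated by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (64, 64))           # stand-in for a generated image
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model")  # hypothetical model name

image.save("labelled.png", pnginfo=meta)

# The disclosure can then be read back from the file:
with Image.open("labelled.png") as handle:
    print(handle.text)  # {'ai_generated': 'true', 'generator': 'example-model'}
```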
However, the EU intends to support innovation and the adoption and development of AI by SMEs.
The AI Act mandates the establishment of regulatory sandboxes and real-world testing at the national level. These will be accessible to SMEs and start-ups so they can develop and train innovative AI before its release on the market.
Dragos Tudorache, Civil Liberties Committee co-rapporteur, said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies.”
“However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, our labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”
Other countries have also implemented AI regulations. In 2022, China passed three measures at the national, regional, and local levels, and last year it brought restrictions on deepfakes into force.
US President Joe Biden signed an executive order requiring developers of the most powerful AI models to share safety test results with the US government. The UK has several AI laws in place but no overarching framework.
Reaction
Industry experts generally welcomed the European Parliament’s approval of the AI Act, acknowledging that regulation can help various sectors adopt AI while also protecting citizens.
However, they cautioned that regulators must strike a delicate balance between necessary oversight and room for innovation.
Greg Hanson, GVP and head of sales EMEA North at Informatica, said: “Final approval of the EU’s AI Act will resonate far beyond the region’s borders. What’s clear is that large, multi-national organisations will not be able to afford to do AI regulation on a siloed project-by-project, country-by-country basis. It is too complex.”
“Instead, organisations will need to consider how AI regulation translates into policy and put solid foundations in place that can be easily adapted for individual regions. For example, regulators across countries are showing an appetite for transparency.”
Curtis Wilson, staff data scientist at the Synopsys Software Integrity Group, said that regulatory frameworks such as the AI Act are “an essential component in building trust in AI.”
“The greatest problem facing AI developers is not regulation, but a lack of trust in AI. For an AI system to reach its full potential, it needs to be trusted by the people who use it,” he added.
“The Act itself is mostly concerned with regulating high-risk systems and foundational models. However, many of the requirements already align with data science best practices, such as risk management, testing procedures, and thorough documentation. Ensuring that all AI developers adhere to these standards is to everyone’s benefit.”