After three years of negotiations, the first AI law, the E.U. AI Act, has finally received final approval. But what does the law’s future promise? Will other countries follow Europe’s legislative standards? Will the act impact innovation, and is it enough to combat the risks and threats of the AI era?
Techopedia talked with Gemma Galdón-Clavell, an advisor to the United Nations and the E.U. on applied ethics and responsible AI, as well as with other experts, to answer these and other questions.
Key Takeaways
- The EU AI Act, recently approved, is about to become law and will set the standards for how AI systems are developed and used.
- The Act emphasizes transparency, accountability, and reducing bias in AI development and use, and has been met with both positive and negative reactions.
- The Act is expected to be a model for other countries around the world and aims to ensure fairness as AI enters our lives at a pace that regulations find hard to match.
E.U. Takes Steps to Enforce the AI Act
On May 21, Moody’s Analytics reported that the E.U. is already taking steps to enforce AI regulations. The final approval of the E.U. AI Act by the European Council, also on May 21, set a series of measures in motion.
The final version of the Act is expected to be published soon in the Official Journal of the E.U. The Act will enter into force 20 days after this publication.
Simultaneously, the E.U. is creating a group of governing bodies to enforce the Act, including the AI Office within the European Commission (EC), which will enforce the common rules across the region.
A scientific panel of independent experts will also be created to help with enforcement activities, along with an AI Board made up of member states’ representatives, which will advise and assist the EC and member states in applying the AI Act consistently and effectively. The law will also create an advisory forum composed of stakeholders who will provide technical expertise.
Galdón-Clavell, who is also a fellow with Northeastern University’s Institute for Experiential AI and Founder & CEO of the Eticas Foundation, explained that what is driving this new legislation is mounting evidence that facial recognition and related technologies, such as remote biometrics and emotion recognition, do not work nearly as well as they should.
“What I hope this will do is send a message to AI developers that they need to do better work in fair and more accountable ways before we encourage deployment in the wild.”
The Final E.U. AI Act Version: Why It Matters
The E.U. AI Act project began formally in April 2021, when the European Commission submitted its proposal for a regulatory framework on AI. However, much has happened in the AI field since then, such as the rise of generative AI, more powerful large language models (LLMs), and multimodal AI systems.
The progress of AI has also proven to have a dark side, as cybercriminals reverse engineer LLMs and other types of AI technologies to carry out faster and more efficient large-scale attacks.
However, the European Parliament has not been blindsided by the progress of AI. The December 2023 revised proposal gained momentum and political agreement, shaping the final version of the AI Act.
Matthijs de Vries, founder of Netherlands-based AI company Nuklai, speaking to Techopedia, broke down some of the main points of the new law.
“The EU AI Act emphasizes comprehensive transparency, particularly through Article 10, which mandates that high-risk AI systems implement robust data governance measures.
“One crucial requirement is the transparency regarding the original purpose of data collection,” de Vries said. “This means that if data used for training, validation, or testing was initially collected for another purpose, this original intent must be disclosed.
“This requirement helps in maintaining clarity and accountability, ensuring that users are fully aware of how and why their data is being used.”
Through the new Act, the E.U. also recognizes the importance of how AI systems are trained and what data is used during that stage.
“The disclosure of data collection, processing, and usage practices mandated in the E.U. AI Act helps demystify what data AI is trained on and what data it uses to make its statements and conclusions, thereby enhancing the trust, verifiability, and reliability of AI systems,” de Vries said.
In this sense, the E.U. Act is designed to protect personal information, particularly in sensitive sectors such as healthcare and finance.
“By ensuring that AI systems comply with stringent data usage protocols, the Act helps safeguard consumer privacy and security.”
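To make the disclosure requirement de Vries describes more concrete, here is a minimal, hypothetical sketch of how a provider might record a dataset’s original collection purpose alongside its current use. The class and field names are illustrative only and are not prescribed by the Act.

```python
from dataclasses import dataclass


@dataclass
class DatasetProvenance:
    """Hypothetical provenance record attached to a training, validation, or test set.

    The field names are illustrative, not terms defined by the AI Act itself.
    """
    name: str
    original_purpose: str   # why the data was originally collected
    current_purpose: str    # what it is being used for now (e.g., model training)
    collection_date: str

    def repurposed(self) -> bool:
        # The data counts as repurposed when its current use differs
        # from the purpose it was originally collected for.
        return self.original_purpose.strip().lower() != self.current_purpose.strip().lower()

    def disclosure(self) -> str:
        # A short statement a provider could surface in its documentation.
        if self.repurposed():
            return (f"Dataset '{self.name}' is used for {self.current_purpose}, "
                    f"but was originally collected for {self.original_purpose} "
                    f"on {self.collection_date}.")
        return f"Dataset '{self.name}' is used for its original purpose: {self.original_purpose}."


# Hypothetical example of a repurposed dataset being disclosed.
record = DatasetProvenance(
    name="customer_support_chats_2022",
    original_purpose="customer service quality review",
    current_purpose="training a support chatbot",
    collection_date="2022-06-01",
)
print(record.disclosure())
```

The point is simply that the original intent travels with the data, so any repurposing is visible in the provider’s documentation rather than hidden in the training pipeline.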
From autonomous cars with detection bias problems to U.S. states worried about AI discrimination in the insurance sector, AI system failures in banking, inaccurate AI image generators, and more, bias and discrimination are among the top risks of AI.
Galdón-Clavell from Eticas Foundation spoke to Techopedia about the issue.
“Not only must these systems improve in their ability to properly identify people, but also to make sure that they identify people in the same contexts and rates, regardless of skin color, clothing worn, or any other characteristics.”
“The EU AI Act is a tool by which we can tell the companies behind those technologies what they need to focus on if they want to put their technology to broad use,” Galdón-Clavell said.
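One way to picture the kind of check Galdón-Clavell describes is a per-group audit of identification rates. The toy sketch below is an illustration only, not the Eticas audit methodology: it computes detection rates for each demographic group and the gap between the best- and worst-served groups, which is the sort of disparity an auditor would flag.

```python
from collections import defaultdict


def detection_rates_by_group(results):
    """Compute the share of correct identifications per demographic group.

    `results` is a list of (group_label, detected_correctly) pairs -- a toy
    stand-in for the per-group outcomes an audit would collect.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for group, detected in results:
        totals[group] += 1
        hits[group] += int(detected)
    return {group: hits[group] / totals[group] for group in totals}


# Hypothetical audit sample: 95% detection for one group, 70% for another.
sample = (
    [("group_a", True)] * 95 + [("group_a", False)] * 5
    + [("group_b", True)] * 70 + [("group_b", False)] * 30
)

rates = detection_rates_by_group(sample)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.95, 'group_b': 0.70}
print(f"max rate gap: {gap:.2f}")  # the disparity an auditor would flag
```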
The E.U. AI Act Is Getting a Bad Rep, But Does It Deserve It?
Just days after the final approval, the E.U. AI Act is already getting a bad reputation. On May 24, CNN reported that executives from Amazon and Meta believe the risks, or “fears,” that the E.U. AI Act addresses are “overblown” and that the act risks holding back innovation.
In Europe, Euronews reported that research by Copenhagen Economics found no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The report adds that the AI Act is premature and will slow down innovation and growth and reduce consumer choice in generative AI.
Despite warnings from big tech and parts of the scientific community, countless organizations, businesses, and individuals value the AI Act and think it can bring meaningful change. Justin Daniels, faculty at IANS Research, told Techopedia that he does not expect the E.U. AI Act to hold back innovation or slow down businesses.
“AI innovation and spending will continue at breakneck speed, regardless of regulations. Most firms look at the opportunity of AI to increase efficiency and do not want to be left behind by competitors.
“The EU law is designed to regulate use cases, not AI itself. Regulatory guardrails are important as without them, companies will have no incentive to focus on appropriate risk. Social media is a good example of what happens without these guardrails.”
Recognizing the Win and Value of Having High Standards
Galdón-Clavell from Eticas said that as someone who runs a company that conducts AI audits, she sees the Act as a positive.
“It gives companies some certainty of what is expected of them in terms of the impact of their technologies, their accuracy, and lack of biases.”
“In some ways, it mirrors the way pharmaceutical companies develop and market medications,” Galdón-Clavell said. “They cannot sell a drug broadly until it has been proven sufficiently safe and effective in clinical trials.
“I think that’s something that we all celebrate: that there are mechanisms to ensure that whenever something hits the market, it does so under safe conditions.”
Expect AI Laws To Emerge Worldwide: The GDPR Effect
Another expected consequence is that the E.U. AI Act will follow the example of the GDPR, which the region adopted in 2016. Since the GDPR was approved, numerous countries have passed laws based on or inspired by it.
UN Trade and Development (UNCTAD) reports that 137 out of 194 countries worldwide now have legislation in place to protect data and privacy.
“As it happens, China already has a similar law in place. Singapore has guidelines that mirror what the E.U. just developed, and the US released an executive order on AI late last year,” Galdón-Clavell said.
“There’s a global effort to regulate these technologies and bring them up-to-speed in terms of accountability.”
As Galdón-Clavell explained, and as we have seen with the GDPR, European regulations that focus on innovations linked to the digital world tend to become gold standards in the absence of previous laws.
“Many countries that may not have the institutional capacity to develop their own laws rely on what the European Union has done,” Galdón-Clavell said.
“I’m certain that the next few years will be marked by an increase in the obligations of these companies to ensure that whatever AI product is put on the market does so under conditions of trust and safety. As it pertains to AI, specifically, trust will be essential.”
The Bottom Line
As Galdón-Clavell added, it is AI that will “decide whether you get a mortgage or a job or a medical treatment.”
AI can have a significant impact on many aspects of our lives, and regulation can help us avoid potential problems and ensure that AI benefits society.
The European AI Act is grounded in human rights and was approved precisely for this purpose: to ensure that the technology affecting people’s real lives everywhere is fair and protects their rights. Without laws like these, fair and just progress cannot be achieved.