Why the EU AI Act is the New GDPR

Artificial intelligence regulation has arrived. While the U.S. has dragged its heels on regulating the development of AI, the European Union (EU) has, for better or worse, taken a more proactive approach.

The EU AI Act is now in effect after initially being agreed upon in December 2023. Although its provisions are being phased in gradually and will not fully apply until August 2, 2026, the legislation is the most comprehensive legal framework for AI in the world.

“This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies,” Mathieu Michel, Belgian state secretary for digitalization, said in the official press release.

But does the EU AI Act offer sufficient protections for citizens? And is the Act an example of legislative overreach that threatens the development of an emerging industry? The answer to these questions depends on who you ask.

Key Takeaways

  • The EU AI Act is the most comprehensive AI legislation to date, with phased implementation starting from August 2024 through August 2026.
  • The Act introduces significant regulations on AI use, including mandates on watermarking AI-generated images and compliance with EU copyright law.
  • There are concerns that the Act’s broad regulations might stifle innovation and push AI companies out of the European market.
  • While the Act focuses on privacy and unethical AI use, loopholes like broad allowances for biometric surveillance remain.
  • Perhaps the lack of clarity is deliberate, leaving room for innovation while retaining the power to curb bad actors or bad outcomes.

What is the Impact of the EU AI Act on User Privacy?

One of the main selling points of the EU AI Act is that it introduces rules prohibiting uses of AI that threaten user privacy.

This includes use cases such as biometric identification systems that scrape images from the internet, emotion recognition technology in the workplace or schools, social scoring, predictive policing, and any AI that manipulates human behavior or exploits a person’s vulnerabilities.

These controls look like a good start for deterring unethical AI use, but they have some concerning limitations. For instance, biometric identification can still be used to “identify anyone suspected of committing a crime,” which is an incredibly broad loophole that opens the door to invasive surveillance.

The Act also requires AI vendors to mark artificially generated images and deepfakes with watermarks. This could help users better identify synthetic content, which has become a growing problem over the past few years (as the recent deepfake Taylor Swift images highlighted).

How the EU AI Act Will Impact Enterprises

The EU AI Act will require organizations that use or develop AI to assess their compliance going forward, even if they’re not based in the EU.

Most notably, general purpose AI (GPAI) systems like ChatGPT and the models they’re based on must comply with EU copyright law and publish detailed summaries of the content used for training.

Likewise, solutions that generate artificial images — tools like DALL-E 3 or ImageFX — will be required to label the images with a watermark.
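
The Act does not prescribe a specific marking technique, only that synthetic content be labeled in a machine-readable way. As a rough illustration of the simplest possible approach, the Python sketch below embeds a provenance flag in PNG metadata using Pillow; the "ai_generated" and "generator" field names are hypothetical, not a mandated standard.

```python
# A minimal sketch of metadata-based labeling, assuming PNG output.
# The metadata keys below are hypothetical, not a mandated standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with a machine-readable AI-provenance label."""
    with Image.open(src_path) as img:
        meta = PngInfo()
        meta.add_text("ai_generated", "true")
        meta.add_text("generator", generator)  # e.g., the model that produced it
        img.save(dst_path, format="PNG", pnginfo=meta)

def is_labeled(path: str) -> bool:
    """Check whether an image carries the provenance label."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"
```

Metadata like this is trivially stripped by re-encoding, which is why production systems tend to favor robust invisible watermarks embedded in the pixels themselves (such as Google's SynthID) or cryptographically signed provenance standards such as C2PA.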

However, the restrictions don’t stop there. General purpose models that could pose systemic risks will also need to comply with additional requirements, including model evaluations and incident reporting. There are also specific requirements for high-risk and unacceptable-risk AI systems.

Failure to comply with the Act could result in significant fines ranging from 7.5 million euros ($8.1 million) to 35 million euros ($38 million), or 1% to 7% of global annual turnover, whichever is higher.
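
Because each penalty band caps fines at a fixed amount or a percentage of global annual turnover, whichever is higher, the effective ceiling scales with company size. The Python sketch below illustrates that arithmetic; the band values reflect the Act's penalty tiers, but the function itself is purely illustrative.

```python
# Illustrative penalty ceilings per violation tier ("whichever is higher").
# Tiers: prohibited practices (EUR 35M / 7%), most other obligations
# (EUR 15M / 3%), and supplying incorrect information (EUR 7.5M / 1%).
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    fixed_cap, turnover_pct = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_pct * global_turnover_eur)

# A firm with EUR 2B in turnover faces up to EUR 140M for a prohibited
# practice, since 7% of turnover exceeds the EUR 35M fixed cap.
assert max_fine_eur("prohibited_practice", 2_000_000_000) == 140_000_000
```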

James White, chief technology officer at Calypso AI, told Techopedia:

“The impact on organizations operating in the EU will likely depend on how the company is using an AI model and what risk category that use case falls under, as identified by the Act.

“The categories — Prohibited, High Risk, and Low or No Risk — are described rather than defined and remain a bit fuzzy for cases on the edge. But this hierarchy is the core of the Act and dictates the level of regulatory scrutiny that will be applied and the compliance requirements that must be met.”

White suggests that companies will need to assess the risk level of AI systems, strengthen their general data security and governance practices, implement ethical AI design principles, and ensure that they have compliance and incident response plans in place.
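
As a starting point for the first of those steps, some teams may build an internal triage that maps each AI use case onto a risk category before deeper legal review. The sketch below is hypothetical and deliberately simplified: the category boundaries require legal interpretation, and the use-case mapping here is an assumption, not legal guidance.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"  # e.g., social scoring, manipulative AI
    HIGH = "high"              # e.g., hiring, credit scoring
    LIMITED = "limited"        # e.g., chatbots (transparency duties)
    MINIMAL = "minimal"        # e.g., spam filters

# Hypothetical first-pass mapping; real classification needs legal review.
USE_CASE_RISK = {
    "social_scoring": RiskCategory.PROHIBITED,
    "workplace_emotion_recognition": RiskCategory.PROHIBITED,
    "resume_screening": RiskCategory.HIGH,
    "credit_scoring": RiskCategory.HIGH,
    "customer_chatbot": RiskCategory.LIMITED,
    "spam_filter": RiskCategory.MINIMAL,
}

def triage(use_case: str) -> RiskCategory:
    # Unknown use cases default to HIGH pending review (the conservative choice).
    return USE_CASE_RISK.get(use_case, RiskCategory.HIGH)
```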

Many provisions are being phased in, so organizations have only a short time to get their affairs in order. For example, general purpose AI obligations come into effect in August 2025, and obligations for high-risk AI systems follow in August 2026.

‘Tangled Knot of Regulations’

Complying with the EU AI Act is going to be difficult because the regulation’s restrictions on AI development are so broad.

Just as large regulations like the General Data Protection Regulation (GDPR) led to big tech companies like Meta threatening to withdraw operations from Europe, there is the potential that the EU AI Act will encourage AI vendors to pull out of the market.

That said, the real challenge is the sheer number of regulations organizations need to comply with; keeping up is becoming difficult.

We’ve already seen Apple delay releasing AI products in Europe due to concerns over regulations in the region.

Dane Sherrets, a solutions architect at HackerOne, told Techopedia:

“Many businesses are already struggling to decipher an increasingly tangled knot of regulations, including the Cyber Resilience Act and Data Act.

“While the recent EU AI Act represents a significant step towards AI safety, concerns around the additional bureaucracy it introduces have prompted demands for the European Parliament to clarify grey areas, simplify administration, and provide additional resources to support research and help small businesses understand the legislation.”

Ultimately, such regulations could end up driving innovation elsewhere.

“Without these adjustments to the Act, there are genuine concerns that the EU will be unable to establish itself as a front-runner in the field and lose out to the U.S. and China,” Sherrets concluded.

The Bottom Line

The EU AI Act appears to be on the right track in terms of its risk-based approach to AI regulation, but much more clarity is needed on precisely what organizations’ obligations are.

Perhaps the lack of clarity is deliberate: room for innovation, combined with the power to curb bad actors or bad consequences. The EU should be applauded for making a start, and history will judge both the short- and long-term successes of its moves.

FAQs

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive legal framework for AI, regulating the development and use of AI systems in the EU according to the level of risk they pose.

When does the EU AI Act go into effect?

The Act is already in effect, with its provisions phased in gradually from August 2024 through August 2, 2026.

When was the EU AI Act passed?

EU lawmakers reached agreement on the Act in December 2023, and it was formally adopted in May 2024.
