What the New EU AI Act Means: Expert Analysis


The European Union (EU) has taken a landmark step towards regulating artificial intelligence (AI) with the official passage of the AI Act.

The legislation, the first of its kind from a major global regulator, aims to establish a framework for the development, deployment, and use of AI within the EU.

Unlike the US and UK, which have yet to release enforceable regulation on AI development and deployment, EU lawmakers have attached enforcement mechanisms to their AI rules, roughly three years after the bill was first read on the floor of the EU Parliament.

The Act has generated significant interest, with many wondering how it will impact businesses, consumers, and the future of AI development.

Understanding what exactly the Act implies and the specific requirements for different AI applications is crucial for businesses and AI developers.

In addition to highlighting the specifics, we spoke to expert analysts to sample their views.


The Scope of the AI Act

The AI Act, which will become law once rubber-stamped by the EU’s member states — expected in early 2025 — categorizes AI applications into three risk groups: unacceptable risk, high risk, and minimal risk.

Unacceptable-risk applications, such as social scoring systems that discriminate against certain demographics, will be banned completely. High-risk applications, which include AI used in facial recognition, credit scoring, financial services, and recruitment, will face stricter regulations.
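The three-tier structure described above can be sketched as a simple lookup. The tier names and example applications follow the article; the mapping itself is illustrative, not an official taxonomy from the Act.

```python
# Illustrative sketch of the AI Act's three risk tiers as described above.
# The example applications are the ones named in the article; this is not
# an official or exhaustive classification.

RISK_TIERS = {
    "unacceptable": {"social scoring"},
    "high": {"facial recognition", "credit scoring",
             "financial services", "recruitment"},
}

def classify(application: str) -> str:
    """Return the risk tier for an application, defaulting to minimal risk."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    return "minimal"
```

Under this sketch, a banned use case like social scoring maps to "unacceptable", recruitment tools map to "high", and everything else falls through to "minimal" — mirroring the Act's default-light-touch approach for low-risk systems.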

These dividing lines may leave enterprises that build or consume AI in the ‘high risk’ category uncertain about how to proceed. AI developers will be required to implement robust risk management systems, ensure data quality and fairness, and provide clear information about how the AI system works.

The Act goes beyond risk categorization. It mandates thorough human oversight for high-risk applications, ensuring that algorithmic decisions can be reviewed and, where necessary, overridden by humans.

Additionally, the Act emphasizes transparency, requiring developers to provide clear documentation on how AI systems arrive at their decisions. This will help mitigate bias and ensure fairness in AI-driven outcomes.

Speaking to Techopedia on the scope of the AI regulation, Alois Reitbauer, Chief Technology Strategist at Dynatrace, expressed his worries over how the Act will be enforced.

For him, the Act does not state in clear terms what constitutes an AI model, which will leave considerable gray areas during enforcement.

“One of the biggest considerations that must be addressed quickly is how the regulation will be enforced.


“It’s impossible to see how organizations will be able to comply if they aren’t first clear on what constitutes an AI model, so the EU will first need to ensure that has been clearly defined.


“For example, will machine learning used in our mobile phones or connected thermostats be classed as an AI system?”

Experts Weigh in on What Businesses Can Do to Meet Compliance

A cursory look into the AI Act shows that businesses within the EU bloc — and their partners outside the EU — will be caught in a labyrinth of governance and compliance hurdles.

Global law firm Hogan Lovells warns businesses to brace for impact, as the AI Act “provides for a wide range of governance and compliance obligations” that could apply to any organization involved with artificial intelligence systems, not just the developers.

“To ensure and be able to demonstrate compliance with the future obligations under the AI Act, and to avoid risks and liabilities, it is essential for organizations to start evaluating the impact that the AI Act will have on its operations,” the analysts wrote.

They advise that complying with the AI Act should be part of a comprehensive AI governance program that verifies “appropriate standards, policies, and procedures for an appropriate use of AI technology.”

On how this development could affect American businesses, Jonas Jacobi, CEO of ValidMind, told Techopedia that, even though the full scope of the Act is not yet known, this kind of compliance is not new terrain for US businesses.

However, he warned that small and mid-sized businesses will need to have their fingers on the pulse:

“While we don’t know the full scope of how the EU AI Act will affect American businesses, it’s clear that in order for enterprise companies to operate internationally, they’re going to have to adhere to the Act.

“That will be nothing new for many. Large American corporations that operate globally are already navigating complex regulatory environments like the GDPR, often choosing to apply these standards universally across their operations because it’s easier than having one set of rules for doing business domestically and another set of rules internationally.

“Small and midsize companies who are implementing or thinking about an AI strategy should stay informed and vigilant.”

In his reaction, Neil Serebryany, Founder and CEO at CalypsoAI, highlighted initial cost and complexity as major hurdles that will come with compliance. He told Techopedia:

“While the Act includes complex and potentially costly compliance requirements that could initially burden businesses, it also presents an opportunity to advance AI more responsibly and transparently. Ultimately, this will build greater consumer and stakeholder trust and facilitate sustainable long-term adoption.”

Despite the grace period offered to businesses to get their compliance in order — generally two years, but one year for high-risk ventures and six months for ‘unacceptable risk’ ventures — Daniel Christman, Director of AI Programs and Co-Founder at Cranium, told Techopedia that companies will struggle to meet the varying compliance thresholds.

“Companies will struggle with determining whether their AI use case meets different compliance thresholds. For example, one of the risk classifications is ‘high-impact models with systemic risk’, which is determined by whether the model was trained with more than 10^25 FLOPs (floating-point operations).”
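To see why that threshold is hard to reason about, the 10^25-FLOP figure can be checked against a training run using the common rule of thumb that training compute is roughly 6 × parameters × tokens. That heuristic is a community approximation, not part of the Act's text, and the model sizes below are purely illustrative.

```python
# Illustrative check against the 10^25 FLOP threshold quoted above.
# Uses the widely cited ~6 * parameters * tokens approximation for
# training compute (a rule of thumb, not language from the Act).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6.0 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    """Would a model of this size plausibly fall under 'systemic risk'?"""
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2T tokens:
# 6 * 7e10 * 2e12 = 8.4e23 FLOPs, below the 1e25 threshold.
```

Even with this estimate in hand, a company still has to know its parameter and token counts precisely and trust the approximation — which is exactly the kind of ambiguity Christman points to.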

In addition, Christman questioned the rationale behind leaving out red teaming in the Act, stating that “lack of red teaming in the EU AI Act can create a myriad of security and safety issues, as it has been proven time and time again that basic safeguards can be circumvented.”

The AI Regulation Landscape So Far

Despite the undeniable benefits of AI, concerns have grown regarding its potential misuse and negative societal impacts. Issues like algorithmic bias, privacy violations, and the lack of transparency in AI decision-making processes have fueled the debate around the need for regulations.

In response, governments and international bodies around the world have begun exploring ways to regulate AI. In our recent report, we covered the efforts being made by the UK and US governments to provide comprehensive AI security guidelines. Last October, US President Joe Biden, in an Executive Order, further called for more transparency in AI development. China is not sleeping on the matter either: it released an AI governance framework back in 2022.

This EU AI regulation builds on that foundation, establishing the first comprehensive legal framework for AI and offering a potential global standard for responsible AI development and deployment.

Adnan Masood, Chief AI Architect at UST, told Techopedia that, unlike the regulations touted by the US and UK, the EU Act will have more influence on other global AI regulations because it places more compliance responsibility on developers than on consumers.

“The act’s approach of placing responsibility on the shoulders of application developers is a pivotal shift that could significantly influence U.S. regulations and the global AI landscape.”

The Bottom Line

The relentless pace at which AI models now crisscross the internet requires the implementation of safeguards to promote ethical development and responsible use of the technology. However, it is imperative to strike a balance in a way that does not impede innovations within the field.

The EU’s AI Act shines a torch on the path towards ensuring the responsible development and use of AI.

While challenges lie ahead for businesses navigating the new regulatory landscape, the Act presents a unique opportunity to build trust in AI technologies and pave the way for a future where ethical AI development is not sacrificed on the altar of profitability.


Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.