Council of Europe’s AI Treaty: Prospects & Pitfalls


Driven by industry competitiveness, the need for speed regarding AI innovation isn’t letting up as Big Tech continues to invest heavily in AI programs. The implications of this rapid evolution are bittersweet.

Yes, AI is transforming many essential social institutions, such as healthcare and education, but it’s also becoming increasingly embroiled in criminality. Deepfakes, misinformation, and privacy violations have become alarmingly pervasive, underscoring the need for meaningful regulation and legislation.

After years of drafting and negotiation, the Framework Convention was adopted by the Council of Europe on 17 May 2024. While industry professionals broadly support regulation, there has been considerable pushback against what many think is an injudicious assessment of the risks and rewards that could impede innovation.

In this article, we explore what experts say about the current state of AI regulation.

Key Takeaways

  • The Framework Convention is the first legally binding international treaty on AI regulation.
  • The treaty acknowledges AI’s societal benefits but stresses the need to guard against risks like discrimination, privacy violations, and misuse for repressive purposes.
  • Experts express concerns that current regulatory measures could hinder innovation and create an uneven competitive landscape.
  • Smaller AI companies may struggle with complex compliance requirements, which could disproportionately impact their ability to innovate.

Key AI Regulatory Acts Today

AI Convention: A Major Milestone in AI Governance

Adopted by the Council of Europe on 17 May 2024 after years of drafting and negotiation, the Framework Convention marks a major milestone in AI governance.

It became officially open for signature on 5 September 2024 at the Conference of Ministers of Justice in Vilnius, where the EU and the UK were among the first to sign.


The UK’s signatory, Lord Chancellor Shabana Mahmood, stated:

“Artificial Intelligence has the capacity to radically improve the responsiveness and effectiveness of public services and turbocharge economic growth.”

This positive outlook is reflected in the treaty itself, which acknowledges AI’s ability to promote, among other things, “individual and societal wellbeing, sustainable development, gender equality, and the empowerment of all women and girls.”

However, it’s not all sunshine and rainbows.

The treaty also highlights how AI can undermine human dignity and autonomy, exacerbate discrimination and inequality, and be used for repressive purposes.

Recognizing the tech’s dark potential, the Lord Chancellor warned:

“We must not let AI shape us—we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”

The treaty follows in the footsteps of the Bletchley Declaration, which was also hailed as a groundbreaking international effort to combat AI’s potential risks.

However, unlike the Declaration, the Framework Convention is not a non-binding commitment: once ratified, adherence will be compulsory. So what exactly are the signatory countries signing up to?

Major Principles

Three key safeguards underpin the Framework Convention:

  • Protecting human rights: Ensuring personal data is handled responsibly, privacy is upheld, and AI systems do not engage in discriminatory practices.
  • Protecting democracy: Requiring countries to take measures to prevent AI from undermining public institutions and democratic processes.
  • Protecting the rule of law: Obliging signatory nations to regulate AI-related risks, protect citizens from potential harm, and ensure the safe use of AI.

Crucially, the treaty attempts to balance the promotion of innovation and the mitigation of irresponsible use.

As Secretary of State for Science, Innovation, and Technology Peter Kyle said, AI’s full transformative potential can only be reached if “people have faith and trust in the innovations which will bring about that change.”

Balancing Innovation & Accountability: Experts’ Views

Gary Marcus: Advocating for Ethical Standards & Transparency in AI

In July 2024, renowned cognitive scientist Gary Marcus spoke candidly at the AI for Good Innovate for Impact event in Shanghai about the need for strong ethical standards and robust regulatory frameworks within the development of AI.

Having warned that the priorities of tech titans don’t always align with humanity’s interests, Marcus stated, “We shouldn’t be letting the big tech companies decide everything for humanity.”

Complete transparency was at the heart of Marcus’ appeal:

“We need full accounting of what data is used to train models, full accounting of all AI-related incidents as they affect bias, cybercrime, election interference, market manipulation, and so forth.”

While treaties like the Framework Convention are expected to alleviate such problems, several industry and legal professionals recognize the potential threat they pose to innovation.

Kate Deniston & Louise Lanzkron: Concerns Over Varying Regulatory Interpretations

Kate Deniston and Louise Lanzkron from the international law firm Bird & Bird suggest that the treaty’s broad and flexible principles could result in varying interpretations and applications across different countries.

Inconsistent regulatory standards could easily lead to an unlevel playing field, hindering innovation in certain signatory states.

Mark Zuckerberg & Daniel Ek: Warning Against Pre-emptive Regulation

In a recent article, Mark Zuckerberg, Meta Founder and CEO, and Daniel Ek, Spotify Founder and CEO, shared views about what they perceive to be Europe’s desire to restrain innovation through regulation:

“Regulating against known harms is necessary, but pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation. Europe’s risk-averse, complex regulation could prevent it from capitalizing on the big bets that can translate into big rewards.”

Mark Zuckerberg, Meta Founder and CEO, and Daniel Ek, Spotify Founder and CEO. Source: Meta

Dr. Fei-Fei Li: Criticism of SB-1047’s Impact on AI Innovation

California Senate Bill SB-1047, also known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has received similar pushback for its perceived harmful effect on innovation.

Dr. Fei-Fei Li, widely recognized as the “Godmother of AI,” said:

“AI policy must encourage innovation, set appropriate restrictions, and mitigate the implications of those restrictions. Policy that doesn’t will at best fall short of its goals, and at worst lead to dire, if unintended, consequences.”

Li believes that SB-1047 “will harm our budding AI ecosystem” and “will unduly punish developers and stifle innovation.”

Andrew Ng: Pushback on Vague Reporting Requirements in SB-1047

Similarly, computer scientist and technology entrepreneur Andrew Ng has consistently criticized SB-1047.

In a recent tweet, he protested against the Bill’s vague and ambiguous reporting and certification requirements. Arguing that it is unreasonable to expect developers to anticipate the potential harms their AI might cause when even leading researchers struggle to ascertain future impacts, Ng concluded:

“This creates a scary situation for developers. Committing perjury could lead to fines and even jail time. Some developers will have to hire expensive lawyers or consultants to advise them on how to comply with these requirements.”

Smaller Companies: Struggling With Complex Regulatory Frameworks

While Marcus’s observations about tech titans’ accountability are timely and should be carefully considered, perhaps it is not big tech that the latest AI regulations will hit the hardest.

Complex legal frameworks, overregulation, and strict, often confusing standards can stifle innovation, especially for smaller companies with fewer resources, for whom the simple act of compliance becomes disproportionately burdensome.

The Bottom Line

Despite the good intentions behind rulemaking, critics of current measures view complex frameworks and general overregulation as potential enemies of innovation.

However, in the face of rapid advancement, the Framework Convention certainly marks a significant step toward viable worldwide regulation.

Its focus on protecting human rights, democracy, and the rule of law is admirable, but balancing innovation and regulation will always be a delicate task.


John Raspin
Technology Journalist

John Raspin spent eight years in academia before joining Techopedia as a technology journalist in 2024. He holds a degree in Creative Writing and a PhD in English Literature. His interests lie in AI and he writes fun and authoritative articles on the latest trends and technological advancements. When he's not thinking about LLMs, he enjoys running, reading and writing songs.