A major international AI treaty drafted by the Council of Europe is expected to be signed by the US, UK, and EU.
The landmark accord emphasizes human rights and democratic values, setting out a legal framework covering the entire lifecycle of AI systems, from inception to proliferation.
The treaty is distinct from the EU’s AI Act in that it is open to states and jurisdictions outside the Council of Europe’s membership. Other countries, including Australia, Canada, and Japan, have pledged their consent after more than 50 nations contributed to drafting the agreement.
As reported by the FT, Peter Kyle, the UK’s minister for science, innovation, and technology, welcomed the international agreement and stressed the importance of taking this first step globally.
Kyle added that the treaty is the first with “real teeth”, but some critics have poured scorn on the lack of sanctions available under the “legally enforceable” deal. There are no financial penalties on the table to deter or punish entities that fail to adhere to its terms.
Compliance will be measured largely via monitoring, which, unless rigorously enforced, could undermine the intentions and purpose of the agreement.
The Council of Europe treaty is set to apply three months after five signatories (including three Council member states) ratify the accord.
Europe’s AI Act is going global.
I welcome today’s signature of the Council of Europe Framework Convention on AI.
In line with our AI Act, it provides a common approach for trustworthy, innovative AI compatible with democratic values.
The EU will continue to champion…
— Ursula von der Leyen (@vonderleyen) September 5, 2024
Other Significant Agreements On AI Standards
AI regulation is a hot topic at present, with various governments and industry representatives seeking consensus on how to address the global challenge.
The EU’s AI Act is the first wide-reaching regional law, and it applies to all companies that wish to operate in the jurisdiction. It has been described as “the new GDPR” because of the scope of the regulations organizations must comply with. That scope has already prompted pushback, with the likes of Apple delaying launches in Europe over compliance concerns. The same could apply to AI vendors.
Frameworks have been drawn up by the G7 nations and the signatories to the Bletchley Declaration, while the US has yet to pass a national AI law. Despite the lack of movement in Congress, California’s State Assembly overwhelmingly approved a controversial bill, known as SB 1047, which would compel tech giants such as Microsoft and OpenAI to build safety measures into their systems before releasing them to the public.