Sixteen companies have signed up to a collective, voluntary agreement on artificial intelligence (AI) safety standards at an industry summit at Bletchley Park, England.
British Prime Minister Rishi Sunak confirmed the list of Big Tech firms and AI developers, with signatories agreeing to work together and share information, as well as to invest in cybersecurity and give priority to the risks posed by the technology.
Sunak claimed the “precedent for global standards” on AI would provide “transparency and accountability” and help to accelerate benefits in the rapidly evolving landscape.
Amazon, Google, Meta, Microsoft, OpenAI, and Samsung are among the established companies to have signed the charter, known as the Frontier AI Safety Commitments, with talks continuing at a follow-up event in Seoul this week co-chaired by Sunak and South Korean President Yoon Suk Yeol.
China’s Zhipu and the United Arab Emirates’ Technology Innovation Institute are also represented within the group of companies, adding a more international element to an otherwise U.S.- and Europe-dominated alliance.
The UK government is said to believe its “light touch” approach to AI regulation has been vindicated by buy-in from countries that might otherwise have been hesitant to bind their companies to a collective agreement.
OpenAI said the standards represented “an important step toward promoting broader implementation of safety practices for advanced AI systems,” with Anna Makanju, the company’s vice-president for global affairs, adding “the field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science.”
However, the consensus was not shared by other players and figures across the AI spectrum.
Canadian computer scientist Yoshua Bengio, known as a “godfather of AI,” welcomed the commitments but warned that voluntary commitments alone do not go far enough as safeguards and would have to be accompanied by regulation.
That sentiment was shared by Fran Bennett, interim director of the Ada Lovelace Institute, who stated more bite was required.
“People thinking and talking about safety and security, that’s all good stuff. So is securing commitments from companies in other nations, particularly China and the UAE. But companies determining what is safe and what is dangerous, and voluntarily choosing what to do about that, that’s problematic.”
Bennett argued “now you need some teeth to it” in the form of regulation and accountability, so that institutions, rather than the companies themselves, can draw the line on behalf of the people impacted.
The full list of companies to have signed up to the commitments includes Amazon, Anthropic, Cohere, Google (including DeepMind), G42, IBM, Inflection AI, Meta, Microsoft, Mistral AI, Naver, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai.