Bletchley Declaration: Global Industry Reactions and Expert Opinions

KEY TAKEAWAYS

Overall, the Bletchley Park AI Safety Summit appears to have been a significant success – though there are critiques to be made.

The British Government orchestrated a successful two-day summit, bringing together leaders from around the world to agree on a common stance on AI safety.

In what has been described as a ‘diplomatic coup’ for the United Kingdom as part of its post-Brexit ‘Global Britain’ pivot, the conference aimed to deliver unity in the face of the unprecedented challenges – both the benefits and the risks – of Artificial Intelligence (AI).

In attendance were US Commerce Secretary Gina Raimondo, US Vice President Kamala Harris, and Chinese Vice-Minister of Science and Technology Wu Zhaohui, among ranks of other leaders from across the EU and G7.

The meeting at Bletchley Park, where the Enigma code was broken, was also attended by high-profile figures including Elon Musk.

Let’s look at the key outcomes and how experts around the world reacted at the close of the event.

What Happened at the Bletchley Park AI Safety Summit?

As a result of the summit, the Bletchley Declaration on AI, focusing on ‘frontier AI’ (a reference to generative AI services like ChatGPT), was signed by 27 countries and the European Union.


In the document, a two-point agenda for addressing so-called ‘frontier AI risk’ is set out:

  • Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • Building respective risk-based policies across countries to ensure safety in light of such risks, collaborating as appropriate while recognizing that approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

How Has The World Reacted?

Digging into reactions from leading figures in Artificial Intelligence worldwide reveals mixed responses.

Anthony Cohn, Professor of Automated Reasoning at the University of Leeds and Foundational Models Theme lead at the Alan Turing Institute, told the Science Media Centre:

“The present declaration is heavy on vision, but, unsurprisingly, light on detail, and it is likely that the ‘devil will indeed be in the detail’ to actually create an effective regulatory regime internationally, which ensures safe deployment of AI, whilst also facilitating beneficial applications.

 

“The involvement of many of the key countries and organisations worldwide adds to the credibility of the declaration. This is inevitably the first of many steps that will be needed, and indeed two further meetings are already envisaged within the next year.”

Meanwhile, Rashik Parmar, CEO of BCS, The Chartered Institute for IT, said:

“The Bletchley Declaration takes a more positive view of the potential of AI to transform our lives than many thought, and that’s also important to build public trust.

 

“I’m also pleased to see a focus on AI issues that are a problem today – particularly disinformation, which could result in personalised fake news during the next election – we believe this is more pressing than speculation about existential risk. The emphasis on global co-operation is vital to minimise differences in how countries regulate AI.

 

“After the summit, we would like to see government and employers insisting that everyone working in a high-stakes AI role is a licensed professional and that they and their organisations are held to the highest ethical standards.”

And Will Cavendish, global digital leader at sustainable development consultancy Arup and former DeepMind and UK Government advisor, said:

“It’s great to see the communique acknowledge the enormous global opportunities for AI as well as safety and regulation, which is rightly an important part of the summit. We can’t afford to be scared of AI, as we simply can’t solve humanity’s biggest challenges, like climate change and the biodiversity crisis, without embracing safe and effective AI.

 

“When examining regulation, attendees at the summit must remember to consider an ‘ethic of use’ – where we have an obligation to use technology that will help humanity – rather than only an ethic of ‘do no harm’.”

Ian Hogarth, Chair of the UK’s Frontier AI Taskforce, released a thread breaking down the details of the Bletchley Park Summit, highlighting that bringing the USA, EU, and China onto the same page marks significant progress towards a global response.

This was a sentiment mirrored by Chong Ja Ian, Professor of Political Science at the National University of Singapore, who highlighted the importance of Sino-American collaboration in AI regulation.

“Paraphrasing the Biden administration’s language, AI may be an area where the US and China must find areas where they can cooperate – especially since both are important actors where it comes to AI.”

However, Dr Marc de Kamps, Associate Professor in the University of Leeds’ School of Computing, whose areas of expertise include Machine Learning, points out:

“A moratorium on risky AI will be impossible to enforce. No international consensus will emerge about how this should work. From a technological perspective it seems impossible to draw a boundary between ‘useful’ and ‘risky’ technology and experts will disagree on how to balance these.

 

“The government, rightly, has chosen to engage with the risks of AI, without being prescriptive about whether certain research is off limits.

 

“However, the communique is unspecific about the ways in which its goals will be achieved and is not explicit enough about the need for engagement with the public.”

Aldo Faisal, Professor of AI & Neuroscience at Imperial College London, was optimistic:

“This is a very sensible declaration reflecting the current thinking. I particularly welcome that the focus has shifted from recent emphasis on science fiction-inspired existential risks towards pragmatic and multilateral regulation of AI.

 

“The real work starts now on fleshing out how to practically and sensibly structure regulation and its compliance.”

Elon Musk, on the other hand, shared a political illustration hinting that while the Bletchley Declaration takes a unified approach to identifying risks, it does little to prevent them from occurring.

The Bottom Line

Overall, the AI Safety Summit appears to have been a significant success on several fronts.

Firstly, it was a success for British Prime Minister Rishi Sunak, who hosted the Summit as part of a diplomatic effort towards improving the UK’s presence on the world stage.

Secondly, the Summit appears to have succeeded in bringing major geopolitical players such as the United States, EU, and China together around a common approach to AI safety.

However, it remains to be seen whether the Bletchley Park Declaration will become an effective instrument for international collaboration on AI safety amid sparse details and limited enforcement powers.

Sam Cooling
Crypto & Blockchain Writer

Sam Cooling is a crypto, financial, and business journalist based in London. Along with Techopedia, his work has been published in Yahoo Finance, Coin Rivet, and other leading publications in the financial space. His interest in cryptocurrency is driven by a passion for leveraging decentralized blockchain technologies to empower marginalized communities worldwide. This includes enhancing financial transparency, providing banking services to the unbanked, and improving agricultural supply chains. Sam has a Master’s Degree in Development Management from the London School of Economics and has worked as a Junior Research Fellow for the UK Defence Academy.