A group of AI researchers is calling for the establishment of a contingency plan aimed at preventing catastrophic outcomes if AI systems become uncontrollable.
In a recent statement, they expressed concerns about the possibility of losing human oversight over AI, which could lead to dangerous consequences for society.
Call for Global AI Oversight
The statement, reported by The New York Times, points to the inadequacy of current safety measures and calls for an international governance framework.
The group emphasizes that nations need to create authorities capable of detecting and responding to AI-related incidents and addressing associated risks.
Additionally, there is a growing need to manage AI models that could potentially pose significant dangers on a global scale.
The scientists’ proposal follows discussions from the International Dialogue on AI Safety, held earlier this month in Venice.
This event, organized by the nonprofit Safe AI Forum, brought together AI experts from several countries to assess the growing risks posed by AI advancements.
Leading computer scientists from around the world, including @Yoshua_Bengio, Andrew Yao, @yaqinzhang and Stuart Russell, met last week and released their most urgent and ambitious call to action on AI Safety from this group yet.
— International Dialogues on AI Safety (@ais_dialogues) September 16, 2024
Notably, the proposal letter for the contingency plan was signed by over 30 scientists from countries including the U.S., China, the U.K., Canada, and Singapore.
One of the key figures involved, Johns Hopkins University Professor Gillian Hadfield, highlighted the critical lack of a global authority capable of responding to potential crises stemming from autonomous AI systems, further emphasizing the need for coordinated action.
The consensus statement emphasizing that AI safety is a global public good, from our International Dialogue on AI Safety in Venice last week here https://t.co/WQiYrijbo4
— Gillian Hadfield (@ghadfield) September 16, 2024
The Growing Risk of AI Advancements
The call for a global contingency plan comes amid growing concerns over the rapid development of AI systems and the declining scientific cooperation between superpowers like the U.S. and China.
In March, a report commissioned by the U.S. State Department warned of the “catastrophic” national security risks posed by rapidly evolving AI technology.
The report, based on interviews with over 200 experts, painted a grim picture of the potential for AI to pose an extinction-level threat to humanity if left unchecked.
While international bodies like the G7 and the United Nations have begun outlining frameworks for handling AI’s growth, concerns persist.
Many tech executives argue that over-regulation could stifle innovation, particularly in the European Union, where regulatory proposals are among the most stringent.