In the year since the launch of OpenAI’s ChatGPT chatbot, Artificial Intelligence (AI) has made strides towards the next phase — Artificial General Intelligence (AGI), in which AI systems demonstrate human-like cognitive abilities.
While this opens up new opportunities for technological development, it also poses risks to humans, ranging from algorithmic bias to the possibility of existential catastrophe.
The debate over AI safety has intensified since the sudden ouster and rehiring of OpenAI CEO Sam Altman, which raised questions about the rapid pace of the company’s AI development and whether it has been operating with sufficient safeguards in place.
On December 18, OpenAI released the initial version of its Preparedness Framework, stating:
“The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework.
“It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.”
A recent survey of AI engineers conducted by early-stage technology investment firm Amplify Partners found that the average AI engineer believes there is around a 40% chance that AI will destroy the world.
The possibility is so widely discussed in the industry that the term p(doom), meaning “probability of doom”, has moved beyond message-board jokes to become a measure of the odds that AI will cause a doomsday scenario. The higher the p(doom) number on a scale of 0 to 100, the more likely an expert believes it is that AI will kill us all.
“There is no consensus,” Amplify stated in its analysis of the survey results.
“But <1% think there is 100% chance; 12% think there is no chance. At the same time, the majority of folks think that p(doom) < 50%, and most also think that p(doom) > 1%.
“Note that we did not define P(doom) or a time horizon in the survey for participants.”
Unsurprisingly, the team at UK-based startup Conjecture, which is focused on AI safety, sees a higher probability of doom.
Respondents to an internal survey estimated a 70% chance of human extinction from advanced AI getting out of control, and an 80% chance of human extinction from advanced AI in general, combining loss-of-control and misuse risks.
What are the potential risks posed by AGI, and how do AI experts view those risks?
AI’s Existential Threats
The proliferation of AI systems could enable social manipulation that increases the risk of widespread totalitarianism, cyber-attacks that create geopolitical instability, and the engineering of enhanced pathogens for biological warfare.
Further, the emergence of AGI could lead to the creation of a superintelligence that develops beyond human control.
As Nick Bostrom, philosopher and founder of the Future of Humanity Institute at the University of Oxford, explains in the “paperclip maximizer” thought experiment, “a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way… might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.”
A superintelligent system would likely anticipate that humans may want to take control and would create safeguards to prevent being switched off.
Bostrom added:
“It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?”
The debate over the probability of doom finds leading AI developers at odds, with optimists pushing back against the “doomers”.
Theoretical computer scientist Scott Aaronson assigns a low probability to a paperclip-maximizer-style scenario and a higher likelihood to an existential catastrophe that involves AI in some way. Ultimately, however, he argues that making a firm prediction would first require a discussion of what it would mean for AI to play a critical role in causing such a catastrophe.
At the other end of the scale, AI researcher Eliezer Yudkowsky argues that AI development must be shut down; otherwise, “everyone will die”.
Paul Christiano, who runs the Alignment Research Center and previously headed the language model alignment team at OpenAI, said earlier this year that he has been debating the pace of AGI development with Yudkowsky for the past 12 years.
“The most likely way we die is not that AI comes out of the blue and kills us, but involves that we’ve deployed AI everywhere. And if for some reason they were trying to kill us, they would definitely kill us.”
Christiano estimates a 46% probability that humanity will somehow ‘irreversibly mess up our future’ within a decade of building powerful AI systems, a 22% probability of an AI takeover, and an additional 9% probability of extinction.
In line with Aaronson’s view that humans could cause destruction that involves AI indirectly, Christiano puts the “probability of messing it up in some other way during a period of accelerated technological change” at 15%.
For more opinions on risk, this infographic shows individual and group estimates of the probability of extinction from AI:
https://twitter.com/betafuzz/status/1729896912411672782
Will AI Safeguards Come Soon Enough?
Some AI engineers note that humans still control the development of these systems, so all is not yet lost when it comes to putting safeguards in place. However, the pace of progress is challenging that control.
“If you just follow the trend of the progress we’ve seen in the last few years and ask… is there a chance we could achieve in many areas or abilities comparable or better than humans? A lot of people in AI think that’s very likely, let’s say, in the next decade,” AI expert Yoshua Bengio said in a recent FT podcast.
“If it comes within the next decade or worse, within the next few years, I don’t think (and many others don’t think) that society is organized, is ready to deal with the power that this will unleash and the disruptions it could create, the misuse that could happen.
“Or worse, what I’ve started to think more about this year is the possibility of losing control of these systems. There is already a lot of evidence that they don’t behave the way that we want them to.”
As a counterpoint to that view, Yann LeCun, Chief AI Scientist at Meta, is an optimist who expects the emergence of superhuman AI to be progressive, starting with systems as intelligent as “baby animals” and constrained by guardrails that keep them safe as they scale up. While systems will at some point become smarter than humans, he argues, they will not necessarily be sentient and will remain under control.
“The emergence of superhuman AI will not be an event. Progress is going to be progressive.
“It will start with systems that can learn how the world works, like baby animals.
“Then we'll have machines that are objective driven and that satisfy guardrails.
“Then, we'll have machines…”
— Yann LeCun (@ylecun), December 17, 2023
The Bottom Line
AI engineers and researchers will continue to debate the various safety issues that AI raises as open-source and private models develop rapidly.
The fact that so many developers are aware of the possibility of AI advancing beyond human control to cause destruction may help ensure that adequate safeguards are put in place.
However, the future of AI systems, and the part they will play in how events unfold, remains uncertain.