Situational Awareness: Where Is AI Heading Next?

Artificial General Intelligence (AGI) has been getting increasing airtime, and the prospect of its imminent arrival is generating sharply divided responses.

Will it take our jobs? Will it destroy the planet? Will it facilitate human flourishing and efficiency beyond what we imagine or think possible?

It depends on who you ask.

The tension surrounding the pros and cons of AI and its rapid advancements is palpable. Most recently, it has been exemplified in a lengthy series of essays written by Leopold Aschenbrenner, a former employee of OpenAI’s disbanded superalignment team who was dismissed for allegedly leaking company secrets.

“Situational Awareness: The Decade Ahead” sets out a startling timeframe of advancements. According to Aschenbrenner, we will move from LLMs to AGI as early as 2027 and, conceivably, to superintelligence shortly after that. While it will be revolutionary, the arrival of AGI also poses real threats, and Aschenbrenner pulls no punches when outlining what he perceives to be potential catastrophes.

Whether you agree with the thesis or not, “Situational Awareness” encourages us to consider whether the world is prepared for the looming development of AGI, let alone superintelligence.

Key Takeaways

  • AGI could be achieved as early as 2027, with superintelligence following shortly after.
  • We might lose control, as current alignment techniques may not scale to superintelligent systems.
  • National security, especially related to espionage, is among the most significant risks.
  • Displacement of jobs and AI’s environmental impact are critical issues that remain unresolved.

The Jump: LLMs > AGI > Superintelligence

Aschenbrenner begins by outlining how GPT-2 to GPT-4 “took us from ~preschooler to ~smart high-schooler abilities in 4 years.” Given this rate of progress, “we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.” This would feasibly land us in the age of AGI.

Imagine millions of AI engineers vastly better than humans at finding, developing, and applying research. Imagine such an army “furiously working on algorithmic breakthroughs, day and night.”

Ben Goertzel, AI Researcher and CEO of SingularityNET, suggested:

“An AGI with the technical competence of human scientists and engineers will be able to study itself and improve itself, and scale itself up, triggering a very rapidly advancing ‘intelligence explosion.'”

Similarly, Aschenbrenner suggests that once AGI automates AI research and learns recursive self-improvement, the systems won’t take long to become superhuman, capable of “complicated behavior we couldn’t even begin to understand.”

At times, Aschenbrenner seems optimistic and excited about the capabilities of superintelligence. Soon, he deduces, “they’d solve robotics, make dramatic leaps across other fields of science and technology within years, and an industrial explosion would follow.”

Other predictions are characterized by deep concern. He said:

“Superintelligence would likely provide a decisive military advantage, and unfold untold powers of destruction. We will be faced with one of the most intense and volatile moments of human history.”

But whether you or I will be there to see it is certainly up for debate. Grady Booch, Chief Scientist for Software Engineering at IBM Research, has famously dismissed such timelines outright on X.

Gary Marcus, Professor Emeritus of Psychology and Neural Science at New York University, is not quite that skeptical, but he consistently criticizes predictions about the current rate of advancement and AGI’s imminent arrival.

According to Marcus, the great AI retrenchment has begun. He wrote in a recent blog post:

“It was always going to happen; the ludicrously high expectations from last 18 ChatGPT-drenched months were never going to be met. LLMs are not AGI, and (on their own) never will be; scaling alone was never going to be enough. The only mystery was what would happen when the big players realized that the jig was up, and that scaling was not in fact ‘All You Need.'”

But Marcus has been proven wrong in the past.

Meanwhile, Tesla CEO and X owner Elon Musk believes AGI will be here in 2025 or 2026 at the latest and hopes it will be “nice to us.”

We Could Lose Control, Couldn’t We?

As AI becomes smarter than humans, “There is a real possibility that we will lose control,” Aschenbrenner warns.

Building trust with LLMs and finding ways to mitigate the risks of hallucinations have been topics of much discussion, but when it comes to superintelligence, the issues become far more complex and potentially unsolvable:

“There is a very real technical problem: our current alignment techniques (methods to ensure we can reliably control, steer, and trust AI systems) won’t scale to superhuman AI systems,” says Aschenbrenner.

Essentially, we would be “forced to hand off trust to AI systems,” hoping they wouldn’t drift too far from human values.

LLMs can already behave deceptively, and as Aschenbrenner suggests, this behavior could worsen. They could “learn to seek power, [or] learn to behave nicely when humans are looking and pursue more nefarious strategies when we aren’t watching.”

However, there is still little evidence that AI is on course to become humanity’s master.

Examples of AI ‘lying’ mostly stem from flaws in machine learning systems. Sometimes, a model lacks the information to generate an accurate result because of conflicting or poor-quality data. Generative AI isn’t sentient; it can’t ‘lie’ the way humans do.

Espionage & the Risk to National Security

In an interview with Business Insider about his dismissal from OpenAI, Aschenbrenner relayed how his concerns about protecting “key algorithmic secrets from foreign actors,” namely the Chinese Communist Party (CCP), were viewed as “racist” and “unconstructive.”

However, his concerns were not unfounded: Chinese state-sponsored actors have a history of infiltrating critical infrastructure organizations, one example being the 2021 Microsoft Exchange hack.

Algorithmic secrets are vital to America’s national defense, yet Aschenbrenner argues that security at the leading AI labs is slack and that they are “basically handing the key secrets for AGI to the CCP on a silver platter.”

Government intervention may not be an ideal remedy, but he advocates for it, arguing that “the preservation of the free world against the authoritarian states is on the line.”

Is AI Going to Take Our Jobs & Destroy the Planet?

It’s become almost cliché to complain that AI will steal our jobs, but it remains a genuine concern. If LLMs can already automate people out of work, just imagine the effect that AGI or superintelligence will have. Aschenbrenner talks of AI co-workers, but will there be any need for us at all?

Avital Balwit, Chief of Staff at Anthropic, said in a recent article:

“These next three years might be the last few years that I work…I stand at the edge of a technological development that seems likely, should it arrive, to end employment as I know it.”

Every advancement and new iteration certainly increases the risk of humans becoming obsolete, but there’s an even bigger issue.

In an article for Medium, Dirk Songuer, Honorary Professor of Computer Game Design at the University of Essex, voices concerns about how much energy AI systems require to function. He criticizes Aschenbrenner for not addressing an apparent “disconnect between the acceleration of required resources versus the achieved outcomes.”

Songuer’s point is a valid one, especially given the world’s concerted efforts to reduce carbon intensity. Aschenbrenner, however, does not acknowledge it, enthusiastically describing how “millions of GPUs will hum” by 2030.

Building superintelligent machines that can wow the world is all well and good, but if their outcomes don’t have a meaningful impact, is it worth the drain on natural resources? For many, that trade-off comes at too high a price.

Will AGI and Superintelligence Disrupt Our Existence?

It’s hard to imagine how it won’t, and Aschenbrenner’s provocative work has alerted the world to some significant issues that will accompany the advancement to end all other advancements.

In a tweet about “Situational Awareness,” Zach Vorhies, author of Google Leaks: A Whistleblower’s Exposé of Big Tech Censorship, said that everything is about to change.

The Bottom Line

While no one really knows what the future holds, let’s hope that once we’ve “summoned superintelligence, in all its power and might,” there will still be room for us in the new world order.


John Raspin
Technology Journalist

John Raspin spent eight years in academia before joining Techopedia as a technology journalist in 2024. He holds a degree in Creative Writing and a PhD in English Literature. His interests lie in AI and he writes fun and authoritative articles on the latest trends and technological advancements. When he's not thinking about LLMs, he enjoys running, reading and writing songs.