Sam Altman’s Stanford University Talk on AGI: 5 Things We Learned

KEY TAKEAWAYS

  • Sam Altman, CEO of OpenAI, discussed the future of AI at a Stanford University talk.
  • He critiqued GPT-4's performance, calling the current world-changing model 'mildly embarrassing' and hinting at far more impactful future models.
  • Altman is eager to invest heavily in AGI's development, prioritizing long-term benefits over cost concerns.
  • Iterative deployment of AI models, rather than infrequent large launches, may be one way to mitigate risks.
  • He proposed equitable global access to computing resources as a human right and acknowledged AI's potential for misuse.

Every technology needs a champion, and when it comes to artificial intelligence (AI), OpenAI co-founder and CEO Sam Altman is as close as it gets.

Since helping to found OpenAI in 2015, he has not only gone on to release popular AI-driven products like ChatGPT, DALL-E 3, and Sora, but has also become a leading advocate for artificial general intelligence (AGI) development.

During a Q&A session at the Stanford Seminar for Aspiring Entrepreneurs earlier this week, Altman sat down with Stanford adjunct lecturer Ravi Belani to share his thoughts on the future of AI development and how to mitigate risks on the road to AGI. The session is viewable in its entirety on YouTube.

Below, we examine some of the session's main talking points and break down their broader implications for the development of AI and AGI.

Sam Altman’s Stanford University Talk on AGI: 5 Things We Learned

1. GPT-4 is ‘Embarrassing’

One of the most interesting points to emerge from the discussion with Belani was that Altman isn't impressed by GPT-4's performance.

He said:

“ChatGPT is not phenomenal. ChatGPT is mildly embarrassing at best. GPT-4 is the dumbest model any of you will have to use again…by a lot.”

What's striking about these comments is that a "mildly embarrassing" large language model (LLM) has completely changed the technological landscape.

For example, ChatGPT has achieved adoption among 80% of Fortune 500 companies, reached 100 million weekly active users, and could even contribute to the automation of 300 million full-time jobs.

This raises the question: what kind of impact would a 'good' LLM, by Altman's definition, have on society and the global economy?

If Altman is correct that GPT-4 will pale in comparison to the next generation of models, then it’s time to prepare for a very disruptive few years.

2. The Cost of AGI Doesn’t Matter

During the event, Altman made the stunning admission that he doesn't care what AGI costs to build; he'll find a way to pay for it.

“Whether we burn $500 million a year or $5 billion or $50 billion a year, I don’t care. I genuinely don’t as long as we can stay on a trajectory where eventually we create way more value for society than that.

“And as long as we can figure out a way to pay the bills — like we’re making AGI, it’s gonna be expensive, it’s totally worth it.”

For Altman, the ends of AGI justify the means, and the price paid to develop the technology will pale in comparison to the economic value that it brings to the global economy.

Even Altman's highest estimate of $50 billion a year is a drop in the ocean next to the economic value that AI has to offer. For evidence of this, we need look no further than Bank of America's estimate that AI will boost the world economy by $15.7 trillion by 2030.
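
To put those figures side by side, here is a rough back-of-envelope comparison (ours, not Altman's, and it assumes spending continues at his top figure from now through 2030, roughly six years):

6 years × $50 billion per year = $300 billion, or about 1.9% of the projected $15.7 trillion.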

3. Access to Compute Will Become a Human Right

Perhaps Altman’s most shocking statement was the idea that access to computing resources could eventually become a human right.

“No matter where the computers are built, I think global and equitable access to use the computers for training as well as inference is super important.

“One of the things that is very core to our mission is that we make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can’t or don’t for a good reason.

“How we think about making training compute more available to the world is going to become increasingly important. I do think we get to a world where we sort of think about it as a human right to get access to a certain amount of compute, and we’ve got to figure out how to distribute that to people all around the world,” Altman concluded.

Altman’s comments highlight that as AI and automation play a larger role in the global economy, less economically developed countries will need support to invest in compute infrastructure. Otherwise, they will struggle to compete against countries that do.

4. Don't Pretend That AI Is All Good

During the Q&A session, an attendee asked how society could respond to concerns over the misuse of AI, particularly regarding global conflicts and elections. This question led Altman to acknowledge that AI wasn’t all good.

“One thing that I think is important is not to pretend like this technology or any other technology is all good.

“I believe that it will be tremendously net-good, but I think like with any other tool it’ll be misused — like you can do great things with a hammer and you can kill people with a hammer.

“I don’t think that absolves us, or you all, or society from trying to mitigate the bad as much as we can and maximize the good, but I do think it’s important to realize that with any sufficiently powerful tool, you do put power in the hands of tool users or you make some decisions that constrain what people in society can do.”

We have already seen a minority of users misuse AI to create deepfakes, phishing scams, and automated cyberattacks, and more of these threats are likely to emerge in the future.

Altman also highlighted that OpenAI, society, and elected representatives all have a voice in mitigating these risks but warned that society will not initially achieve the right balance.

For him, the solution lies in establishing a tight feedback loop between these stakeholders and balancing safety against freedom and autonomy.

5. Balancing Innovation and Responsible Deployment

Finally, a Stanford junior asked Altman how OpenAI plans to balance innovation with the responsible development of AGI, and his response offered some interesting insight into how risks could be mitigated.

“I think as the models get more capable, we have to deploy even more iteratively — have an entire feedback loop looking at how they’re used and where they work and where they don’t work,” Altman said.

“This world that we used to do where we can release a major model or update every couple of years — we’ll probably have to find ways to like increase the granularity on that and deploy more iteratively than we have in the past.

“And it’s not super obvious to us yet how to do that, but I think that will be key to responsible deployment.”

In short, vendors that want to develop AI more responsibly can do so, at least in part, by shipping smaller, more frequent updates and closely monitoring how each one is used, rather than betting everything on rare blockbuster launches.

The Bottom Line

AI development is in a wild place at the moment. While tools like ChatGPT and Gemini have captured lots of attention, no one, not even Sam Altman, has any idea of what this technology’s true limits are.

If there’s one message to take home from this Stanford Q&A, it’s that the role of AI is constantly evolving and that everyone has a chance to make their voice heard and guide the development process.
