Inside Davos: Sam Altman Reflects on the Realities and Future of AI


Sam Altman gave a wide-ranging and somewhat soul-searching look into his psyche as he spoke at the World Economic Forum in Davos yesterday.

OpenAI’s chief executive officer has had an astonishing, controversial two years — from unleashing ChatGPT onto the world, to a very brief ouster from his role at the company, to a lawsuit brought by the New York Times over the use of its articles to train large language models (LLMs). He is the face of a technology that, without exaggeration, is rapidly changing the world.

So, Davos was a moment for world leaders, technology leaders, and the world at large to reflect on, discuss, and even argue about the road artificial intelligence is taking us down.

Key Takeaways

  • Sam Altman, CEO of OpenAI, discussed the profound impact of artificial intelligence (AI) on society during the World Economic Forum in Davos.
  • Altman acknowledged the increasing stress and tension associated with advancements in artificial general intelligence (AGI) and emphasized the need for caution and preparedness.
  • However, he defended the widespread use of the technology, expressing optimism that people can make ethical decisions about AI while recognizing its limitations and dangers.
  • Panelists discussed trust in AI, the evolving role of large language models (LLMs), and the ethical questions around training data, particularly in light of the New York Times’ lawsuit against OpenAI.

Speaking about the progress towards artificial general intelligence (AGI), Altman said the progression is so consequential that it is taking a toll on everyone involved:

“One thing I’ve observed for a while is that every step we take closer to very powerful AI, everybody’s character gets like plus 10 crazy points.

 

“It’s a very stressful thing, and it should be, because we’re trying to be responsible about very high stakes.

 

“As the world gets closer to AGI, the stakes, the stress, the level of tension, that’s all going to go up.”

Altman returned to this stress in answer to a question about what he referred to as his “ridiculous” removal from the leadership of OpenAI.

“This was a microcosm of it, but… as we get closer to very powerful AI, I expect more strange things. Having a higher level of preparation, more resilience, more time spent thinking about all of the strange ways things can go wrong, that’s really important.”


The Question of Safety

Altman, who has been at the center of the growing debate surrounding the safety of AI systems for humanity, takes a positive view — referring to generative AI in relatively benign terms as “a system that is sometimes right, sometimes creative, often totally wrong — you actually don’t want that to drive your car.

“But you’re happy for it to help you brainstorm what to write about or help you with code that you get to check.”

He maintains that humans are capable of making the right decisions about the ethical use of AI:

“People understand tools and the limitations of tools more than we often give them credit for, and people have found ways to make ChatGPT useful to them and understand what not to use it for.”

Even with the development of advanced AI models, humans “will make decisions about what should happen in the world… The OpenAI style of model is good at some things, but not good at a life-and-death situation.

“Now there’s a harder question than the technical one, which is who gets to decide what those values are — what the defaults are, what the bounds are — and how does it work in this country versus that country?

“What am I allowed to do with it versus not? So that’s a big societal question, one of the biggest.

“I think a very good sign about this new tool is that, even with its very limited current capability and its very deep flaws, people are finding ways to use it for great productivity gains — or other gains — and understand the limitations.

“AI has been somewhat demystified because people use it now, and I think that’s always the best way to pull the world forward with a new technology.”

Addressing concerns about the extent to which AI could replace human tasks, Altman noted: “It does feel different this time. General-purpose cognition feels so close to what we all treasure about humanity.”

And yet, “humans really care about what other humans think. That seems very deeply wired into us.”

On the same Davos panel, Marc Benioff, CEO of Salesforce, said that AI technology today is not at a point of replacing human beings but at a point of augmenting them. But he sounded a note of caution on the technology’s future direction:

“We just want to make sure that people don’t get hurt. We don’t want something to go really wrong… We’ve seen technology go really wrong and we saw Hiroshima—we don’t want to see an AI Hiroshima, we want to make sure that we’ve got our head around this now.

 

“That’s why I think these conversations, and this governance, and getting clear about what our core values are, is so important. Yes, our customers are going to get more margin — those CEOs are going to be so happy — but at the end of the day we have to do it with the right values.”

Can We Trust Large Language Models?

Benioff added that the rapid development of AI capabilities is raising questions of trust.

“The trust comes right up the hierarchy pretty quick — we’re going to have digital doctors, digital people, and these digital people are going to merge, and there’s going to have to be a level of trust.

“We are at this threshold moment because we’re all using Sam’s products and other products and going ‘Wow!’. We’re having this incredible experience with AI, we have not quite had this kind of interactivity before — but we don’t trust it quite yet.

“We also have to turn to those regulators and say that if you look at social media over the last decade, it’s pretty bad — we don’t want that in our AI industry, we want to have a good healthy partnership with these regulators.”

Intel CEO Pat Gelsinger, speaking in a CNBC interview at Davos, said: “You’ve now reached the end of today’s AI utility. This next phase of AI, I believe, will be about building formal correctness into the underlying models.

“Certain problems are well solved today in AI, but there are lots of problems that aren’t.

“How do you prove that a large language model (LLM) is actually right? There are a lot of errors today.

“You still need to know, essentially, ‘I’m improving the productivity of a knowledge worker’. But at the end of the day, I need the knowledge worker to say is it right.”

Clara Shih, CEO of Salesforce AI, told CNBC that the best way to improve the accuracy of LLMs is through experimentation and co-piloting tests. AI systems can adjust as users get comfortable that the technology can be trusted in high-stakes scenarios.

Shih said three phases will guide AI adoption:

  • actively using the technology as a work assistant;
  • watching the technology in autopilot mode to ensure its accuracy;
  • finally, trusting the technology to work as it should.

“You can tell the AI to be conservative for higher stakes until a human co-pilot essentially graduates it to autopilot,” Shih said.
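Shih’s “conservative until graduated” idea maps onto a simple human-in-the-loop gating pattern. The sketch below is purely illustrative, not Salesforce’s or OpenAI’s actual API; the class name, task categories, and threshold are hypothetical. It shows how high-stakes requests can be routed to a human co-pilot until a reviewer explicitly graduates that category to autopilot.

```python
from dataclasses import dataclass, field


@dataclass
class CopilotGate:
    """Hypothetical gate: high-stakes tasks require human review until a
    reviewer 'graduates' that task category to autopilot."""
    graduated: set = field(default_factory=set)  # categories trusted to run unattended
    stakes_threshold: float = 0.7                # stakes at or above this need review

    def route(self, category: str, stakes: float) -> str:
        # Low-stakes work, or categories a human has already graduated,
        # can run on autopilot; everything else goes to a human co-pilot.
        if stakes < self.stakes_threshold or category in self.graduated:
            return "autopilot"
        return "human_review"

    def graduate(self, category: str) -> None:
        # Called once reviewers are satisfied with the model's track record.
        self.graduated.add(category)


gate = CopilotGate()
print(gate.route("draft_marketing_email", stakes=0.2))     # autopilot
print(gate.route("approve_loan_application", stakes=0.9))  # human_review
gate.graduate("approve_loan_application")
print(gate.route("approve_loan_application", stakes=0.9))  # autopilot
```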

“It Could Go Very Wrong”

Back to Altman at Davos, who — even as an AI optimist — concedes that the doomsayers warning of the potential damage AI could cause to humanity are not “guaranteed to be wrong”.

“There’s a part of it that’s right, which is that this is a technology that’s clearly very powerful, and we don’t know — we cannot say with certainty — exactly what’s going to happen.

 

“That’s the case with all new major technological revolutions, but it’s easy to imagine with this one that it’s going to have massive effects on the world and that it could go very wrong.

 

“Not having caution, not feeling the gravity of what the potential stakes are would be very bad, so I like that people are nervous about it.”

He said of the team at OpenAI: “We have our own nervousness, but we believe that we can manage through it.

“The only way to do that is to put the technology in the hands of people — let society and the technology co-evolve and, sort of step by step, with a very tight feedback loop and course correction, build these systems that deliver tremendous value while meeting the safety requirements.

“The technological direction that we’ve been trying to push it in is one that we think we can make safe, and that includes a lot of things.

“We believe in iterative deployment, so we put this technology out into the world along the way, so people get used to it, so we have time as a society — our institutions have time — to have these discussions to figure out how to regulate this, and how to put some guardrails in place.

“It’s good that people are afraid of the downsides of this technology — it’s good that we’re talking about it — it’s good that we and others are being held to a high standard.

“We can draw on a lot of lessons from the past about how technology has been made to be safe and also how the different stakeholders in society have handled their negotiations about what safe means and what is safe enough.

“But I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us and the other people doing similar things.

“It is on us to figure out a way to get the input from society about how we’re going to make these decisions — not only about what the values of the system are, but what the safety thresholds are, and what kind of global coordination we need to ensure that stuff that happens in one country does not negatively impact another.”

OpenAI vs New York Times: The Ethics of Managing Training Content

One of the aspects of generative AI that will require new approaches is compensating content owners for using their content in training data, Altman told the Davos panel.

The New York Times is suing OpenAI and Microsoft, claiming that they copied millions of Times articles to train the large language models that power ChatGPT and Microsoft Copilot. The lawsuit states that these models “threaten high-quality journalism” by affecting news outlets’ ability to protect and monetize their content.

“There’s a great need for new economic models,” Altman said. “I think the current conversation is focused a little bit at the wrong level, and I think what it means to train these models is going to change a lot in the next few years.”

Altman said that what OpenAI aims to do with data from the New York Times and other publishers is link out to them as sources of real-time information in response to users’ queries rather than using them to train the model.

“We could also train the model on it, but… we’re happy not to do that with any specific [provider]. But if you don’t train on any data, you don’t have any facts [to train the model on]”, Altman added.

OpenAI was hoping to train on Times data, “but it’s not our priority; we actually don’t need to train on their data,” Altman said. “This is something that people don’t understand — any one particular training source, that doesn’t move the needle for us that much.”

Altman suggested that the next stage of development for LLMs will be the ability to reason over smaller, high-quality datasets.

“The next thing that I expect to start changing is these models will be able to take smaller amounts of higher-quality data during their training process and think harder about it and learn more… As our models begin to work more that way, we won’t need the same massive amounts of training data.

“But what we want in any case is to find new economic models that work for the whole world, including content owners.

“I think it’s clear that if you read a textbook about physics, you get to go do physics later with what you learned — and that’s kind of considered okay.

“If we’re going to teach someone else physics using your textbook and using your lesson plans, we’d like to find a way for you to get paid for that.

“If you teach our models, if you help provide human feedback, I’d love to find new models for you to get paid based on the success of that.”

The Bottom Line

History is littered with humans making mistakes — innocently or maliciously, taking a few steps forward and then a few steps backward in the pursuit of safety, freedom, opportunity, and curiosity.

Whether or not AI ever finds a form of sentience, its mimicked version of intelligence is already massively transforming the world — and it has achieved that in less than 18 months since the public has been able to use it at scale.

In some places, we are already quite happy to hand over control to AI, or at least seek and follow its advice.

But we have an awful lot to consider and a limited window to do it in.

While world leaders at Davos discuss and debate the genie that is now out of the bottle, one thing is certain: when they gather again next year, it will feel like a decade has passed in the AI landscape.


Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries, covering commodity, equity, and cryptocurrency markets as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.