Why Safety Will Lose Out in OpenAI’s Race to AGI

After a wave of PR scandals, OpenAI yesterday announced the launch of an AI safety council.

The new Safety and Security Committee, formed by the OpenAI board, will be responsible for “making recommendations on critical safety and security decisions for all OpenAI projects.”

At first glance, this appears to be a positive step toward responsible artificial intelligence development. But the committee is made up of insiders, and it arrives amid a wave of team leaders resigning, including the implosion of the superalignment team, as well as allegations of an ethically dubious decision to use a Scarlett Johansson soundalike for GPT-4o. This raises the question of whether OpenAI is best suited to set (or even follow) its own guidelines.

We explore the bumpy landscape in which the line between ethics and advancement blurs in the pursuit of ethical AI.

Key Takeaways

  • OpenAI has announced the creation of a Safety and Security Committee to oversee critical safety and security decisions for all OpenAI projects amid recent PR scandals.
  • The committee consists of insiders and comes at a time when several key team leaders, including those from the superalignment team, have resigned, raising questions about OpenAI’s internal stability and ethics.
  • OpenAI needs to tread the line between pushing for Artificial General Intelligence (AGI) and maintaining ethical standards.
  • Scandals — including allegedly using a voice similar to Scarlett Johansson’s for GPT-4o without her consent, lawsuits from publishers, and the outflow of key team members — suggest OpenAI needs to tread carefully in evolving AI within ethical constraints.

Muddu Sudhakar, co-founder and CEO of software company Aisera, told Techopedia:

“In light of the brouhaha with Scarlett Johansson as well as the high-level departures from its superalignment team — such as Ilya Sutskever and Jan Leike — OpenAI has been on the defensive. This has taken some of the attention away from the true innovations of GPT-4o.

“To that end, announcing a ‘Safety and Security Committee’ appears to make sense. OpenAI wants to get more control over the message – and show that it is serious about responsible AI,” Sudhakar said.

Unfortunately for OpenAI, it’s going to take more than safety-washing its public image to eliminate the questions surrounding its commitment to responsible AI development.

Feel the AGI: OpenAI’s Mentality of AGI at Any Cost

Altman and OpenAI’s commitment to AGI is well publicized, even to an absurd degree at times. Back in November 2023, The Atlantic reported that Sutskever had led employees in chants of “feel the AGI!”

Elsewhere, Altman has been extremely vocal about the organization’s commitment to developing AGI. In a talk at Stanford University, Altman outright stated he doesn’t care how much it costs to build AGI.

And herein lies the problem: It is difficult for an organization that is racing toward Artificial General Intelligence to truly place safety at the heart of its mission.

Jan Leike, OpenAI’s former head of superalignment, said something similar in an X post, warning that “safety culture and processes have taken a backseat to shiny products.”

Leike isn’t alone in voicing this criticism of the organization. Another ex-employee, Daniel Kokotajlo, who joined OpenAI in 2022 and resigned in April 2024, told Vox: “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

When considering these departures and OpenAI’s commitment to AGI, it’s difficult to imagine that a new safety council will put safety ahead of innovation.

This is doubly true when we factor in OpenAI’s financial motivations to create AGI, as it has transformed from a non-profit into a profit-driven juggernaut valued at $80 billion.

The Scarlett Johansson Fiasco: Controversy Over the Voice of ChatGPT

Unfortunately, OpenAI’s public image problems aren’t limited to its commitment to developing AGI.

Just recently, the AI lab came under fire after Scarlett Johansson released a statement saying she had received an offer from Altman, who wanted her to voice GPT-4o. Johansson declined the request, before later realizing that one of GPT-4o’s voices, known as Sky, sounded like her.

The statement noted that “I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.”

In response, Altman released a statement claiming that the voice of Sky wasn’t Johansson’s and that the voice actor behind Sky was cast before any outreach to the actress.

Irrespective of what happened behind the scenes, this conflict has created a PR storm for OpenAI, with some commentators arguing that the AI vendor attempted to imitate Johansson’s voice.

One such commentator is Tesla CEO Elon Musk who released a post on X saying,

“yeah, it’s obviously a ripoff. They were literally bragging about it,” referencing a post that Altman made earlier in the month which simply stated “her,” likely a reference to the 2013 film Her, in which Johansson voiced a virtual assistant the protagonist falls in love with.

At the very least, this entire fiasco presents serious questions about OpenAI’s management, and its willingness to accept boundaries.

From an outside perspective, and based on the information we have publicly available at this time, it appears that what’s best for business comes first for OpenAI.

ChatGPT’s House of Cards: Intellectual Property

Once again, this isn’t the only controversy that raises doubt over OpenAI’s commitment to responsible AI development. OpenAI’s flagship product sits on a house of cards, having been trained on the copyrighted works of writers and artists, often without consent.

This has led to much controversy from artists and writers who object to their content being taken and, in some instances, allegedly regurgitated word for word.

Most notably, The New York Times has filed a multi-billion-dollar lawsuit alleging that OpenAI created “a business model based on mass copyright infringement” by copying millions of copyrighted news articles to train its proprietary AI models and by building products that recite that content verbatim, undermining the publication’s core business as a competitor.

Prominent authors including John Grisham, George R.R. Martin and others have also sued OpenAI for alleged copyright infringement and using their work to train ChatGPT.

OpenAI has even admitted that “it would be impossible to train today’s leading AI models without using copyrighted materials.”

The Bottom Line

Together, the lawsuits and OpenAI’s statement suggest that the organization’s drive to develop AI has led it to push ethical considerations aside when doing so advances its own ends.

It will take more than a safety committee to make OpenAI rethink its approach to AI development. For the foreseeable future, OpenAI’s focus on AGI appears to come above safe and responsible AI development.

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology.