OpenAI CEO Sam Altman Steps Down from Safety and Security Committee

Key Takeaways

  • OpenAI CEO Sam Altman will no longer serve on the Safety and Security Committee.
  • Carnegie Mellon professor Zico Kolter will lead the new committee.
  • The move follows increasing scrutiny from US lawmakers and internal criticism over OpenAI’s safety policies.

OpenAI announced on Monday that its CEO, Sam Altman, will no longer serve on the company’s Safety and Security Committee. 

The decision comes as OpenAI faces increased pressure from lawmakers and growing concerns over its handling of artificial intelligence (AI) risks.

OpenAI Restructures Safety Committee Amid Growing Scrutiny Over AI Risks

According to a September 16 blog post, the newly formed committee will be chaired by Zico Kolter, a professor at Carnegie Mellon University, and will include Quora CEO Adam D’Angelo, retired General Paul Nakasone, and former Sony executive Nicole Seligman. All four are existing members of OpenAI’s board of directors.

They will now play a central role in reviewing the safety of OpenAI’s models, ensuring that any potential security issues are addressed before public release.

The restructuring follows a letter from five US senators, addressed directly to Altman, raising concerns about OpenAI’s safety protocols.

The committee had already conducted a safety review of OpenAI’s latest model, o1, before Altman stepped down. Going forward, it will continue to receive updates from OpenAI’s internal safety teams and retain the authority to delay AI model releases if unresolved risks remain.

Notably, Altman’s departure from the safety committee comes at a time when OpenAI is facing criticism from former staff members and researchers. Several employees who were focused on mitigating long-term risks associated with AI have left the company, some publicly criticizing Altman for opposing stricter AI regulations. These employees argued that OpenAI’s growing commercial ambitions could compromise the company’s commitment to safety.

OpenAI’s increasing lobbying efforts reflect this tension. The company’s federal lobbying budget has surged to $800,000 for the first half of 2024, compared to just $260,000 for all of 2023.

Meanwhile, Altman has taken a seat on the Department of Homeland Security’s AI Safety and Security Board, advising on AI’s role in critical infrastructure.

As OpenAI looks to raise more than $6.5 billion in new funding, some observers worry that the company may shift away from its hybrid nonprofit model, prioritizing investor returns over its founding mission to develop AI that benefits all of humanity.