OpenAI Forms Safety Committee As It Trains the Successor to GPT-4

Key Takeaways

  • OpenAI is launching a Safety and Security Committee.
  • The panel will include CEO Sam Altman as well as board chair Bret Taylor.
  • The company also teased work on its "next frontier model" to replace GPT-4.

The OpenAI board has created a Safety and Security Committee to guide its decisions, and has also started training the “next frontier model” to replace GPT-4.

Board chair Bret Taylor will helm the safety panel, which will also include CEO Sam Altman and directors Adam D’Angelo and Nicole Seligman. Rounding out the group is a range of company policy and technology leaders, including current Chief Scientist Jakub Pachocki as well as Aleksander Madry, Lilian Weng, John Schulman, and Matt Knight.

OpenAI said it would also tap outside experts for support, including former cybersecurity officials like John Carlin and Rob Joyce.

The committee’s first project is to examine and advance the company’s “processes and safeguards” over the course of 90 days. It will then present its recommendations to the full OpenAI board, which will publicly share which safety and security measures it plans to adopt from the review.

The new large language model, meanwhile, is poised to bring OpenAI to the “next level of capabilities” as it works toward artificial general intelligence. The company didn’t share further details. GPT-4o, in the interim, serves as a bridge of sorts, bringing speech and video input to all users, including real-time conversations with interruptions and simulated emotion.

The moves came just after OpenAI disbanded its Superalignment team, which was developing AI that would conduct safety checks on models and prevent rogue behavior. The team’s leaders, former Chief Scientist Ilya Sutskever and Jan Leike, left the company earlier in May.

The firm has already faced both internal and external turmoil over safety and security issues. The OpenAI board briefly ousted Altman in part over worries about how he handled AI safety, and government agencies in the US, UK, and elsewhere are either drafting regulations or setting their own guidelines. The committee theoretically helps OpenAI address problems before they become crises.

The reliance on an in-house committee contrasts sharply with Meta’s Oversight Board. That organization makes policy recommendations while staying independent of Meta, complete with experts who aren’t involved in the social media giant’s operations. It’s not yet clear how OpenAI will respond to suggestions, though, and the company is under pressure to compete with models like Google’s Gemini even as it tries to reassure politicians and critics.