OpenAI Collaborates with US AI Safety Institute Ahead of New Initiatives

Key Takeaways

  • OpenAI has reassured US lawmakers of its commitment to safely deploying AI technologies.
  • This response follows concerns raised by five US senators regarding the company's safety practices.
  • OpenAI has allocated 20% of its computing resources to safety-related research and revised its employee non-disparagement policies.

OpenAI has sought to reassure US lawmakers that it is committed to rigorous safety protocols and responsible AI development. 

This statement comes after five senators, including Senator Brian Schatz of Hawaii, expressed apprehension about the company’s safety measures. 

Commitments and Actions of OpenAI and the US AI Safety Institute 

OpenAI’s chief strategy officer, Jason Kwon, responded to a recent letter that five US senators sent to the company’s CEO, Sam Altman, questioning OpenAI’s policies. He emphasized that OpenAI “is dedicated to implementing rigorous safety protocols at every stage of our process.”

CEO Sam Altman also announced in a post on X that OpenAI is working closely with the US AI Safety Institute. The collaboration will give the institute early access to OpenAI’s next foundation model to help advance the science of AI evaluation.

This latest development comes on the heels of criticism after OpenAI dissolved the team dedicated to creating safeguards against potentially harmful “superintelligent” AI systems, a move that coincided with the departure of senior executives Jan Leike and Ilya Sutskever.

In response to criticism, OpenAI recently lifted non-disparagement agreements for current and former employees, a move aimed at fostering transparency and addressing concerns about restrictive internal policies. OpenAI pledged to dedicate 20% of its computing resources to safety-related research over the coming years.

Notably, OpenAI has established a safety and security committee tasked with reviewing the company’s processes and policies. This committee is part of a wider initiative to address controversies regarding OpenAI’s commitment to safety, including incidents where employees felt unable to voice concerns. The formation of this committee follows the dissolution of OpenAI’s Superalignment team earlier this year, which raised questions about the company’s prioritization of safety.

Earlier this year, the National Institute of Standards and Technology (NIST) formally established the US AI Safety Institute, following its announcement by Vice President Kamala Harris at the UK AI Safety Summit in 2023.

This consortium, described by NIST as essential for developing science-based guidelines for AI safety, involves collaboration with companies such as Anthropic, Google, Microsoft, Meta, Apple, Amazon, and Nvidia.

OpenAI Faces Criticism Despite Industry and Government Collaboration

Despite OpenAI’s reassurances, concerns persist regarding the company’s safety culture. 

Bloomberg reported growing worries within the industry about OpenAI’s focus on safety, especially as it pursues the development of more powerful AI models. 

This concern echoes the controversy surrounding Altman’s brief ouster as CEO, which was initially speculated to have been driven by security and safety concerns but was later attributed to a “breakdown in communication.”

Notably, Altman serves on the US Department of Homeland Security’s Artificial Intelligence Safety and Security Board, providing guidance on the safe development and deployment of AI across critical US infrastructures. 

OpenAI has also significantly increased its federal lobbying efforts, spending $800,000 in the first half of 2024 compared to $260,000 for all of 2023.

The US AI Safety Institute, housed within the Commerce Department’s NIST, also collaborates with a consortium of leading tech companies. 

This group is responsible for implementing actions from President Biden’s AI executive order, including developing guidelines for AI safety practices like red-teaming, capability evaluations, risk management, and watermarking synthetic content.