OpenAI and Anthropic Team Up with US AI Safety Institute for Research

Key Takeaways

  • OpenAI and Anthropic will let the U.S. AI Safety Institute test and evaluate their new AI models before release.
  • The collaboration focuses on identifying and mitigating AI risks.
  • The Institute will open a San Francisco office to broaden its engagement with the AI industry.

OpenAI and Anthropic agree to let the U.S. AI Safety Institute test their AI models before public release, aiming to mitigate potential risks.

The call for AI safety got a new boost on August 29, when the U.S. AI Safety Institute signed agreements with artificial intelligence companies OpenAI and Anthropic allowing it to test and evaluate their new AI models before release.

The agreements establish a framework for formal collaboration on AI safety research, testing, and evaluation. They give the Institute, which operates under the National Institute of Standards and Technology (NIST), access to evaluate the capabilities and potential risks of OpenAI's and Anthropic's models and to develop methods for mitigating those risks before the models reach the public.

Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized the importance of this collaboration, stating, “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The Institute plans to provide both companies with feedback on safety improvements to their models, working in close collaboration with its counterpart, the U.K. AI Safety Institute, under a memorandum of understanding (MoU) the two bodies signed last April.

In addition to this agreement, the U.S. AI Safety Institute is set to open an office in San Francisco to expand its reach, hire more top talent, work closely with the local AI community, and engage more with the wider AI research ecosystem.

The agreements come a few days after X owner and SpaceX CEO Elon Musk threw his weight behind a California AI safety bill (SB 1047) that would require top AI development companies to subject their AI models to safety tests before launching them. Another related bill (AB 3211), which would mandate that AI model makers watermark AI-generated content, is also under discussion.