U.S. May Limit China’s Access to AI Models Like ChatGPT

Key Takeaways

  • The U.S. Commerce Department is reportedly considering export restrictions on advanced AI models to China.
  • The policy would be limited to closed-source models, whose code and training data are kept private.
  • Officials have already tried to limit China's access to AI processing power.

The U.S. government is mulling regulations that would limit exports of advanced, closed-source AI models like OpenAI’s ChatGPT, according to Reuters sources.

The Commerce Department might judge the need for restrictions based on a computing power threshold from an executive order President Biden signed last October, which set a reporting trigger for models trained using more than 10^26 integer or floating-point operations. If an AI model requires enough computing resources to cross that threshold, the developer may have to report its plans to the Department and could face export limits.

The approach would reportedly apply only to future AI models, as no existing system is believed to exceed the threshold, although Google’s Gemini Ultra might be close.

The Commerce Department declined to comment.

The White House and regulators have restricted exports of AI-focused processors to China since 2022, hoping to prevent the country’s government from putting AI to military use. A model ban would prevent China from using ready-made American software, regardless of the hardware that powers it.

A rule proposed this January would also require American cloud computing providers like Amazon to notify the government if foreign clients train potentially dangerous AI models on their platforms.

The real-world effectiveness of any rules might be limited. They wouldn’t prevent China from using open-source AI models like Meta’s Llama 3. And without clear guidelines on closed-source model exports, the Chinese government might still obtain sufficiently advanced technology that slips just under the threshold.

At present, most concerns about AI abuse center on disinformation campaigns. There are worries that countries like China and Russia are using generative AI to manipulate elections and stoke internal tensions. Even if more advanced models aren’t used by militaries, they could help create more convincing fakes.