China Proposes Mandatory Labels for AI-Generated Content

Key Takeaways

  • China will require AI-generated content to be clearly labeled.
  • Platforms must verify AI content and add warning labels.
  • The new rules will come into effect after a public comment period.

China’s Cyberspace Administration has introduced draft regulations requiring all AI-generated content to be clearly labeled. 

The proposal, announced on September 14, is set to take effect following a public comment period that ends on October 14, 2024.

If adopted, the regulation will standardize the identification of AI-generated material and safeguard the rights of citizens and organizations, China’s internet regulator said.

The draft outlines both explicit and implicit labeling requirements for AI-generated content. Explicit labels would include clear visual or audio cues, such as text prompts or warning symbols visible to users.

Implicit labels, on the other hand, would involve embedding metadata or digital watermarks within content files, ensuring that the origin and nature of any synthetic content can be traced.
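
As a rough illustration of the implicit-label idea, the sketch below embeds provenance fields into a PNG's text chunks using Pillow. The field names (ai_generated, provider, content_id) are assumptions for illustration only; the draft does not specify an exact schema.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical field names -- the draft rules do not define a concrete schema.
img = Image.new("RGB", (64, 64), "white")       # stand-in for an AI-generated image
meta = PngInfo()
meta.add_text("ai_generated", "true")           # nature of the content
meta.add_text("provider", "ExampleAI")          # service provider identity
meta.add_text("content_id", "c0ffee-1234")      # unique content identifier
img.save("labeled.png", pnginfo=meta)

# Reading the implicit label back from the saved file
loaded = Image.open("labeled.png")
print(loaded.text)  # {'ai_generated': 'true', 'provider': 'ExampleAI', 'content_id': 'c0ffee-1234'}
```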

Under the proposed measures, service providers will be obligated to apply these labels to all forms of AI-generated media, including text, audio, images, and video. For text content, clear warnings must appear at the beginning or end of the material. Audio files must include voice prompts or notices to inform listeners, while images and videos will need visible indicators alerting viewers to their AI origin.

Additionally, implicit metadata must accompany AI-generated content, providing information about the content’s attributes, the service provider’s identity, and a unique content identifier. 

Online platforms that host such content will be responsible for verifying these identifiers and ensuring the proper labels are applied. 
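
A minimal sketch of what such a metadata record and a platform-side check could look like is shown below. Everything here, from the field names to the hash-based identifier and the verify_label helper, is a hypothetical construction, not the schema or process defined in the draft.

```python
import hashlib
import json

def build_implicit_label(content: bytes, provider: str) -> dict:
    """Assemble a hypothetical implicit-label record: content attributes,
    provider identity, and a unique identifier derived from the file bytes."""
    return {
        "ai_generated": True,                               # content attribute
        "provider": provider,                               # service provider identity
        "content_id": hashlib.sha256(content).hexdigest(),  # unique content identifier
    }

def verify_label(content: bytes, label: dict) -> bool:
    """Platform-side check: the label must be present, declare the content
    as AI-generated, and carry an identifier that matches the file."""
    return (
        label.get("ai_generated") is True
        and bool(label.get("provider"))
        and label.get("content_id") == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    sample = b"synthetic image bytes"
    label = build_implicit_label(sample, "ExampleAI")
    print(json.dumps(label, indent=2))
    print("label valid:", verify_label(sample, label))        # True
    print("tampered:", verify_label(b"edited bytes", label))  # False
```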

The regulator also wants penalties for any entity that tampers with or removes these mandatory labels.

Beijing’s proposed regulations align with recent legislative efforts in California that would require AI developers to embed watermarking mechanisms in generative AI models. That push, backed by Elon Musk, would make content produced by generative AI models easy to identify through its metadata.