YouTube Implements AI Disclosure Requirement for Creators

Key Takeaways

  • YouTube now requires creators to disclose the use of AI in their content.
  • A new tool for labeling AI-generated content is available in Creator Studio.
  • Clearer labeling of AI videos could help prevent viewer deception.

YouTube now requires its creators to inform viewers when AI has been used to produce content that mimics real-life scenarios.

A feature within Creator Studio now requires users to indicate whenever they use altered or synthetic media, including AI-generated content that creates realistic depictions that could be mistaken for genuine people, places, or events.

Screenshot from YouTube Creator Studio

The move is designed to improve transparency and ensure that increasingly sophisticated AI-generated content does not mislead viewers. It comes amid concerns about the technology's impact, particularly ahead of the U.S. presidential election.

The initiative was first teased in November as part of YouTube's broader strategy to integrate new AI guidelines. It distinguishes between obviously fictional content, such as animations of improbable scenarios, and content that realistically portrays individuals or events using AI.

For example, videos that manipulate someone's appearance or voice, or that alter footage to misrepresent events, will require clear disclosure to the audience.

YouTube plans to enforce the rule by labeling most affected videos in the description area, with more prominent labels placed directly on the video itself for content involving sensitive subjects such as health or news.

The labeling feature will roll out gradually across YouTube's platforms, starting with the mobile app and followed by the desktop and TV versions.

YouTube is also considering enforcement measures, including applying labels itself to videos from creators who fail to disclose, especially when the content could mislead or confuse viewers.

In a related move, the popular AI image generator Midjourney recently decided to block users from creating fake images of President Joe Biden and former President Donald Trump, amid concerns over the potential abuse of generative AI tools for political misinformation ahead of the upcoming U.S. election.