YouTube is rolling out AI technology to detect deepfake content and manage unauthorized use of artists’ likenesses.
In a blog post, YouTube announced it was developing new tools to safeguard artists and creators.
The platform’s work on “likeness management technology” centers on two new tools. The most interesting enables creators, actors, musicians, athletes, and anyone else to detect “AI-generated content showing their faces on YouTube” and manage it.
This will help individuals spot deepfakes that use their faces and request the removal of such content. It builds on YouTube’s policy updates from July, which allowed users to request the removal of AI-generated content simulating their voice or face.
YouTube confirmed that anyone “accessing creator content in unauthorized ways violates [our] terms of service.” It also reiterated that AI-generated content needs to adhere to the platform’s Community Guidelines.
The company also pointed to its new generative AI tools, such as Dream Screen for Shorts, and the safeguards it has built into them to prevent potential misuse.
YouTube’s Efforts Against Deepfakes
YouTube is also developing new technology in Content ID. The synthetic-singing identification technology will allow artists and musicians to spot and manage AI-generated content simulating their singing voices. YouTube’s partners are refining this technology, and a pilot program is planned for early next year.
There’s no word yet on a pilot program or roll-out date for the deepfake technology.
Given the worrying rise in deepfakes, this move is another step in the right direction for YouTube.