Google will soon use C2PA metadata to show whether images are AI-generated or AI-edited, integrating the feature into key products.
Google aims to increase transparency around AI-generated and AI-modified content. After joining the steering committee of the Coalition for Content Provenance and Authenticity (C2PA) earlier this year, the company has announced plans to integrate technology into key products in the coming months that identifies whether an image was originally captured by a camera or was subsequently modified or generated by AI.
A revamped “About this image” feature will soon reveal whether an image was created or edited with AI tools, and will be available in Google Images, Lens, and Circle to Search.
Google also intends to gradually expand the integration of C2PA metadata into its ad systems and is exploring ways to provide C2PA information on YouTube for camera-captured content.
Battling AI-Generated Fakes
Google’s technology builds on C2PA, a major industry initiative for addressing AI-generated imagery. C2PA metadata records an image’s origin and creates a digital trail as the file passes through hardware and software.
Collaborating with partners such as Amazon, Meta, and OpenAI, Google has improved watermarking technology for AI-generated content. The company helped develop the latest version of Content Credentials, which secures metadata about how an asset was created and modified. Thanks to enhanced validation methods, the tech giant claims this version is more secure and tamper-resistant.
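To make that digital trail concrete: per the C2PA specification, manifests in JPEG files are carried in APP11 segments as JUMBF boxes. Below is a minimal, illustrative sketch (not Google's implementation, and no substitute for a real C2PA library) that walks a JPEG's marker segments and checks whether such a segment appears to be present; actual verification would also require parsing and cryptographically validating the manifest.

```python
import struct

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG appears to carry a C2PA manifest.

    C2PA manifests are embedded in APP11 (0xFFEB) segments as JUMBF
    boxes; this only detects their presence, it does not validate them.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync with the marker stream
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: stop scanning
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:  # standalone markers
            i += 2
            continue
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        payload = jpeg_bytes[i + 4 : i + 2 + length]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True  # APP11 segment with JUMBF/C2PA content
        i += 2 + length
    return False
```

A presence check like this is what a platform needs before deciding whether to show a provenance label at all; tamper resistance comes from the cryptographic signatures inside the manifest, not from the container format.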
Adoption of the C2PA standard may be limited by sparse support from camera brands and software apps. Only a few Sony and Leica cameras support it, though Nikon and Canon have pledged to adopt it, and Apple and Google have yet to announce plans for iPhone and Android cameras. Adobe’s Photoshop and Lightroom support C2PA, but apps like GIMP and Affinity Photo do not. Although Google’s integration could spur broader adoption, most major platforms currently do not display labels for this data.
The rise of AI-generated misinformation, including deepfakes, raises concerns about election manipulation by countries such as China and Russia. Some companies, like Midjourney, have responded by blocking the generation of images of U.S. presidential candidates. Although tech companies and social media apps are adopting C2PA watermarking to label AI-generated media, its effectiveness remains debated due to potential vulnerabilities.