Microsoft and OpenAI have launched a $2 million Societal Resilience Fund that will help fight the rise of AI deepfakes.
The initiative is meant to educate voters and vulnerable groups about both the capabilities of AI and the risks deepfakes pose. The project will ideally raise awareness and prevent attempts to “undermine democracy,” Microsoft said.
The two companies have already outlined how some of their partners will use their grants. AARP will educate adults over 50 about the “foundational aspects” of AI and the changing technology landscape. The International Institute for Democracy and Electoral Assistance (International IDEA), meanwhile, will train election agencies, the media, and others to deal with AI and deepfakes.
The Coalition for Content Provenance and Authenticity (C2PA), which maintains a standard for tracing content sources, will run a campaign promoting authenticity tools like watermarking. OpenAI is simultaneously joining the C2PA’s steering committee to help shape the standard.
Partnership on AI (PAI) will use its share of the fund to expand its Synthetic Media Framework, which promotes transparency around generative technologies.
OpenAI is taking extra steps beyond its C2PA membership. It’s now taking early applications for access to a tool that detects images created by DALL-E 3. Researchers, journalism nonprofits, and others in the first wave of testing can use a classifier that predicts whether a given picture was generated by DALL-E 3. The current technology correctly spotted 98% of DALL-E 3 images in a test, but the company wants to better handle image modifications that can thwart checks.
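To illustrate how a detection classifier like this is typically consumed, here is a minimal, hypothetical sketch. OpenAI has not published an API for its tool, so the `score_image` function, its marker-based heuristic, and the 0.5 threshold below are all stand-ins for illustration only:

```python
# Hypothetical sketch of consuming a provenance classifier.
# `score_image` is a toy stand-in, NOT OpenAI's actual detector.

def score_image(image_bytes: bytes) -> float:
    """Return a fake confidence score that the image came from DALL-E 3.

    A real classifier would run a trained model over the pixels;
    this toy version just checks for a marker so the sketch runs.
    """
    return 0.98 if b"DALLE3" in image_bytes else 0.02

def classify(image_bytes: bytes, threshold: float = 0.5) -> str:
    """Map the classifier's confidence score to a human-readable verdict."""
    score = score_image(image_bytes)
    return "likely DALL-E 3" if score >= threshold else "likely not DALL-E 3"

print(classify(b"...DALLE3..."))  # likely DALL-E 3
print(classify(b"camera photo"))  # likely not DALL-E 3
```

The key point the sketch captures is that such tools output a likelihood, not a certainty, which is why OpenAI reports accuracy figures (98% in its test) rather than guarantees.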
The firm already adds C2PA metadata to images produced by DALL-E 3, and plans to do the same for videos from its Sora generator when that tool becomes widely available.
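C2PA provenance data is embedded in the media file itself (in JPEGs, as a JUMBF manifest store whose box carries the ASCII label "c2pa"). A crude sketch of spotting that marker in raw file bytes follows; note that a byte scan only hints that a manifest may be present, and real verification should use a proper C2PA library that parses the manifest and validates its signatures:

```python
# Crude heuristic: look for the "c2pa" label that appears in a C2PA
# JUMBF manifest store. This does NOT parse the manifest or validate
# its cryptographic signatures; it only suggests a manifest exists.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the raw bytes contain the 'c2pa' manifest label."""
    return b"c2pa" in data

# Illustrative byte strings (not real image files):
plain = b"\xff\xd8\xff\xe0JFIF..."            # JPEG-style header, no manifest
signed = plain + b"jumbc2pa\x00manifest"      # same bytes with an embedded label
print(has_c2pa_marker(plain))   # False
print(has_c2pa_marker(signed))  # True
```

This also illustrates the standard's main weakness, which the article alludes to: metadata lives in the file, so edits or re-encodes that strip it can defeat naive checks.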
The fund and OpenAI’s tools could prove important this year. Major elections are being held in the U.S. and other key countries, and there’s concern that nations like China and Russia will use AI and deepfakes to spread disinformation. Greater public literacy could, in theory, blunt these vote manipulation attempts.