Can Watermarking Effectively Combat the Misuse of AI?


One of the challenges posed by the proliferation of artificial intelligence (AI) is combating the spread of content intended for deception and fraud.

In its recent executive order on AI, the White House directed the US Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content.

The order states:

“Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

But how effective can this be when there are ways for bad actors to circumvent watermarking and authentication techniques? What approach to regulation is needed?

Key Takeaways

  • The White House’s executive order directs the US Department of Commerce to develop guidance for content authentication and watermarking to combat the spread of deceptive AI-generated content.
  • Watermarking and AI-powered detection tools are potentially the best methods for identifying misleading content, but they can be inconsistent and inaccurate, and their use remains voluntary.
  • The Content Credentials project, an open-source protocol developed by the Coalition for Content Provenance and Authenticity, offers a solution using cryptography to encode information about the origins of content, including whether it was generated or altered by AI.
  • Companies are going to face many challenges in managing AI content, including issues related to compliance, copyright infringement, and the need for accurate AI detection tools.
  • The executive order is a good step, but there are still hurdles to overcome in implementing tamper-proof methods.

What Is the Role of Watermarking in Content Authentication?

The emergence of “deepfakes” and generative AI content that aims to mislead poses a significant technical challenge for organizations and regulators.

Watermarking and AI-powered detection tools are the primary ways of identifying this content, but they are inconsistent and can be inaccurate. ChatGPT developer OpenAI closed its AI classifier in July 2023, six months after its launch, “due to its low rate of accuracy”.


OpenAI said the tool was “not fully reliable”: in a test on a set of English texts, the classifier correctly identified only 26% of AI-written text as “likely AI-written” while incorrectly labeling human-written text as AI-written 9% of the time.
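
To see why those numbers made the tool unreliable, consider what they imply for anyone acting on its verdicts. The short sketch below works through the arithmetic, assuming an even mix of AI-written and human-written text (the mix is our assumption, not OpenAI’s):

```python
# A worked example of why a 26% detection rate and a 9% false-positive rate
# make a classifier hard to trust. The two rates come from OpenAI's figures
# cited above; the 50/50 mix of AI and human text is an assumption.
sensitivity = 0.26      # P(flagged | AI-written)
false_positive = 0.09   # P(flagged | human-written)
p_ai = 0.50             # assumed share of AI-written text in the sample

# Precision: of all texts flagged as "likely AI-written", how many are AI?
flagged_ai = sensitivity * p_ai
flagged_human = false_positive * (1 - p_ai)
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"precision: {precision:.0%}")   # ~74%
```

Even in this scenario, roughly a quarter of flagged texts would be false accusations against human authors, while the 26% detection rate means nearly three-quarters of AI-written text would slip through unflagged.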


Most watermarking tools embed an invisible identifier in a piece of content to indicate its origin to a watermark detector. Content authentication tools provide information to the viewer about where a piece of content originated, similar to metadata.
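
To make the embedding mechanism concrete, here is a deliberately naive sketch of the classic least-significant-bit technique for images. It does not reflect any specific vendor’s method; production watermarks are far more robust:

```python
# A minimal sketch of invisible watermarking, assuming an image stored as
# raw 8-bit pixel values. Real schemes are far more sophisticated; this
# only illustrates the embed/detect idea described above.

def embed(pixels: bytes, mark: bytes) -> bytes:
    """Hide each bit of `mark` in the least significant bit of a pixel."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the lowest bit
    return bytes(out)

def detect(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the least significant bits."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

image = bytes(range(256)) * 10          # stand-in for pixel data
marked = embed(image, b"origin:gen-ai")
print(detect(marked, 13))               # b'origin:gen-ai'
```

A single lossy re-save or crop would destroy a scheme this naive, which hints at why building watermarks that survive deliberate tampering is so difficult.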

One potential solution is the Content Credentials project, developed by the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry standards development organization started by Adobe, Arm, Intel, Microsoft, and Truepic.

Content Credentials is an open-source protocol that uses cryptography to encode information about the origins of a piece of content, including whether it was generated or altered by AI. Users can upload any content to see whether it has been tagged and, if so, how it has been changed over time.
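
The cryptographic idea behind such credentials can be sketched in a few lines. The manifest fields and signing flow below are illustrative only and do not follow the actual C2PA specification:

```python
# A hedged sketch of the idea behind Content Credentials: bind a signed
# provenance manifest to a hash of the content. This is NOT the real C2PA
# format; it only shows how signing makes provenance claims tamper-evident.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()

content = b"...image bytes..."          # stand-in for the actual file
manifest = json.dumps({
    "content_sha256": hashlib.sha256(content).hexdigest(),
    "generator": "example-ai-model",    # hypothetical tool name
    "ai_generated": True,
}).encode()

signature = private_key.sign(manifest)

# A verifier holding the creator's public key can confirm the manifest was
# not altered; verify() raises InvalidSignature on any tampering.
private_key.public_key().verify(signature, manifest)
print("manifest verified")
```

Because the signature covers a hash of the content, any later edit changes the hash and breaks verification, making the provenance record tamper-evident rather than merely informative.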

However, as with other forms of watermarking, Content Credentials are opt-in: content creators and editors choose whether to apply them.

“Watermarking is a problematic solution,” Alon Yamin, co-founder and CEO of AI content detection company Copyleaks, told Techopedia.

“What happens with content that is not watermarked? How can you know the exact source of the content? There are so many different types of content – text, images, music, videos – and there is no one solution for all.

“The focus specifically on watermarking is a good one, but it’s not enough. Watermarking is one option, and it’s definitely a step in the right direction. But if you have more sophisticated users who are trying to mask the source of the document and mask plagiarism or copyright infringement, there are ways around it.

“There are other detection capabilities that are relevant here if you want to really create a comprehensive answer to these new challenges.”

The Technical Challenge of Watermarking

Copyleaks focuses on AI content detection for text documents, which are more challenging to watermark in a tamper-proof way than images and videos. “It’s important to have solutions that can detect AI regardless of the watermarking status,” Yamin said.

Copyleaks uses generative AI to analyze text to determine whether it was written by an AI tool, paraphrased, or plagiarized. The company is in discussions with the US government to contribute to its standards and the process of identifying and detecting AI-generated content, Yamin said.
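
Copyleaks does not disclose how its detection works, but one widely published approach scores text by how predictable a language model finds it, on the theory that AI-generated prose tends to be more predictable than human writing. A rough sketch, assuming the Hugging Face transformers library and a purely illustrative threshold:

```python
# A generic illustration of perplexity-based AI-text detection. This is a
# common published technique, not Copyleaks' proprietary method.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Mean per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy per token
    return torch.exp(loss).item()

# Lower perplexity = more predictable = weak evidence of machine authorship.
# The cutoff of 40 is illustrative, not a validated threshold.
score = perplexity("The quick brown fox jumps over the lazy dog.")
print(f"perplexity: {score:.1f}", "(likely AI)" if score < 40 else "(likely human)")
```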

“It is going to be a difficult technical task,” attorney Duane Pozza, a Partner at Wiley Rein, told Techopedia.

“Watermarking is one technical method of content authentication that they’ll be working on.

“It’s an interesting development because it is directing Department of Commerce efforts to develop standards that can be used throughout the government and provide an example to the private sector. It will be a strong example if they come up with a robust standard.”

The White House order directs the National Institute of Standards and Technology (NIST) to set standards for testing the safety of AI models before public release. The institute’s experience in developing technical standards makes it well-placed to lead the government’s efforts in this area, Pozza noted.

Following the executive order’s release, the Office of Management and Budget (OMB) released a draft policy on using AI for governance, innovation, and risk management in government agencies. This will not only provide an example for companies and other organizations to follow but will direct how government agencies procure AI systems – and products that may be affected by them – from the private sector, Pozza said.

The Challenge of Managing AI Content in Enterprises

For companies, content’s veracity extends beyond the source of images shared on social media to issues surrounding compliance and copyright infringement. For instance, software developers that use generative AI chatbots such as ChatGPT to streamline the process of writing and checking code can inadvertently end up using code that is copyrighted or licensed, which they do not have permission to use.
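
One possible mitigation, sketched below, is to screen generated code against a corpus of known-licensed snippets before it is merged. The corpus contents and similarity threshold are hypothetical, and production license scanners are considerably more sophisticated:

```python
# A minimal sketch of flagging AI-generated code that closely resembles
# known-licensed code. The corpus and threshold here are invented for
# illustration; this is not any particular vendor's scanner.
import difflib

licensed_corpus = {
    "gpl_example.c": "int fib(int n) { return n < 2 ? n : fib(n-1) + fib(n-2); }",
}

def flag_similar(generated: str, threshold: float = 0.8) -> list[str]:
    """Return names of licensed snippets the generated code closely matches."""
    return [
        name
        for name, snippet in licensed_corpus.items()
        if difflib.SequenceMatcher(None, generated, snippet).ratio() >= threshold
    ]

suggestion = "int fib(int n) { return n < 2 ? n : fib(n-1) + fib(n-2); }"
print(flag_similar(suggestion))   # ['gpl_example.c'] -> needs license review
```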

Yamin said:

“From a copyright IP perspective, this is like an earthquake. An issue like this with code just didn’t exist before, and companies are just now starting to understand these challenges.”

Companies need to ask several questions surrounding the use of generative AI: where it is currently being used in the organization, where it should be permitted, and whether they have the visibility and capabilities to enforce policy regulating its use.

“You’d be surprised that most companies don’t even know where generative AI is being used,” Yamin said. “This is very problematic, for example, if you have a presentation deck that someone is creating in the company that contains proprietary company information. If this deck was created by AI, it means that information was shared with a third party that might share it with others.”

This was made evident earlier this year when South Korean electronics company Samsung banned employees from using ChatGPT after it found that staff uploaded sensitive source code and internal meeting notes to the chatbot while using it to help them streamline tasks.

Companies including Amazon and JP Morgan, along with other major US banks, have imposed similar restrictions. By default, ChatGPT saves all of its interactions with users and trains its models on the content they input. OpenAI now provides the option to disable this function manually but puts the onus on users to do so before they enter information.

Companies also face challenges in ensuring that the AI detection tools they use are as accurate as possible.

“If the content was created by AI, but we’re categorizing it as human content, this is a mistake, but it’s not a huge mistake. On the other side, if a human created the document and we’re saying that it’s AI — that’s a serious accusation, especially for companies working with governmental agencies,” Yamin said.

Watermarking methods and AI detection models alike will need to be fine-tuned to minimize the chances that they are inaccurate or misapplied.
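
In practice, that tuning often comes down to choosing a decision threshold that caps the costlier error. A minimal sketch with invented detector scores, bounding the false-accusation rate Yamin describes:

```python
# A sketch of asymmetric threshold tuning: falsely accusing a human author
# is the costlier mistake, so cap the false-positive rate first. All
# scores and labels below are made up for illustration.

def pick_threshold(scores, labels, max_fpr=0.01):
    """Lowest cutoff whose false-positive rate stays within budget.
    labels: True if the document is actually AI-written."""
    human = sorted(s for s, ai in zip(scores, labels) if not ai)
    allowed = int(len(human) * max_fpr)        # human docs we may misflag
    return human[len(human) - allowed - 1]     # flag only scores above this

# Illustrative detector scores (higher = more AI-like).
scores = [0.10, 0.22, 0.35, 0.48, 0.55, 0.61, 0.72, 0.81, 0.90, 0.97]
labels = [False, False, False, False, False, True, True, True, True, True]

t = pick_threshold(scores, labels, max_fpr=0.0)
flagged = [s > t for s in scores]
print(f"threshold={t}, flagged={sum(flagged)} of {len(scores)}")
# Caps false accusations at the budget, at the cost of missing some AI text.
```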

The Bottom Line

The White House’s executive order calling for the development of watermarking and other content authentication methods to combat the misuse of AI-generated content is a step towards regulating the fast-moving sector.

However, there are still hurdles to introducing robust, tamper-proof methods, and consumers, companies, and government agencies alike need to understand those limits as they handle content in all its forms.

Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.