Adobe Users Fume Over Updated Terms of Use Granting Access to Content

Key Takeaways

  • Users have blasted Adobe over its newly enforced terms of use.
  • They're concerned the terms might give Adobe control over their content, including for AI training.
  • Adobe says it only has access for the sake of functionality.

Outrage has erupted among Adobe’s user base following the company’s recent update to its terms of use.

The new terms, which were quietly rolled out in February but only enforced in the past few days, seemingly granted Adobe sweeping rights to access and analyze users’ content, including projects covered by non-disclosure agreements (NDAs).

The clause in dispute states that Adobe can “access your content through both automated and manual methods,” defining “content” as any text, audio, video, images, or other materials created using Adobe’s services and software. Users also believe the terms give Adobe the explicit right to employ techniques like machine learning to analyze this user-generated content, fueling speculation that the company intends to use it to train its AI models.

Adobe’s move to lock popular applications like Photoshop and Substance 3D until users consent to the new terms has only fanned the flames of discontent. While users can technically opt out of content analysis, the terms reserve Adobe’s right to override such preferences in “certain limited circumstances.”

Some users have already taken to X to share their discomfort over the policy, with one calling on users to cancel their subscriptions until Adobe changes the terms of use.

When reached for comment by Techopedia, an Adobe spokesperson pointed to a blog post clarifying the terms. The company said it only needs to access content to operate its software and services and to comply with the law. It stressed that it doesn’t train its Firefly generative AI models on user content and doesn’t claim ownership of users’ work.

This incident is the latest in a series of public relations crises faced by major software-as-a-service (SaaS) providers over forced term updates, often driven by the integration of generative AI capabilities. Microsoft, Slack, and Dropbox have all faced backlash for similar moves in recent months.

In one of its latest warnings to businesses and AI developers, the US Federal Trade Commission (FTC) cautioned that quietly “changing your terms of service could be unfair or deceptive.” The FTC also noted that sharing consumers’ data with third parties or AI training systems, and only informing consumers of the change through a retroactive amendment to the terms of service, could attract sanctions.

While it’s clear that data is crucial for any company looking to add AI to its product, companies must also find a way to protect users’ privacy through transparent practices and clearly established policies.