X Trains Grok on Your Tweets: How to Disable It

Key Takeaways

  • X is training its AI chatbot Grok on user data by default without explicit consent.
  • The setting to disable Grok AI training is hidden and only available on the web version of X, not the mobile app.
  • Tech companies are increasingly using customer data to train AI models without proper consent.

X has quietly introduced a new setting that allows the platform to use your data for training its AI chatbot, Grok.

The setting, which permits X to use users’ posts, interactions with Grok, and other data for AI training, was first shared on X by a user in the early hours of July 26.


According to the X user EasyBakedOven, this setting is not accessible through the mobile app, meaning users can only turn it off on the web version of X.

While a help center page on X explains the feature and how users can opt out, the move has left many X users worried, as the setting is enabled by default without their consent or a formal company announcement.

How to Stop Grok From Using Your X Posts for AI Training

To disable this data-sharing feature, users should follow these steps:

  1. Log in to X.com via a web browser
  2. Click on the settings icon
  3. Navigate to “Settings & Privacy”
  4. Select “Privacy & Safety”
  5. Scroll to the bottom and click on “Grok”
  6. Uncheck the box that allows data usage for training
How to switch off Grok AI data sharing | Source: Twitter

What Is Grok?

Grok, launched by Elon Musk’s AI company xAI in late 2023, is Musk’s answer to popular AI chatbots like ChatGPT, Claude, and Google Gemini. Despite Musk’s efforts to improve Grok with the release of Grok-1.5, it has yet to pose real competition to other leading models. Earlier in the year, Musk made Grok an open-source AI model in a bid to raise its profile.

The decision to give Grok access to X users’ data by default echoes a controversy that surrounded Slack last May. The workplace communication platform faced criticism for using customer data to train its AI models by default, without obtaining explicit user consent. The practice raised concerns about data privacy and the ethical implications of using sensitive workplace communications for AI training.

While the backlash forced Slack to reconsider its approach and provide clearer opt-out options for users, this growing trend of opting users in to data collection by default highlights the insatiable appetite of AI models for training data.

It also points to the ongoing tension between AI innovation and user privacy, raising questions about the extent to which companies should be allowed to use personal data for AI development without the consent of their users.