Earlier this month, Italy’s Data Protection Authority (the “Garante”) fined Luka Inc., the US company behind the Replika AI chatbot, €5 million (about $5.6 million) for serious data privacy violations.
Replika is a popular app that lets users create a personalized AI “friend” or avatar to chat with for emotional support.
Is such human-machine interaction harmless, or does it pose serious data privacy risks?
Key Takeaways
- Italy’s €5M fine against Replika signals growing regulatory scrutiny over AI data practices.
- Key AI privacy concerns in 2025 include data misuse, profiling, surveillance, and lack of transparency.
- The EU is enforcing the GDPR today and phasing in the AI Act to target high-risk and non-compliant AI systems.
- US regulators such as the FTC are warning AI firms against misusing personal data, but there is still no comprehensive federal privacy law.
- Companies must bake in privacy protections from the start or risk fines, bans, and reputational damage.
What Did Replika Do Wrong?
The Italian authorities found multiple GDPR violations. First, Replika “lacked a legal basis” for processing users’ personal data. In other words, the company hadn’t identified a valid, lawful ground (such as user consent or contractual necessity) to justify collecting and using people’s chat data. The Garante also noted Replika’s privacy policy was woefully inadequate and failed to clearly inform users about data practices.
Second, Replika had no age verification in place, meaning children could access the AI chatbot despite the company’s claims that minors were not allowed. This absence of age checks “placed children at particular risk,” the regulator emphasized, violating both Italy’s rules and the EU’s GDPR. Even after Replika later added an age-gating mechanism, officials found it still deficient in many respects.
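For illustration only, the snippet below is a minimal sketch of what an age gate might look like in principle. The function names, the 18+ threshold, and the reliance on a self-declared birth date are assumptions made for this sketch, not a description of Replika's actual system – and self-declaration is exactly the kind of weak check regulators tend to find insufficient.

```python
from datetime import date

MINIMUM_AGE = 18  # assumed threshold for an adults-only service


def is_old_enough(birth_date: date, minimum_age: int = MINIMUM_AGE) -> bool:
    """Return True if the user has reached the minimum age as of today."""
    today = date.today()
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    age = today.year - birth_date.year - (0 if had_birthday else 1)
    return age >= minimum_age


def register_user(birth_date: date) -> None:
    """Refuse to create an account (and collect any data) if the age check fails."""
    if not is_old_enough(birth_date):
        raise PermissionError("Registration blocked: user is under the minimum age.")
    # ... proceed with account creation only after the gate passes ...
```

An age gate is only as good as the verification behind it, which is why the Garante still judged Replika's later mechanism deficient.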
As a result of these findings, Italy not only levied the €5M fine but also ordered Luka Inc. to bring its AI data processing into compliance with the law.
Notably, the Garante launched a separate investigation into Replika’s underlying AI model training. This probe will examine whether Replika’s use of personal data to train its generative language model complies with EU privacy requirements.
Key AI Privacy Concerns in 2025
Replika’s violations are just a small part of the bigger picture in 2025. Regulators and users alike are raising concerns on several fronts:
- Data misuse and excessive collection: AI systems feed on massive datasets – often personal or sensitive information – to learn and improve. This raises alarms about data being collected or used without proper consent or beyond what users expect. Many AI platforms have been caught quietly scraping personal data or repurposing it in ways users never agreed to (a minimal sketch after this list illustrates the idea of a purpose-limitation check).
- Surveillance and biometric data abuse: Advanced AI-powered surveillance tools are becoming more widespread, from facial recognition cameras to emotion-tracking algorithms. These systems often rely on biometric data (faces, fingerprints, voice, etc.) that, if misused or breached, can cause serious harm – you can’t change your face like you can a password. There is growing concern that AI is enabling a new wave of mass surveillance, sometimes operating without people’s knowledge or consent.
- Profiling and bias: AI algorithms can rapidly analyze and categorize people, for instance, assessing someone’s personality from social media or evaluating job applicants with AI. This automated profiling can be highly intrusive, especially if done opaquely, and can entrench bias when the underlying data or models are skewed.
- Lack of transparency (“black box” AI): Many AI models operate as black boxes. This means they process vast amounts of data and make decisions or predictions in ways that are not transparent to users or even developers. This opacity makes it hard for individuals to know what data was used and how decisions are made, undermining accountability.
- Data security and leaks: AI’s hunger for data expands the attack surface for breaches. Large training datasets are attractive targets for hackers, and AI models can memorize – and later expose – sensitive information from their training data.
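As a hypothetical illustration of the purpose-limitation idea flagged in the first bullet above, the sketch below only admits a chat message into a training dataset when the user has explicitly consented to that specific use. The names here (ConsentRecord, can_use_for_training) are illustrative assumptions, not any vendor's real API.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """Purposes a user has explicitly agreed to, e.g. {"service_delivery", "model_training"}."""
    purposes: set = field(default_factory=set)


def can_use_for_training(consent: ConsentRecord) -> bool:
    """Purpose limitation: chat data may feed model training only with explicit consent."""
    return "model_training" in consent.purposes


def collect_training_example(message: str, consent: ConsentRecord, dataset: list) -> None:
    """Add a message to the training set only if the user opted in to that specific purpose."""
    if can_use_for_training(consent):
        dataset.append(message)
    # Otherwise the message is used only to serve the conversation, not retained for training.
```

The design point is that consent is checked per purpose: agreeing to use a service is not the same as agreeing to have one's conversations retained for model training.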
In short, the explosive growth of AI is spotlighting longstanding privacy issues, often magnifying them.
As Daniel J. Solove, renowned professor at George Washington University Law School, noted, AI isn’t introducing entirely new privacy problems so much as supercharging existing ones:
“AI starkly highlights the deep-rooted flaws and inadequacies in current privacy laws.”
Europe’s Privacy Regulations: GDPR Enforcement and the Coming AI Act
Europe has been at the forefront of confronting AI-driven privacy risks, and the Replika fine fits into a broader EU trend of aggressive enforcement. Under the EU’s flagship General Data Protection Regulation (GDPR), data regulators have made clear that AI systems must follow the same rules as everyone else.
Italy’s Garante is known as one of the EU’s most proactive watchdogs on tech – it even briefly banned ChatGPT in 2023 over GDPR violations and, after the service was reinstated, later hit OpenAI with a €15 million fine.
In Replika’s case, the Garante likewise showed that “AI cannot be above the law.” The lack of consent, transparency, or protection for children’s data was met with a tough penalty. Other European authorities have also launched inquiries into generative AI services. The message from the EU is that if an AI mishandles personal information, GDPR can and will be enforced.
Alongside enforcement of existing law, Europe has also adopted the Artificial Intelligence Act (AI Act) – a sweeping regulation specifically for AI.
The AI Act imposes a risk-based framework on AI developers and deployers. It outright bans certain “unacceptable” AI practices, like systems that involve social scoring (ranking people’s trustworthiness) or indiscriminate mass surveillance tech, as these are deemed to violate fundamental rights.
For other AI uses classified as “high-risk” (say, AI in job hiring, credit scoring, or medical devices), the law mandates strict requirements for transparency, human oversight, and rigorous risk assessments before deployment.
Developers of general-purpose AI models (such as the large language models behind chatbots) will also have to meet new compliance standards phased in through 2027, covering data governance, documentation of capabilities and limitations, and safeguards against misuse.
The Replika fine underscores why EU lawmakers are pushing these new rules. It’s a case of a feel-good AI app turning into a privacy headache due to basic GDPR failures. European regulators want to prevent such scenarios on a larger scale as AI adoption accelerates.
AI Privacy in the US & Worldwide
Outside of Europe, the conversation around AI and privacy is likewise intensifying, though the regulatory response has been slower and patchier.
The US: Caught Between Innovation & Privacy Gaps
In the United States, there is currently no single, comprehensive federal data privacy law, let alone an AI-specific statute comparable to the EU’s AI Act. Instead, US authorities are trying to apply existing consumer protection and privacy rules to AI.
The chief watchdog here is the Federal Trade Commission (FTC), which has openly warned AI companies that they must uphold privacy commitments or face consequences.
The FTC has already taken action against firms that misused personal data for AI development. For example, it penalized a company that had secretly used people’s photos to train face recognition AI despite promising not to – and even forced the deletion of algorithms derived from that ill-gotten data.
In early 2024, FTC Chair Lina Khan cautioned tech firms that using customers’ data to build AI models without proper notice and consent is unacceptable. Companies, she said, need to tell consumers and get permission if they repurpose data collected for one purpose (say, social media posts or chat logs) to fuel AI training for a different purpose. Critically, Khan hammered home that “firms cannot use claims of innovation as cover for law breaking” – a pointed reference to some tech companies’ tendency to push the envelope and ask forgiveness later in the name of AI advancement.
That said, US regulators are also grappling with how to foster AI innovation without trampling privacy. There’s a bit of a policy split: officials like Khan advocate strict enforcement of data protections in AI, whereas others caution against stifling development.
In April 2025, an FTC commissioner, Melissa Holyoak, remarked that requiring explicit user consent for all AI data uses could “hamper” smaller companies and that the agency should avoid overregulation that might hinder competition.
This reflects a broader debate in the US: how to encourage AI-driven economic growth while avoiding a privacy free-for-all.
So far, the US approach relies on patchwork measures – enforcing truth-in-advertising and data security laws, updating sector-specific rules (for instance, clarifying that health privacy law covers AI health apps), and issuing AI ethics frameworks as guidance.
Global Efforts: A Patchwork of AI Privacy Rules
Globally, there’s a growing acknowledgment that AI and privacy must be addressed hand-in-hand.
- Canada, for example, has been working on an Artificial Intelligence and Data Act as part of a bill updating its privacy laws, aiming to impose requirements on AI systems to prevent discriminatory or privacy-invasive outcomes.
- China introduced regulations for generative AI in 2023 that mandate measures such as training-data labeling and security reviews.
- Other countries, such as Japan, India, and Brazil, are exploring their own AI guidelines or principles.
- International bodies are chiming in: UNESCO issued global AI Ethics recommendations, and the OECD countries agreed on AI principles that include respect for privacy and human rights.
The challenge is that approaches vary widely. This patchwork raises concerns about regulatory gaps and uneven protections.
Nevertheless, the fact that Italy’s action against Replika made headlines worldwide shows that AI privacy issues are truly global.
Whether it’s a European user worried about an American chatbot or an Asian government worried about face-scanning AI in public, the underlying tension is the same: AI technology is racing ahead, and rules to govern its use of data are struggling to keep up.
The Bottom Line
The year 2025 finds society at a crossroads: we have AI systems being deployed at breakneck speed – from chatbots like Replika to AI tools in finance, healthcare, and beyond – yet our data protection frameworks are straining to catch up. The clash between rapid AI deployment and lagging privacy safeguards is becoming more evident.
Tech companies often argue that innovation should not be stifled, but regulators and privacy advocates respond that fundamental rights can’t be an afterthought. The Replika case encapsulates this tension. The app offering emotional AI companionship was undoubtedly innovative, but it rolled out without basic privacy guardrails like a lawful basis for data use or a way to keep kids safe.
To authorities like the Garante, this was unacceptable regardless of how novel the service was. Privacy by design is increasingly seen as not just a nice-to-have, but a necessity if AI is to be sustainable.
The hope among many experts is that by enshrining privacy and human-centric design in AI development, we can enjoy the benefits of these powerful tools while keeping our fundamental rights intact – a balance that will be crucial as we move into the next stage of AI development.
FAQs
Why did Italy’s data watchdog fine Replika’s developer $5.6 million?
How did Replika violate EU privacy standards according to regulators?
What specific data privacy issues led to the Italian fine on Replika?
What are the biggest AI privacy concerns in 2025?
Is there any global consensus on regulating AI and data privacy?
References
- Artificial Intelligence and Privacy by Daniel J. Solove (SSRN)
- General Data Protection Regulation (GDPR) – Legal Text (gdpr-info.eu)
- EU Artificial Intelligence Act – Up-to-date developments and analyses of the EU AI Act (artificialintelligenceact.eu)
- AI Companies: Uphold Your Privacy and Confidentiality Commitments (FTC)
- A few key principles: An excerpt from Chair Khan’s remarks at the January Tech Summit on AI (FTC)
- FTC’s Holyoak says agency will avoid ‘excessive regulation’ of AI development (The Record)
- The Artificial Intelligence and Data Act (AIDA) – Companion document (ised-isde.canada.ca)
- China’s AI Regulations and How They Get Made (Carnegie Endowment)
- Recommendation on the Ethics of Artificial Intelligence (UNESCO)
- AI Principles (OECD)