European Union regulators have launched an investigation into Google’s advanced AI model, PaLM2, over concerns related to data privacy.
The inquiry, announced by Ireland’s Data Protection Commission, comes as part of a broader effort by EU regulators to scrutinize how large AI systems handle personal data.
Google’s PaLM2 Faces Privacy Scrutiny Amid Growing Regulatory Pressure on AI Systems
According to a press release, Ireland’s Data Protection Commission, which oversees Google’s GDPR compliance because the company’s European headquarters is in Dublin, is investigating the PaLM2 model.
The inquiry focuses on whether Google properly evaluated the risks involved in PaLM2’s data processing.
The investigation will determine if Google considered the potential impact on the rights and freedoms of EU citizens, which is a core requirement under GDPR.
PaLM2, a large language model that powers services like Google’s AI-based email summarization, relies on vast amounts of data for its functionality.
Google announces PaLM 2, with improved multilingual, reasoning, and coding capabilities. PaLM 2 features improved “multilinguality” as it has been “more heavily trained on multilingual text” across over 100 languages. This results in a “significantly improved” ability to… pic.twitter.com/l0xHlyLhh1
— AK (@_akhaliq) May 10, 2023
The investigation raises questions about how personal data is processed and whether it poses significant risks to users’ privacy.
Google has yet to respond to requests for comment on the investigation, but the inquiry highlights ongoing concerns about how AI models handle sensitive data.
This is not the first time AI models have come under regulatory scrutiny in Europe.
Recently, Ireland’s Data Protection Commission forced Elon Musk’s X platform (formerly Twitter) to halt the processing of user data for its AI chatbot, Grok.
The social media company only complied after the commission took legal action in an Irish court.
Similarly, Meta Platforms has also paused plans to use European user content to train its AI systems following pressure from the same regulatory body.
Additionally, Italy’s data privacy regulator temporarily banned OpenAI’s ChatGPT last year over privacy concerns, only lifting the ban after the company met specific data protection requirements.