OpenAI’s ChatGPT may be breaching the GDPR because it sometimes fails to provide accurate information about individuals.
The privacy organization Noyb raised this issue in a complaint to the Austrian Data Protection Authority (DPA).
The case highlights serious concerns about how artificial intelligence (AI) affects personal data protection.
> 🚨 noyb has filed a complaint against the ChatGPT creator OpenAI
>
> OpenAI openly admits that it is unable to correct false information about people on ChatGPT. The company cannot even say where the data comes from.
>
> Read all about it here 👇 https://t.co/gvn9CnGKOb
>
> — noyb (@NOYBeu) April 29, 2024
The complaint against OpenAI centers on GDPR compliance, especially data accuracy and the rights of individuals. The GDPR is a cornerstone of EU privacy and human rights law and demands high standards for handling personal data, which must be:
- Accurate: All personal data processed must be kept accurate and up to date.
- Lawful and transparent: The handling of data should be lawful, fair, and clear to the person it concerns.
OpenAI has admitted that it struggles to ensure ChatGPT meets these requirements. The AI system, trained on vast amounts of data, can produce inaccurate or fabricated information, often called “hallucinations.” This conflicts with the GDPR’s accuracy requirement, and it is especially troubling when the output concerns personal data.
The complaint by Noyb to the Austrian DPA emphasizes these issues:
- Inaccuracies in AI outputs: OpenAI’s system can create and retain incorrect data about people, violating the GDPR’s accuracy principle (a simple accuracy audit along these lines is sketched after this list).
- Lack of transparency: It’s also often unclear how ChatGPT generates certain information.
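To make the accuracy concern concrete, here is a minimal sketch of how one might audit a model’s statements about a data subject against a verified record. The record, the prompt, and the model name are hypothetical assumptions for illustration only; this is not Noyb’s actual test procedure. It assumes the official `openai` Python SDK with an API key in the environment.

```python
# A minimal, illustrative accuracy audit: ask the model a factual question
# about a data subject and compare the answer with a verified record.
# NOTE: the record, the prompt, and the model name ("gpt-4o-mini") are
# assumptions for illustration; this is not Noyb's actual test procedure.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

verified_record = {"name": "Jane Doe", "date_of_birth": "1970-01-01"}  # hypothetical

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"What is the date of birth of {verified_record['name']}? "
                   "Answer in ISO format (YYYY-MM-DD) or say 'unknown'.",
    }],
)
answer = (response.choices[0].message.content or "").strip()

# Under Article 5(1)(d) GDPR, personal data must be accurate; a confident
# but wrong answer here is exactly the kind of output the complaint targets.
if answer not in (verified_record["date_of_birth"], "unknown"):
    print(f"Potential accuracy issue: model said {answer!r}, "
          f"record says {verified_record['date_of_birth']!r}")
else:
    print("No discrepancy detected for this query.")
```

Run systematically across many subjects and questions, checks of this kind are roughly what an auditor would need to demonstrate an accuracy violation at scale.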
This situation reflects a wider challenge: as AI tools like ChatGPT increasingly process personal data, they must comply strictly with legal standards such as the GDPR. That necessity grows more pressing as AI’s impact on personal rights and freedoms increases.
The complaint challenges OpenAI’s adherence to the GDPR and underscores the need for the AI industry to rethink, and possibly redesign, its technologies so that they comply with data protection law, both legally and ethically.
AI Regulatory Oversight
European DPAs have taken significant actions in response to the challenges posed by AI technologies like ChatGPT, reflecting serious concerns about privacy and data protection compliance. A prominent example is the action taken by the Italian DPA after a detailed investigation.
Key points from the regulatory action include:
- Temporary restrictions: The Italian authority, the Garante, placed a temporary ban on ChatGPT’s data processing in Italy, citing among other concerns the AI’s tendency to produce inaccurate data.
- Legal basis for data processing: A key issue from the investigation is the unclear legal ground OpenAI relied on for collecting and using personal data to train its AI models. OpenAI initially claimed “performance of a contract” as its basis, which the Garante rejected. The focus is now on whether the alternatives, consent or legitimate interests, can be justified: consent requires clear approval from individuals, while legitimate interests require balancing the company’s interests against individual rights and freedoms.
- Potential for broad implications: If the Garante decides that legitimate interests are not a sufficient basis for such extensive data processing, this could set a precedent affecting not just OpenAI but other tech firms using similar AI technologies. The stakes are heightened by past rulings of the EU’s highest court on comparable questions, such as those concerning Meta’s data practices.
The implications of these regulatory measures are significant and suggest potential future directions for AI development and regulatory frameworks:
- Increased scrutiny: AI developers might face a more rigorous examination of how their systems manage personal data, focusing on mechanisms for correcting inaccuracies and enhancing user transparency.
- Guideline adjustments: There could be a shift towards more specific guidelines addressing AI’s unique challenges, such as generating false information.
- Innovation in compliance tools: The industry may develop tools that help ensure compliance, such as better data provenance tracking and automatic error correction mechanisms (a minimal provenance-tracking sketch follows this list).
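As one illustration of what such tooling could look like, here is a toy provenance ledger that ties every stored statement about a person to its source, so a controller could answer the “where does this data come from?” question and honor rectification requests. The class and field names are invented for this sketch and do not describe any vendor’s actual system (requires Python 3.9+).

```python
# A toy provenance ledger: each stored statement about a person keeps a
# pointer to its source, so a controller can answer "where does this data
# come from?" and honor rectification (Art. 16 GDPR) requests.
# All names and the in-memory design are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Provenance:
    source_url: str      # where the statement was collected
    collected_at: str    # ISO timestamp of collection

@dataclass
class PersonalDataRecord:
    subject: str         # the data subject the statement concerns
    statement: str       # the stored assertion
    provenance: Provenance

@dataclass
class ProvenanceLedger:
    records: list[PersonalDataRecord] = field(default_factory=list)

    def add(self, record: PersonalDataRecord) -> None:
        self.records.append(record)

    def trace(self, subject: str) -> list[PersonalDataRecord]:
        """Answer a transparency request: every stored statement plus its source."""
        return [r for r in self.records if r.subject == subject]

    def rectify(self, subject: str, old: str, new: str) -> int:
        """Correct inaccurate statements; returns how many were changed."""
        changed = 0
        for r in self.records:
            if r.subject == subject and r.statement == old:
                r.statement = new
                changed += 1
        return changed

# Example usage with hypothetical data:
ledger = ProvenanceLedger()
ledger.add(PersonalDataRecord(
    subject="Jane Doe",
    statement="born 1970-01-01",
    provenance=Provenance("https://example.org/profile", "2024-04-29T00:00:00Z"),
))
print(ledger.trace("Jane Doe"))                          # transparency request
ledger.rectify("Jane Doe", "born 1970-01-01", "born 1971-01-01")  # rectification
```

The point of the sketch is the design constraint, not the implementation: keeping source metadata alongside every personal-data assertion is what makes transparency and correction requests answerable at all, which is precisely the capability the complaint says OpenAI lacks.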
These developments indicate a growing recognition among regulators that legal frameworks must be updated and refined to manage the integration of AI technologies into society. This is essential for maintaining public trust and ensuring that AI advances benefit society without jeopardizing individual rights.