Collateral Damage: Grok AI and the Human Cost of Generative AI

Since the early days of consumer AI, there have been warnings that generative AI would collide with fundamental rights.

The Grok AI deepfake scandal may prove to be the canary in the proverbial coal mine, showing what happens when those warnings are ignored.

Image: inside the frame of a mobile phone screen, a woman walks and looks over her shoulder; a hand taps the record button, and the same woman appears three more times against a plain background.

Reihaneh Golpayegani / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

The Grok AI EU scandal began in January 2026 after users discovered that the xAI chatbot, Grok, could generate non-consensual sexualised images of real people — including women, celebrities, politicians, and reportedly minors — using ordinary photos posted online.

The images spread rapidly across X (formerly Twitter), triggering outrage from people, governments and regulators across Europe and beyond.

The European Commission launched investigations, while Ireland’s Data Protection Commission separately opened a GDPR inquiry into how personal data was being processed.

The scandal has become an important test of how existing regulation, and the regulators who enforce it, can respond to the real-world harms of generative AI.

Data Protection Implications

This became a significant moment in AI governance because it changed how regulators framed the problem.

European authorities argued that Grok’s outputs were not merely offensive or harmful content requiring moderation (although we cannot ignore that they deeply affected people), but potentially unlawful processing of personal and biometric data under the GDPR.

Regulators focused on the fact that Grok could generate sexualised or ‘nudified’ images of identifiable people using ordinary photographs scraped or uploaded online, often without consent.

Italy’s privacy watchdog warned that these practices could amount to serious GDPR violations and even criminal offences, especially where minors were involved. In December 2025, the Italian data protection authority adopted measures around deepfakes, stating that it is:

“necessary not only to verify the existence of a legal basis pursuant to art. 6 GDPR but also one of the conditions indicated by art. 9.2 GDPR.”

Ireland’s Data Protection Commission, the EU’s lead regulator for X, launched a formal investigation into whether xAI had lawfully processed personal data and whether sufficient safeguards had been built into the system to prevent foreseeable harms.

The purpose of the inquiry is to determine whether XIUC (X) has complied with its obligations under the GDPR, including its obligations under Article 5 (principles of processing), Article 6 (lawfulness of processing), Article 25 (Data Protection by Design and by Default) and Article 35 (requirement to carry out a Data Protection Impact Assessment) with regard to the personal data of EU/EEA data subjects that it processed.

The UK ICO similarly stated that the case raised serious questions about whether data processed by Grok complied with Article 5(1)(a), which requires that personal data be processed lawfully, fairly and transparently, and whether X had considered the risks to people’s data and the safeguards needed to protect it. The right to control one’s information and how it is disseminated and used is an important part of the right to privacy.

The investigations into Grok are an important test of whether existing European privacy and digital rights laws can meaningfully constrain generative AI platforms when they infringe privacy and cause harm.

The human consequence

Reuters reported that governments in the UK, France, India, Indonesia, Malaysia, Japan, and the Philippines either launched investigations, issued takedown demands, or temporarily blocked access to Grok entirely. The European Commission then escalated the matter further by opening a Digital Services Act (DSA) investigation into X, arguing the company may have failed to conduct proper risk assessments before rolling out Grok’s image-generation features in Europe.

Henna Virkkunen, Executive Vice President of the European Commission, suggested X may have treated the rights of women and children as ‘collateral damage’ in its rapid deployment of AI tools. The controversy also triggered wider political debate over whether AI companies should face direct liability for foreseeable misuse of their systems, with the UK moving to criminalise certain forms of AI-generated intimate imagery and considering bans on ‘nudify’ applications altogether.

What we want to see

Generative AI is still a relatively novel phenomenon and has had limited testing against existing data protection frameworks. At times, those frameworks may need to adapt to remain relevant and applicable to real-world scenarios.

These frameworks play an important role in regulating AI. In countries without an AI-specific law, data protection legislation is often the only legislative measure in place to constrain it. The investigations into Grok AI will be an important test of whether these laws can effectively guard against the harms posed by generative AI.

We hope they are up to the task.