Overview
Contact Attorney: Molly Condon Wells
Injury: Non-consensual pornographic deepfake images created through the Grok AI tool
Defendant: xAI, the developers of the Grok AI chatbot
Practice: Survivor Advocacy
Latest Update: January 27, 2026 – Grok faces class action.
A class action suit has been filed on behalf of Jane Doe, a U.S. woman whose photo was transformed into an explicit image by Grok.

Case Team
Principal Attorney: Molly Condon Wells
Supporting Attorney: Alexandrea M. Messner
Paralegals: Mirena Fontana, Morgan Kapping
Legal Assistant: Leena Yaqub
Fighting for the Victims of Non-Consensual Deepfake Images Generated by Grok AI
The Survivor Advocacy team at Wallace Miller is investigating pornographic deepfake images created using a Grok AI tool. Beginning in late 2025, Grok allowed users to create nonconsensual pornographic images of people through its AI image-editing service. These images have proliferated on social media platforms, especially X (formerly Twitter).
Generative AI and non-consensual deepfake images are new areas of law, and relevant regulations are still developing. In this complicated area of legal practice, our firm fights to make sure that victims who have been violated by these edited images have a chance to tell their stories and demand compensation. Our team is dedicated to holding the companies that enabled and encouraged these abuses accountable and to making sure that Grok, xAI, and other platforms understand that this is not acceptable.
Injury
The Grok AI tool allows X users to create explicit AI-generated images
In December 2025, the AI tool Grok rolled out a new chatbot feature that can “undress” people and create explicit deepfake images. When someone uploads a photo on the social media platform X (formerly Twitter), users of the service can tell Grok to put the individual in a bikini or underwear, alter their position, and otherwise edit the picture to make it more sexually explicit. The tool has been used to strip people of their clothes, modify their bodies, place them in degrading positions, add offensive tattoos and other modifications, cover them in sexual fluids, and more.
After the service was launched, X.com was flooded with AI-generated, sexually explicit images of individuals who did not consent to their photos being used in this way. Some of these edited images were of celebrities and public figures, while others were of private citizens, many of whom had no idea that edited photos of them were circulating online. Although these exploitative images were created across gender lines, the majority of these edits targeted young women, and some specifically targeted children.
The Center for Countering Digital Hate (CCDH) calculated that Grok produced 3 million sexualized images just in the 11-day period between December 29, 2025 and January 8, 2026. This number includes about 23,000 explicit images of children. While some of these images have been taken down, many remain on the platform, and X has refused to remove non-consensual materials in some cases.
Plaintiff & Defendant
What is the Grok chatbot?
Grok is a generative artificial intelligence model developed by xAI, a company run by Elon Musk. Integrated into Musk’s X, the chatbot is available to users of the platform and draws some of its information from the platform itself.
The chatbot has drawn controversy for producing biased, incorrect, and offensive content, including posts praising Hitler. In 2025, it introduced a “spicy” mode that allowed users to create sexual images. Now, Grok can be used to easily “undress” people across X without their consent.
Public outcry over non-consensual explicit images
Since Grok launched this feature, millions of adults and children have been victimized through sexually explicit images created without their consent. This violation of privacy, which targets women and children, can cause severe emotional distress and have permanent ramifications for individuals’ lives, relationships, and careers.
Posted publicly, these images are not only humiliating and sexually exploitative, but used as a way to abuse, demean, undermine, and silence people, particularly women. The ease of generating intimate images through Grok allows people to bully others off the internet with the threat of public sexual humiliation.
Landscape
Experts warned Musk about direction of deepfakes
For years, industry experts and watchdog organizations have warned Musk and X about the possibility of non-consensual explicit content created by AI. In 2025, a coalition of child safety groups warned that Grok’s image generation features could be used to generate child sexual abuse material (CSAM).
Despite these warnings, experts say that xAI departed from industry-standard safeguards. Grok allowed abusive images to be included in its training material and failed to ban users who requested illegal content. While “undressing” programs are not new, they have been largely confined to darker corners of the internet. Now, Grok’s service makes it easy to generate public non-consensual images of someone within a few clicks.
Artificial intelligence and civil litigation
The laws and regulations around deepfakes and AI usage are still evolving. While laws establishing penalties for non-consensual internet deepfakes have been passed in recent years, more robust legal protections are needed. It can be difficult to pursue the individual who generated a fake image, and the criminal avenues for seeking justice remain limited.
However, civil cases allow people who have been harmed by the actions of companies like xAI, X, and Grok to pursue financial compensation for the harm those companies caused. Our team believes that Grok should have had better protections in place for the use of this AI tool, and we plan to hold the company accountable for the decisions that led to the violation of millions of women and children.
Timeline
January 27, 2026 – Grok faces class action.
A class action suit has been filed on behalf of Jane Doe, a U.S. woman whose photo was transformed into an explicit image by Grok.
January 26, 2026 – Official EU investigation opened.
The European Union (EU) opens a formal investigation into X under the Digital Services Act, joining regulatory actions around the world.
January 26, 2026 – “Undressing” tool still active on X.
Despite company statements, Grok’s “undressing” tool continues to function on the platform.
January 17, 2026 – Mother of Musk’s son sues over abusive generated images.
Ashley St. Clair, the mother of Musk’s son, sues xAI over explicit images generated by Grok. She alleges that after reporting the images and requesting they be taken down, X stated they didn’t violate its policies and retaliated against her by demonetizing her account.
January 14, 2026 – Grok adds additional technical restrictions to its image editing feature.
January 12, 2026 – The UK announces an investigation into Grok’s generation of sexualized images of children.
January 12, 2026 – Malaysia temporarily blocks access to Grok in the country. Access is restored on January 23.
January 10, 2026 – Indonesia becomes the first country to ban the Grok chatbot over explicit images generated by artificial intelligence.
January 8, 2026 – Grok limits its image generation and editing feature to paying subscribers only, in a move British PM Keir Starmer’s office called “insulting.”
December 29, 2025 – Grok floods X with AI-generated sexually explicit photos of women and minors as the popularity of its image editing feature explodes.
December 2025 – Musk announces a new Grok feature that allows users to edit photos.
November 2023 – xAI launches Grok.
2022 – Elon Musk buys Twitter and renames it X.
Contact
Wallace Miller Survivor Advocacy
Our Survivor Advocacy team was created to speak up for people who have been harmed by influential organizations. We fight for compensation for each individual client and advocate to change the system that allows powerful people to abuse others without repercussions.
The creators of Grok deliberately allowed it to function as a tool of harassment, humiliation, and violation of consent. If you or someone you know has been the victim of an AI-generated explicit image on X, reach out to our team at (331) 425-8022 or via our online case evaluation. In a free and completely confidential consultation, we can discuss your case and help you determine the best path forward.
