Grok, the AI tool integrated into Elon Musk’s X platform, has come under renewed global scrutiny after users began creating sexually explicit, nonconsensual images from ordinary photos posted online.
The abuse first emerged after Grok Imagine launched in August 2025 and escalated sharply in the final days of December, surging on New Year’s Eve as manipulated images spread widely across the platform. The scale of the misuse has fueled public anger globally and renewed concerns over platform safety, user protection, and accountability.
The latest backlash builds on criticism from earlier in 2025 surrounding Grok Imagine and its “spicy” mode. Critics have pointed out that the mode was explicitly designed to generate suggestive or explicit visuals, arguing that the harms stem from weak safeguards and limited age verification rather than from accidental misuse.
Grok previously faced regulatory action in Türkiye in 2025, and the renewed wave of complaints has reopened debate over digital privacy and consent.
The current controversy traces back to August 2025, when xAI rolled out Grok Imagine, an image and video generation tool tied to its chatbot on X.
The feature introduced several preset modes for visual generation, including an option labeled “spicy,” which allowed sexually suggestive and explicit outputs.
Access to Grok Imagine required a paid subscription priced at around $30 per month, which let users upload an existing image and turn it into short videos or altered visuals. From the outset, the “spicy” mode drew attention because it placed fewer restrictions on nudity and sexual content than comparable tools from other major AI developers, as reported by Sify and The Verge.
Concerns escalated after testing by journalists showed that the system could generate explicit images of real people with minimal prompting. In one widely cited example, a reporter from The Verge found that selecting the “spicy” preset alone was enough to produce uncensored sexualized images of celebrities, even without directly asking the system to remove clothing.
Another major point of criticism centered on age checks and safeguards. Grok Imagine required users only to enter a date of birth, without further verification, a system that fell short of stricter age verification rules introduced in parts of Europe in mid-2025. Despite xAI’s acceptable use policy banning pornographic depictions of real people, these restrictions were not consistently enforced in practice.
At the time of launch, Elon Musk publicly promoted the rapid adoption of the tool, saying more than 34 million images were generated within the first 48 hours, as reported by The Verge. Critics said that scale, combined with the deliberate design of a mode aimed at explicit content, suggested that the risks were built into the product rather than emerging from isolated misuse.
By the end of December 2025, concerns around Grok’s image tools had shifted sharply from celebrity deepfakes to the targeting of ordinary users.
In the final days of the year, users on X began sharing examples of Grok being used to alter photos of women and children into sexually explicit images without consent.
The trend accelerated on New Year’s Eve and spread rapidly across the platform. Users issued direct prompts instructing Grok to modify everyday photos into explicit or sexualized content, which was then circulated widely. Victims had no involvement in the process and, in many cases, were unaware their images were being manipulated until the content began circulating publicly.
Cyber safety experts and women’s rights advocates described the practice as a form of AI-enabled sexual abuse rather than online trolling.
“Why are we asking or expecting victims to be careful at all?” cybersecurity expert Ritesh Bhatia asked CNBC-TV18, arguing that responsibility lies with the platform rather than the victim: “This isn’t about caution; it’s about accountability. When a platform like Grok even allows such prompts to be executed, the responsibility squarely lies with the intermediary.”
Legal experts also warned that the misuse carries serious consequences. Cyber law specialist Adv. Prashant Mali said, “I feel this is not mischief—it is AI-enabled sexual violence.” He noted that creating and circulating AI-generated morphed images can amount to sexual violence under existing law, particularly when children are involved, and added that the argument that such content is “just AI” would not hold up under legal scrutiny.
Despite reports that X limited access to some of Grok’s media features, the manipulated images continued to circulate. Women users in several countries responded by deleting photos or limiting their online presence, reflecting growing fear that any publicly shared image could be weaponized through the tool, according to CNBC-TV18.
This escalation marked a turning point in the backlash against Grok. What began in 2025 as criticism focused on celebrity deepfakes and explicit presets evolved into a broader debate about whether generative AI systems can be deployed safely when ordinary users, including minors, can be targeted with minimal effort and little oversight.
Concerns surrounding Grok intensified in September 2025 after reporting based on interviews with current and former workers at xAI suggested that sexually explicit material involving minors was appearing during internal moderation and review processes.
According to reporting by Business Insider, employees said they regularly encountered AI-generated content linked to child sexual abuse while working on Grok.
Several workers described the volume of disturbing material as overwhelming. Former employees said that they quit after repeated exposure to explicit images, videos, and audio files involving minors. One worker said the experience made them physically ill, while another described the scale of the material as shocking.
The reports also raised questions about xAI’s handling of mandatory child safety reporting. The National Center for Missing and Exploited Children (NCMEC) received more than 440,000 reports of AI-generated child sexual abuse material by mid-2025, according to figures cited by Futurism.
Despite that sharp increase across the industry, xAI did not submit any reports to the organization for 2024, Futurism reported, even as competitors acknowledged similar risks and made their own filings.
Child safety experts warned that allowing sexually explicit outputs without strict boundaries increases the risk of harmful content slipping through. Stanford University tech policy researcher Riana Pfefferkorn said that systems without clear prohibitions create larger grey areas that are harder to control, according to Futurism. NCMEC officials also stressed that platforms must take aggressive measures to ensure no material involving children can be generated or circulated.
The whistleblower accounts added weight to criticism that Grok’s problems extend beyond misuse by individual users. Instead, they pointed to deeper questions about moderation capacity, reporting obligations, and whether existing safeguards are sufficient when generative AI tools operate at a large scale with limited oversight.
Alongside the surge in nonconsensual sexual imagery, Grok has also drawn attention for how easily its image tools can be used for political manipulation.
In recent weeks, users on X have shared altered images of political figures created through simple text prompts, raising fresh concerns about misinformation and abuse.
Examples circulating on the platform include prompts instructing Grok to remove or alter figures such as US President Donald Trump and Israeli Prime Minister Benjamin Netanyahu, often framed around highly charged accusations.
In these cases, users uploaded existing photos and directed the tool to modify or erase individuals using politically loaded language. While some of the altered images were framed as satire by users, critics warned that the ease of manipulation blurs the line between commentary and disinformation.
Similar uses have appeared in Türkiye, where Grok has been applied to images of domestic political figures, including Mustafa Kemal Atatürk, the founder of the Republic of Türkiye, and President Recep Tayyip Erdoğan.
Turkish users have shared examples showing Grok responding to commands that alter appearance, clothing, or presence in images tied to political posts. The issue has widened debate beyond sexual abuse to include the risks of AI-driven visual manipulation in politically sensitive contexts.
Digital rights advocates argue that these practices expose gaps in content moderation rather than isolated misuse. The concern is not only that images can be manipulated, but that they can be produced and circulated rapidly, often without clear labels or context, making it difficult for viewers to distinguish altered visuals from authentic material.
The renewed backlash against Grok has once again drawn attention to the regulatory gaps surrounding generative AI tools, particularly when systems embedded in major social media platforms can generate and circulate manipulated images at scale with limited oversight or clear lines of responsibility.
In Türkiye, the controversy carries particular weight given that Grok was blocked in September 2025 amid concerns related to content safety and digital regulation. The latest complaints, which now encompass nonconsensual sexual imagery as well as politically motivated image manipulation, have reopened debate over digital privacy, consent, and the adequacy of platform-level safeguards.
Despite mounting scrutiny across multiple countries, neither X nor xAI has provided a detailed public explanation outlining how Grok’s image generation tools are being restricted, monitored, or technically altered in response to the recent surge in abuse, leaving regulators and users to rely largely on fragmented reporting and whistleblower accounts.
Grok remains active on the platform as of early January 2026, and no new regulatory action has been formally announced. The situation has intensified questions about how accountability can realistically be enforced when AI systems operate across borders, evolve rapidly, and continue to outpace the legal and institutional frameworks designed to govern them.