
Elon Musk said on Wednesday that he is "not aware of any naked underage images generated by Grok," just hours before California Attorney General Rob Bonta opened an investigation into xAI's chatbot over the spread of nonconsensual sexually explicit content. Musk's denial comes amid mounting scrutiny from governments worldwide, including authorities in the UK, Europe, Malaysia, and Indonesia.

Users on X have been asking Grok to transform photos of real women, and in some cases children, into sexualized images without consent. According to Copyleaks, an AI detection and content governance service, such images were at one point being posted to X at a rate of one per minute; a sample collected from January 5 to January 6 revealed 6,700 instances of such content.

"This material has been used to harass people across the internet," Bonta said, urging xAI to take immediate steps to prevent further harm. The investigation will examine whether xAI has broken the law, as several statutes protect against nonconsensual imagery and child sexual abuse material (CSAM). Notably, the Take It Down Act, enacted in 2025, criminalizes the distribution of nonconsensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours.

Grok has been fulfilling user requests for sexualized images since late last year. The trend gained traction after some adult content creators prompted Grok to produce explicit imagery as a marketing strategy, and other users followed suit. High-profile cases have emerged, including one involving "Stranger Things" actress Millie Bobby Brown, in which Grok altered real photos of women in overtly sexual ways.

In response to the backlash, xAI has reportedly implemented some safeguards.
Grok now requires a premium subscription for certain image requests, though even then the generated images may not be explicit. April Kozen, VP of marketing at Copyleaks, noted that Grok's responses appear more generic or toned down for regular users while remaining more permissive for adult content creators. "These behaviors suggest that X is exploring various strategies to mitigate problematic image generation, though inconsistencies remain," Kozen said.

Despite the growing concerns, neither xAI nor Musk has directly addressed the core issue. Shortly after the problematic images began surfacing, Musk appeared to trivialize the situation by asking Grok to generate an image of himself in a bikini. On January 3, X's safety account said the company acts against illegal content, including CSAM, but did not address Grok's lack of safeguards against manipulated sexual imagery. Musk's statement that he is "not aware of any naked underage images generated by Grok" likewise does not rule out the existence of other sexualized edits. Legal expert Michael Goodyear observed that Musk's narrow focus on CSAM is telling, given the more severe penalties it carries, and said the remark seems to downplay the broader problem of nonconsensual sexualized imagery.

As the investigation unfolds, the California AG is not the only authority scrutinizing xAI. Indonesia and Malaysia have temporarily blocked access to Grok, while India has demanded immediate technical changes. The European Commission has ordered xAI to retain all documents related to Grok, possibly paving the way for further investigations, and the UK's Ofcom has opened a formal inquiry under the Online Safety Act.

Grok has previously drawn criticism for its handling of explicit material. Bonta highlighted that Grok offers a "spicy mode" intended for generating explicit content.
An update in October made it easier for users to bypass existing safety guidelines, leading to the creation of hardcore pornography and graphic sexual images. As the landscape of AI-generated media evolves, experts like Copyleaks co-founder Alon Yamin stress the urgent need for detection and governance to prevent misuse. "When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal," Yamin stated, underscoring the ethical implications of AI advancements in image generation.