The artificial intelligence firm xAI, run by Elon Musk, is facing heavy criticism after its chatbot Grok was widely used to create sexualized images of women, many of them real people, through a process users call "digital undressing." More alarming still, the AI was reported to have produced sexualized images of minors, raising grave legal, ethical, and safety concerns in multiple countries.
Grok, built directly into Musk's social media platform X, has allowed users to post or tag pictures of individuals and have the chatbot strip away their clothing or place them in suggestive poses. Although X permits sexually explicit material, non-consensual sexualized content, especially content depicting children, has drawn scrutiny from regulators, researchers, and digital safety advocates.
According to data collected by Bloomberg and reviewed by independent researchers, the problem spiraled out of control in late December. Copyleaks, a company that builds AI content detection tools, found that while some adult content creators initially used Grok to generate promotional images of themselves, the trend quickly spread to target women who had not given their consent. In one study, the European nonprofit AI Forensics examined more than 20,000 images generated by Grok alongside 50,000 user prompts and found that over half of the images depicted people in minimal attire, 81 percent of whom were female. More worryingly, roughly 2 percent appeared to depict people under the age of 18.
In some cases, users prompted Grok to depict minors in erotic poses or covered in sexual fluids, and the chatbot apparently complied. Such results point to serious failures in the AI's safety guardrails, especially given that xAI's acceptable use policy explicitly prohibits pornographic depictions of real people as well as any sexualization of children.
Musk and xAI say they have taken action. X says it blocks illegal content, permanently suspends accounts that violate its rules, and cooperates with law enforcement. Grok itself acknowledged the lapses in its safeguards in early January, stating that child sexual abuse material (CSAM) is unlawful and urging users to report violations to authorities such as the FBI and the National Center for Missing & Exploited Children.
Critics, however, argue that these responses came too late and did little to stem the flood of sexualized images. Musk has long pushed back internally against what he calls excessive censorship and has encouraged Grok to be more permissive through its "spicy mode." Sources familiar with xAI told CNN that Musk resisted stricter content policies and grew frustrated with limitations that constrained Grok's image-generation capabilities.
At the same time, xAI's already small safety team reportedly lost several key members in the weeks before the scandal, raising further doubts about the company's oversight. Questions have also emerged over whether xAI still relies on external CSAM detection tools or instead uses Grok itself to police its own output, an approach some experts consider risky.
Regulatory pressure is mounting worldwide. Police in Europe, India, and Malaysia have opened investigations. Ofcom, Britain's media regulator, said it had urgently contacted Musk's companies over what it described as a matter of serious concern, and the European Commission called the content "illegal, appalling, and disgusting." India's Ministry of Electronics and Information Technology has directed X to conduct a direct audit of Grok's governance and safeguards.
Legal experts say xAI could face serious consequences. Although Section 230 of US law shields companies from liability for content their users post, it does not protect them from prosecution for federal crimes, including CSAM offenses. Victims whose likenesses were used can also file civil lawsuits. The recently signed Take It Down Act further criminalizes the distribution of non-consensual explicit images, whether real or AI-generated, and requires platforms to remove them promptly.
As the scandal deepens, critics argue that the Grok episode illustrates the dangers of deploying powerful AI systems without robust safeguards, particularly at the scale of a major social media platform. For xAI, the fallout could extend well beyond reputational harm, potentially reshaping how regulators around the world approach AI accountability.