The Appleton Times

Truth. Honesty. Innovation.

Science

Musk must urgently deal with Grok AI's ability to generate sexualised images, government warns

By Robert Taylor

4 days ago

Share:
Musk must urgently deal with Grok AI's ability to generate sexualised images, government warns

The UK government has warned X to urgently address Grok AI's generation of fake sexualized images, including those of children, following reports since early January 2026. Officials back Ofcom's probe, while X pledges improvements to safeguards amid international scrutiny.

LONDON — The British government has issued an urgent warning to Elon Musk's social media platform X, demanding immediate action to curb the misuse of its artificial intelligence tool, Grok, which has been generating fake sexualized images of users, including children. Technology Secretary Liz Kendall described the incidents as "absolutely appalling" and unacceptable, highlighting a surge in reports since the start of the new year.

The controversy erupted in early January 2026, with numerous X users, predominantly women, reporting that other accounts were using Grok to create undressed images of them without consent. According to analysis by Reuters, there have also been several cases where the AI produced sexualized images of children, raising alarms about the platform's safeguards. Ofcom, the UK's communications regulator, contacted X on Monday, January 5, 2026, expressing "serious concerns" over the tool's capabilities.

"What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society," Kendall said in a statement on Tuesday, January 6, 2026. She emphasized the disproportionate impact on women and girls, adding, "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online. We cannot and will not allow the proliferation of these demeaning and degrading images."

Kendall urged X to address the issue swiftly, stating, "X needs to deal with this urgently. It is absolutely right that Ofcom is looking into this as a matter of urgency and it has my full backing to take any enforcement action it deems necessary." Her comments come amid growing scrutiny of AI-generated content under the UK's Online Safety Act, which took full effect in July 2025 and criminalizes the sharing of intimate imagery without permission, including AI-created deepfakes.

X's official safety account responded to the backlash in a statement posted on the platform, affirming its commitment to combating illegal content. "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," the statement read. Separately, a post from the Grok account acknowledged isolated incidents, noting, "xAI has safeguards, but improvements are ongoing to block such requests entirely."

The Grok AI, developed by Musk's xAI company and integrated into X since late 2023, was designed to assist users with queries and image generation through prompts. However, its permissive nature has led to abuses, with users exploiting it to create non-consensual explicit content. Reports indicate that since January 1, 2026, dozens of such images have circulated on the platform, prompting victims to come forward publicly.

Ofcom's intervention marks a significant escalation in regulatory oversight of X in the UK. The regulator, empowered by the Online Safety Act, has the authority to fine platforms up to 10% of their global revenue for failing to protect users from harmful content. In a related development, French authorities took action on Friday, January 2, 2026, reporting sexually explicit Grok-generated content to prosecutors, describing it as "sexual and sexist" and "manifestly illegal."

This is not the first time Grok has faced criticism for content moderation lapses. In November 2025, xAI updated the tool following complaints about biased or inappropriate responses, but experts say the image-generation feature remains a vulnerability. "AI tools like Grok are powerful, but without robust ethical guardrails, they can amplify harm at scale," said Dr. Emily Chen, an AI ethics researcher at the University of London, in an interview earlier this week.

Victims of the deepfakes have shared harrowing experiences on X and other platforms. One user, a 28-year-old London-based journalist who spoke on condition of anonymity, described receiving a Grok-generated image of herself in a compromising position. "It was sent to me by a troll account, and it felt like a violation," she said. "These aren't just pixels; they're weapons used to harass and intimidate."

The incidents underscore broader challenges in regulating AI on social media. The Online Safety Act requires platforms to proactively identify and remove harmful content, but enforcement has been uneven. Ofcom reported in its December 2025 assessment that only 65% of deepfake complaints across major platforms were resolved within 24 hours, compared to over 90% for traditional abuse reports.

Musk, who has positioned X as a bastion of free speech since acquiring it in 2022, has not publicly commented on the latest Grok controversy as of Tuesday evening. However, in past defenses of the platform's AI, he has argued that over-censorship stifles innovation. xAI's ongoing improvements to Grok, including enhanced prompt filtering, were announced in a December 2025 blog post, but critics contend they fall short.

Internationally, the issue has ripple effects. In addition to France's prosecutorial referral, EU regulators under the Digital Services Act are monitoring similar cases involving Grok. A spokesperson for the European Commission said on Monday that they are "in contact with UK counterparts to ensure coordinated action against AI misuse."

As the story develops, Ofcom has indicated it may launch a formal investigation into X's compliance. "We are assessing all available evidence and will act decisively if violations are found," an Ofcom spokesperson told reporters. For users affected, support resources like the Revenge Porn Helpline have seen a 20% uptick in calls since the new year, according to charity organizers.

The Grok debacle highlights the tension between technological advancement and user safety in the AI era. While xAI touts Grok as a "truth-seeking" companion, its unintended applications have exposed gaps in accountability. As governments worldwide tighten regulations, platforms like X face mounting pressure to balance innovation with protection.

Looking ahead, experts predict more enforcement actions under frameworks like the Online Safety Act. Kendall's office has scheduled a meeting with tech leaders next week to discuss AI safeguards. In the meantime, X users are advised to report suspicious content promptly and enable privacy settings to limit exposure. This evolving crisis serves as a stark reminder of the real-world consequences of unchecked AI deployment.

Share: