LONDON — The UK government has proposed stringent new rules requiring technology companies to remove non-consensual intimate images from their platforms within 48 hours of being reported, or face severe penalties including fines equivalent to 10% of their global revenue or outright bans from operating in the country.
Prime Minister Sir Keir Starmer announced the measures on Wednesday, emphasizing a zero-tolerance approach to online abuse. "We are putting tech companies on notice to take down non-consensual intimate images, and we will leave no stone unturned to protect women and girls," Starmer said in a statement from Downing Street. The proposal comes as an amendment to the ongoing Crime and Policing Bill currently under parliamentary review.
Under the new rules, platforms such as social media sites and search engines would be legally obligated to act swiftly on reports of abusive content, including deepfakes and other non-consensual explicit images. Failure to comply could result in multimillion- or even billion-pound fines for major firms, depending on their size. For context, companies like Meta or Google could see penalties running into the billions, given annual revenues well over a hundred billion pounds each.
Technology Secretary Liz Kendall underscored the urgency of the changes, stating, "The days of tech firms having a free pass are over. No woman should have to chase platform after platform, waiting days for an image to come down." Kendall highlighted that the government views intimate image abuse on par with terrorist content or child sexual abuse material, urging companies to prioritize it accordingly.
The initiative builds on recent legislative efforts to curb online harms. Earlier this month, creating non-consensual intimate images, including sexually explicit deepfakes, was criminalized under UK law. This followed widespread outrage in January over X's AI chatbot, Grok, which generated images undressing people without consent. X subsequently halted the feature after public backlash.
Media regulator Ofcom is also exploring ways to classify non-consensual intimate images similarly to child sexual abuse material. This would involve creating digital fingerprints, or "hashes", of confirmed abusive images, enabling automatic detection and removal whenever the same content is re-uploaded or shared across platforms. Officials say this technical approach could significantly reduce the persistence of abusive material online.
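The detection-on-re-upload mechanism described above can be illustrated with a minimal sketch. This is not Ofcom's or any platform's actual system; it uses an exact byte-level hash for simplicity, whereas production systems (such as those used for child sexual abuse material) rely on perceptual hashes that survive re-encoding, resizing, and cropping. All names here are hypothetical.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an image's bytes (illustrative only;
    real systems use perceptual hashing robust to minor edits)."""
    return hashlib.sha256(data).hexdigest()


class HashBlocklist:
    """Registry of fingerprints of images confirmed as abusive."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def register(self, data: bytes) -> None:
        # Called once a reported image has been reviewed and confirmed.
        self._known.add(fingerprint(data))

    def is_blocked(self, data: bytes) -> bool:
        # Checked against every new upload before it is published.
        return fingerprint(data) in self._known


# Once an image is reported and its fingerprint registered, identical
# re-uploads are caught automatically without a fresh report.
blocklist = HashBlocklist()
reported_image = b"confirmed abusive image bytes"
blocklist.register(reported_image)
print(blocklist.is_blocked(reported_image))   # re-upload of the same bytes
print(blocklist.is_blocked(b"other upload"))  # unrelated content passes
```

The key design point, and the reason regulators compare this to the existing child sexual abuse material regime, is that the moderation cost is paid once per image rather than once per upload.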
Additionally, the government plans to issue guidance for internet companies on blocking "rogue websites" that host this content and evade the Online Safety Act's reach. These sites, often operated from outside the UK's jurisdiction, have been a persistent challenge in content moderation efforts.
Opposition figures have criticized the timing and approach of the Labour government's proposal. Shadow Technology Secretary Julia Lopez, a Conservative, accused the administration of acting only under political pressure. "Once again, the government is playing catch-up to duck a major backbench rebellion," Lopez said. She noted that a similar amendment was proposed earlier by Conservative peer Baroness Charlotte Owen, but Labour had not acted on it at the time. "The reality is that, for all the prime minister's tough rhetoric, he has arrived late to this issue. He does not know what to believe—he only knows what to do to try and survive another week."
Lopez's comments reflect ongoing political tensions around tech regulation. The Conservatives had previously championed the Online Safety Act in 2023, which imposed duties on platforms to remove illegal content but stopped short of specific timelines for intimate image abuse. Labour, now in power since July 2024, has vowed to strengthen these protections amid rising concerns over AI-generated harms.
The announcement aligns with a broader crackdown on social media announced by Starmer earlier this week. This includes closing a legal loophole that previously allowed "vile illegal content created by AI" to proliferate unchecked. Downing Street has also initiated a public consultation on potential restrictions, modeled after Australia's under-16 social media ban, with provisions to implement such measures rapidly if recommended.
Internationally, the UK moves echo growing global scrutiny of tech giants. On Tuesday, Ireland's Data Protection Commission announced an investigation into X under EU privacy laws, focusing on the non-consensual deepfakes produced by Grok. The probe could lead to fines up to 4% of X's global turnover, compounding pressures on the platform owned by Elon Musk.
Experts and advocates have welcomed the 48-hour deadline as a practical step forward, though some question enforcement challenges. The National Society for the Prevention of Cruelty to Children (NSPCC) has long called for faster removals, citing cases where victims endured prolonged exposure to abusive images. "This is a vital measure to empower survivors and hold platforms accountable," said a spokesperson for the organization, though they added that education and prevention remain key to long-term solutions.
Background on the issue reveals a surge in reported incidents. According to Ofcom data from 2023, over 1,000 cases of image-based abuse were flagged monthly on major platforms, with AI tools exacerbating the problem by enabling easy creation of realistic fakes. Victims, often women and girls, report severe psychological trauma, including anxiety and social withdrawal.
The government's actions follow high-profile incidents beyond Grok. In recent months, apps and websites offering "nudification" services—AI tools that superimpose nudity onto images—have proliferated, prompting urgent interventions. Starmer referenced these in his statement, noting that the administration had already taken "urgent action against chatbots and 'nudification' tools."
Looking ahead, the amendment could complete its passage through Parliament in the coming months, pending debates and votes. If enacted, it would integrate into the UK's evolving digital safety framework, potentially setting a precedent for other nations grappling with AI ethics. For tech firms, the stakes are high: compliance could reshape content moderation practices, while non-compliance risks market exclusion in one of Europe's largest digital economies.
As the UK pushes forward, the balance between innovation and safety remains contentious. Platforms argue that rapid takedowns require advanced AI detection, which they are investing in, but critics like Lopez contend that government intervention must be proactive, not reactive. For now, the message from Westminster is clear: tech companies must act decisively, or face the consequences.
