UK Prime Minister Keir Starmer has announced strict new rules requiring social media companies to remove non-consensual intimate images within 48 hours of an alert. The move specifically targets the misuse of AI tools such as Grok, which have been used to generate abusive sexualized images, and forms part of a broader government effort to tackle violence against women and girls in the digital space.
Under the new regulations, enforced by Ofcom, victims can flag content directly to tech firms, triggering a cross-platform alert. The system is designed to spare victims the burden of repeatedly reporting the same image to each platform. Authorities are also exploring digital watermarking technologies that could automatically detect and block so-called "revenge porn" if it is reposted on other sites.
The crackdown follows international backlash against Elon Musk's AI chatbot, Grok, over its role in facilitating the creation of sexualized deepfakes. The Online Safety Act now makes it illegal to create or share non-consensual sexual deepfakes, and internet providers will receive guidance on blocking rogue sites that host such explicit AI-generated content.