When justice fails: Why women can’t get protection from deepfake AI abuse

Whispers follow her offline. Online, the harassment explodes unchecked: comments, taunts, shares, screenshots. She never agreed to any of it. That didn't stop anyone.

Within minutes, thousands of people had viewed the image. Within hours, millions.

The nightmare has just begun.

Days passed before the platform responded. By then, the image had been viewed, saved, and replicated. She was left asking: Who do I report this to? Will anyone believe me? Will the person who did this face any consequences? Or will the blame fall on me?

This is a reality that thousands of women and girls experience every day. AI-generated fakes inflict real-world harm, and justice remains out of reach for most survivors.

The story could be yours.

Deepfake abuse is the sharp edge of a broader pattern of digital violence targeting women and girls, and it is becoming more gendered, not less. The systems designed to protect people have failed to keep pace, while the tools for inflicting harm grow cheaper, faster, and easier to use every day.

Here’s what you need to know:

What is deepfake abuse and how common is it?

Deepfakes are images, audio, or video manipulated by artificial intelligence (AI) that make someone appear to say or do something they never did.

The technology itself is not new, but its use as a weapon against women and girls is a rapidly growing phenomenon.

  • deepfake pornography accounts for 98 percent of all deepfake videos online, and 99 percent of those depict women, according to a 2023 report
  • the number of deepfake videos online grew an estimated 550 percent between 2019 and 2023
  • the tools to create them are widely available, usually free, and require little technical expertise
  • once posted, AI-generated content can be endlessly replicated, saved to personal devices, and shared across platforms, making it nearly impossible to remove completely

Why survivors don’t come forward and what happens if they do

Lack of reporting is one of the biggest barriers to accountability. For survivors who dare to come forward, the justice system is often another source of trauma.

  • Survivors are asked repeatedly to view and describe the abusive content to police, lawyers, and platform moderators, and often face questions like, "Are you sure it's not real?" or "Have you shared intimate photos before?"
  • If a case goes to court, it is the survivor's clothing, relationships, and past behavior that are scrutinized, not the perpetrator's actions
  • The danger extends offline: a UN Women survey found that 41 percent of women in public life who experienced digital violence also reported offline attacks or harassment related to it

Why deepfake creators are rarely prosecuted

Despite the devastating impact, prosecutions are rare, platforms often fail to take action, and survivors are often retraumatized when they try to seek help. Here’s why:

The law has not caught up: fewer than half of countries have laws addressing online abuse, and even fewer have laws that specifically cover AI-generated deepfake content.

  • most "revenge porn" or image-based harassment laws were written before deepfakes existed, leaving gaping loopholes
  • in many countries, deepfake pornography and AI-generated nude images fall into a legal gray area
  • survivors are often left unsure whether the abuse was even illegal, or whether the perpetrator can be prosecuted

Even where laws are in place, enforcement remains slow. Building a case requires digital forensics expertise, cross-border coordination, and platform cooperation, and most justice systems lack the resources for all three.

  • evidence disappears quickly as content spreads and copies multiply, while perpetrators hide behind anonymity or operate across jurisdictions
  • platforms are slow or unwilling to share data with law enforcement, especially in cases that cross national borders
  • digital forensics backlogs mean many cases stall before they even start

Technology platforms are rarely held accountable, having long hidden behind their "intermediary" status to avoid responsibility for user-generated content.

What should happen now

While there are a number of countries and regions taking action (see text box below), stopping deepfake abuse requires immediate and coordinated action from governments, institutions, and technology platforms.

Here are five things that need to happen:

1. Laws that actually cover deepfake abuse

Governments should pass laws that clearly define AI-generated abuse and center consent, with strict liability for perpetrators, fast-track takedown obligations for platforms, and cross-border enforcement protocols.

2. Justice systems that can investigate and prosecute

Law enforcement needs specialized training, resources, and the capacity to collect and preserve digital evidence. Digital forensics backlogs must be addressed, and international cooperation frameworks must be fast, functional, and fit for purpose.

3. Platforms that are held responsible

Tech companies should be legally required to proactively monitor for and remove abusive content within mandated time limits, cooperate with law enforcement, and face real financial consequences when they fail to act.

4. Real support for survivors

Trauma-informed police and legal professionals, along with free legal assistance, must be available to every survivor.

5. Education that prevents abuse

Digital literacy, including consent education, online safety, and what to do when experiencing harassment, must start early and reach everyone, because prevention is as important as prosecution.

UN Women warns that this is not an internet-specific problem: “This is a global crisis.”

  • in a recent high-profile case, British journalist Daisy Dixon discovered an AI-generated sexual image of herself on X in December 2025, created with the platform's Grok AI tool; it took days for the platform to geo-block the function, while the abuse continued to spread
  • deepfake abuse can act as an online catalyst for so-called "honor-based crimes" in certain cultural contexts, where perceived violations of respectability norms on digital platforms can lead to extreme physical violence against women, or even death
  • more than half of deepfake victims in the United States have considered suicide, according to recent research

Meanwhile, some jurisdictions are starting to act:

  • Brazil amended its criminal code in 2025, increasing penalties for psychological violence against women when the perpetrator uses AI or other technology to alter the victim's image or voice
  • The European Union's Artificial Intelligence (AI) Act imposes transparency obligations around deepfakes
  • The United Kingdom's Online Safety Act prohibits the sharing of explicit, digitally manipulated images, but does not address the creation of deepfakes and may not apply if intent to cause distress cannot be proven
  • The United States' Take It Down Act explicitly covers AI-generated intimate imagery and requires platforms to remove it within 48 hours
