The manosphere ‘wizards’ behind the deepfake epidemic sweeping social media

Manosphere forums are full of pictures of naked women uploaded by men, without their consent, often disturbingly edited (Picture: Getty Images/Westend61)

It’s a weekday night and I’m deep in the manosphere forums again, clicking through pictures of naked women uploaded without consent by the thousands of men who habitually visit these disturbing message boards.

I scroll with a grimace, my face scrunching in anticipation of the horrors that unfold with each thumb swipe: a man offering to trade his ex-partner’s intimate photo, another sharing a rape fantasy about his classmate and dozens of requests for explicit deepfake ‘porn’ of women the men know personally.

One asks: ‘Wizards, I can’t stand this bitch. I need to see her get degraded or used using AI, anything you can do?’

While other requests include:

‘Hopefully any wizard out there can make AI nudes of my ex’.

‘Let’s see my son’s teacher face f****d. Please wizards’.

And: ‘Oh mighty wizards, do whatever you want to her’.

I once even witnessed an elderly man request an explicit deepfake of his deceased wife, using a black-and-white image from their wedding day – taken before the internet ever existed.

For those not in the know, ‘wizard’ is the affectionate nickname given to creators of explicit deepfakes by their online community. And misogyny is their magic trick.

As a survivor of image-based abuse, my involvement with manosphere forums began out of self-preservation – a desperate attempt to track down my own intimate images as they were shared and traded online.

Jess has firsthand experience of being exploited by online ‘wizards’ (Picture: Rhiannon Holland Mefus Photography)

This Is Not Right

On November 25, 2024, Metro launched This Is Not Right, a campaign to address the relentless epidemic of violence against women.

With the help of our partners at Women's Aid, This Is Not Right aims to shine a light on the sheer scale of this national emergency.

You can find more articles here, and if you want to share your story with us, you can send us an email at vaw@metro.co.uk.


Nowadays, as the author of ‘No One Wants to See Your D*ck: A Handbook for Survival in a Digital World’, I investigate these spaces as part of my research and campaign work.

On these web pages, the creators of explicit deepfakes are worshipped as gods, kings and – you guessed it – wizards. They are genies who grant the wishes of the world’s worst misogynists, using their so-called magic (AI technology) to strip women of their bodily autonomy and rain artificial semen onto their faces.

There is a sense of camaraderie in these forums – men validating one another’s predatory behaviour, taking turns to fulfil criminal requests. Together, they create a wave of mass destruction that permanently alters women’s digital footprints and the way they get to exist in the world.

Explicit deepfakes, often referred to as deepfake ‘porn’, have surged in popularity in recent years. This is largely due to the proliferation of AI technology, which allows users to remove a woman’s clothes in an image and replace them with a computer-generated naked body within seconds.

Explicit deepfakes have surged in popularity in recent years (Picture: Getty Images)

This harm has been amplified over the last week, with X’s AI chatbot Grok churning out exploitative images that strip women of their clothing every minute – all prompted by real-life users.

However, AI technology being weaponised by the manosphere is nothing new.

Often referred to as nudification or ‘nudify’ tools, this technology is constantly evolving. Updates now allow users to select a victim’s clothing, pubic hair style and even age, before pasting the victim into explicit sex scenes without consent.

Once confined to niche websites and Telegram bots, we’ve now seen how this technology is easily available on X – one of the largest mainstream social media platforms in the world, with over half a billion users, some as young as 13.

It is estimated that up to 98% of all deepfake images and videos online are sexual, with 99% featuring women and girls. One nudification site I visited boasted over 100,000 daily users. Meanwhile, the number of victims Grok’s AI technology has created in the last week alone is a sombre thought.

Becca has also been a victim of explicit deepfake imagery (Picture: Rob Caddy)

Technology and science journalist Becca Caddy was targeted by an AI email scam that used a nudification tool on photos she’d previously posted online to try to blackmail her.

“I wanted to cry at seeing my likeness violated this way. I wanted to hide because, coupled with the photos, the threats felt way more threatening than they would from a usual scam email,” she told me.

In a calculated attempt to scare Becca into sending money, the perpetrator outlined the damage the images would cause to her personal life, career and mental health – exploiting the fear, stigma and isolation women face as victims of image-based abuse.

Becca later described the emphasis on her mental health as “one of the most disturbing parts of the email.” She chose to post about the threats publicly, taking back some control. Although police took a copy of the email, the case did not progress.

It is estimated that up to 98% of all deepfake images and videos online are sexual, with 99% featuring women and girls (Picture: Getty Images/Maskot)

Creators and fans of nudification tools, including Grok, routinely downplay the harm – insisting the images are “not real” and claiming “there is no victim”. But for the women whose likenesses are used to fulfil men’s fantasies, the violation feels brutally real.

Jodie*, a survivor-campaigner of image-based abuse, found out through an anonymous email that her photographs were being abused and manipulated online by a man she knew, with the help of other forum members.

These deepfakes made it look like Jodie was having sex with different men. The original pictures had been snaps of her on holiday with her mum, or out with her friends. 

When she opened the forum, she found posts encouraging other users to create even more. 

“He even offered personal details about my life to lure them in,” Jodie later wrote in an article for Metro.

Jodie's story

I was just 15 years old when I was raped. For months, I barely lived through the shame. 

When I eventually confided in my teacher, I was told: ‘But he’s such a nice boy. He would never do that’.

Only a few years later, I would face yet another horrifying form of betrayal. It began with anonymous online attacks, fake profiles, impersonations and the solicitation of sexual activities under my name, all using images from my social media profiles. 

The nightmare escalated. I received an anonymous email containing a link. There, on an alternative porn website, were doctored images and videos with my face on other people’s bodies: ‘deepfakes’ made to look like I was having sex with men I’ve never met or ever seen.

Who could have done this? Why?

To my horror and disbelief, the man behind this abuse transpired to be someone I knew.

I felt the deepest sense of betrayal imaginable. To him, my suffering was just a momentary thrill. A compulsion. Something to indulge in and discard. 

Read more here.

“I didn’t just lose control of my image; I lost control of how I saw myself. That’s the reality and lasting harm for survivors of this abuse. You start questioning whether you can trust your friends, whether people are looking at you differently, whether it’s even safe for your nearest and dearest to take photos of you.”

Jodie is one of many women in the UK whose bodily autonomy has been stolen using AI technology. And on New Year’s Day, I became one of them.

After I spoke out on X about the harm occurring on the platform – including users prompting Grok to generate images of women stripped, covered in ‘donut glaze’ to emulate semen or even soiled – my post gained traction. With it came the bottom feeders who thrive on women’s humiliation.

“@grok put her in a bikini made of cling film,” one user prompted, alongside a screenshot of my profile picture.

The chatbot complied.

People have been asking Elon Musk’s AI bot Grok to undress women – and the AI bot is complying (Picture: Getty / metro.co.uk)

Most victims of this abuse on X are women, reflecting the user base of nudification tools – many of which are programmed to only generate female genitalia. More alarmingly, Grok has also generated sexually suggestive images of minors.

In 2024, a survey by Internet Matters found 13% of teenagers had encountered a deepfake nude image. A 2025 report by the Children’s Commissioner found teenage girls were withdrawing from online – and offline – life out of fear of being targeted.

X is not the only platform implicated in this harm. Last year, Meta sued Joy Timeline HK Limited for advertising nudify bots on their platforms that promised users they could “see anyone naked”.

But while Meta moved swiftly under scrutiny, X has been far less proactive. A week after my own targeting, Grok continues to respond to prompts to remove women’s clothes and insert fake semen onto their bodies.

Jodie believes tech platforms and tools like Grok are not passive players in the digital harm inflicted on women, but are actively enabling this abuse.

X’s chatbot Grok now allows users to generate AI images in other users’ comment sections (Picture: Matteo Della Torre/NurPhoto via Getty Images)

“They have the power – and responsibility – to build safeguards, moderate properly and ban accounts that weaponise their tools. They simply don’t prioritise it, and vague regulation allows them to profit.”

In England and Wales, it is a crime to distribute an intimate image without consent, including AI-generated images. In June 2025, Parliament passed amendments criminalising the creation and solicitation of explicit deepfakes. However, despite receiving Royal Assent, the legislation’s enforcement remains stalled – leaving victims with limited routes to justice.

Further gaps in protection also persist. Images featuring fake semen through prompts like ‘donut glaze’ and ‘white mucus’ are often excluded from criminalisation due to ambiguity around what constitutes an ‘intimate’ image. In December 2025, Baroness Charlotte Owen proposed amendments to close these loopholes, introduce a 48-hour takedown rule and enforce blocking laws. The government rejected them all.

Although ministers have since pledged to work towards banning nudification technology, how – or when – remains unclear.

Amendments to close loopholes in legislation were suggested to the government in December 2025, and all were rejected (Picture: Getty Images)

The UK regulator Ofcom has said in a statement that it is investigating “serious concerns” about Grok being used to “produce undressed images of people and sexualised images of children”, and that it has made “urgent contact” with X and xAI.

Now, campaigners including Jodie, NotYourPorn and Professor Clare McGlynn are urging immediate action from the government, instead of more announcements on strategies and further consultations. In a statement on Instagram, NotYourPorn said: “Much of this work already exists and has been led by survivors, experts and frontline services for years,” adding: “Too often, that work has been resisted or rejected.”

While enforcing the new legislation will be a welcome step towards better protecting women and girls online, it’s important we do not lull ourselves into a false sense of security: we cannot simply legislate our way out of online misogyny.

On the very day the law passed, I saw dozens of new forum requests:

‘I’d love to get anything done to my sister-in-law’
‘Please deepfake my cute ass cousin’
‘I really want to see my friend naked’

Six months on, this harm has migrated from underground forums to mainstream platforms – unfolding in real time as their tech bro owners act with impunity and their users opportunistically exploit women.


We are running out of time to put the genie, and the wicked wizards of the web, back in the bottle. The government must listen to survivors and immediately implement the new legislation- and reconsider Baroness Owen’s amendments- if women are to have any hope of a safe, digital existence.

What was once hidden in the fringe corners of the internet is now overflowing into everyday platforms. From the ‘wizards’ of the manosphere to the richest man in the world, when women cannot share an image or an opinion online without risking AI abuse, it is our free speech that is under attack.

Who’s going to fight for it?

A statement on the X Safety account said: ‘We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.’
