As underage deepfakes of Jenna Ortega appear online, why isn't social media taking deepfake AI seriously?

A threat to women's safety.

The rapid rise of AI technology is scaring women — and for good reason. New deepfake AI apps make it possible for anyone to create realistic (usually pornographic) images of anyone they have a picture of. And now, certain social media companies have been allowing apps that facilitate the creation of deepfakes to advertise on their platforms.

NBC reports that Meta recently ran ads for Perky AI, an app that allows users to “enter a prompt to make [real people] look and be dressed as you wish” for $7.99 a month. Eleven of the ads reportedly featured a manipulated image of actress Jenna Ortega taken when she was only 16 years old. The ads demonstrated, using Ortega's photo, how her clothing could be changed using AI — their examples included the prompts “Latex costume,” “Batman underwear” and “No clothes.”

The existence of an app like Perky AI is disturbing enough. It is horrifying to imagine a future in which women lose control over their own image — in which women have to worry that people could be creating and watching pornography of them without their knowledge or consent. It's easy to see why 91% of GLAMOUR readers think deepfakes are a danger to women, which is why GLAMOUR's Consent Campaign is fighting for them to be taken seriously by the UK government.

This latest story is a reminder that deepfakes risk becoming normalised, appearing frequently and without regulation on social media. And this isn't the first time it has happened. Earlier this year, pornographic deepfakes of Taylor Swift circulated on social media. “This content violates our policies, and we’re removing it from our platforms and taking action against accounts that posted it," said a spokesperson from Meta at the time. "We’re continuing to monitor, and if we identify any additional violating content, we’ll remove it and take appropriate action.”

Similarly, Meta only addressed the Perky AI ads after they had been reported. “Meta strictly prohibits child nudity, content that sexualises children, and services offering AI-generated non-consensual nude images,” Ryan Daniels, a Meta spokesperson, said in a statement.

Meta suspended Perky AI's page and removed its ads. After NBC reached out to Apple about the app, it was removed from the App Store as well — however, people who had previously downloaded the app can still use it.

Companies like Meta and Apple have policies restricting the use of AI technology to produce deepfakes, and as of 2024, the Online Safety Act has been updated to make the sharing of AI-generated intimate images without consent illegal in the UK. However, this story is a sobering reminder that not nearly enough is being done to prevent the rise of apps like Perky AI and the spread of deepfakes online. Social media companies need to work harder to ensure that deepfake AI apps never appear on their platforms in the first place — and perpetrators need to be prosecuted to end this online abuse once and for all.