
Is This The Year We Finally Address The Deepfake Crisis?

[Image: Taylor Swift doubled, in different colours layered over each other]


The deepfake nightmare is getting worse, but change may be Swift.

Last week, AI-generated explicit pictures of Taylor Swift went viral on X (formerly Twitter). The response was immediate. Fans tried to protect her by flooding the platform with real images. X blocked all searches for the singer and issued a statement about its “zero-tolerance policy towards such content”. Even the White House weighed in, declaring the spread of the images “alarming” and calling for the government to “take legislative action”.

Taylor Swift might be the most famous person this has happened to, but she’s far from alone. Around this time last year, Twitch streamer Atrioc (Brandon Ewing) accidentally switched tabs while streaming. The tab he didn’t mean to show his audience featured a deepfake porn video of two female streamers, Pokimane and Maya Higa.

Deepfakes (a combination of ‘deep learning’ and ‘fake’) are a relatively new AI technology used to put a person’s face on a body that isn’t theirs. The technology gained popularity for its ability to make videos of celebrities and politicians saying and doing things out of character: Jennifer Aniston hosting the world’s largest MacBook Pro giveaway, Barack Obama declaring Ben Carson was in “the sunken place”, or Ellie Goulding being pushed off a stool by WWE wrestler Bayley (the original push victim was Charlotte Flair).

The Atrioc controversy brought to light the prevalence of deepfake porn targeting women who stream. Having been targets themselves, streamers QTCinderella and Sweet Anita spoke out against the production and consumption of such pornography. Despite this ongoing conversation, not everyone seems to understand the basics of consent, and how deepfakes violate it. The producers of this content and the websites hosting it don’t. Atrioc certainly didn’t when he chose to pay for pornography portraying women who hadn’t given their consent. And neither did the people sharing the images of Swift.

The Problem With Deepfake Porn 

Like leaked nudes, deepfake porn lets the consumer view what they would never be given permission to see. The technology has already caused a great deal of concern because people cannot reliably tell what is real and what isn’t. It invades the privacy of deepfake victims. It sexualises and violates them.

Previously, the most heinous example of sexual crime online was revenge porn. Celebrity nudes were leaked routinely, most notoriously in the 2014 iCloud hack that losers online referred to as “the fappening”. There was also the image-based abuse website Is Anyone Up?, which was taken down in 2012. It took years for these violations to be treated seriously and to carry real consequences, because at the time there was no law against revenge porn (hacking, though, was illegal).

All these years later, the Atrioc case shows that one thing remains unfortunately clear: there are people who don’t (or refuse to) understand the basics of consent. On X/Twitter, which is a horrible place, a lot of male users attacked the streamers who spoke out by sending them photos of those streamers in “revealing” clothing. A tweet about Pokimane went viral when chronically online man “Bowblax” called her a hypocrite for calling out people who sexualise her, all because she dared to eat a banana on stream. Scandalous.

The community pointed out — though it shouldn’t have been necessary — that Pokimane turned the camera off to avoid having clips of her eating used out of context. “Bowblax” was forced to admit he hadn’t actually watched the video and that he had an impulsive posting problem. Still, it’s wild that in this day and age a woman can’t even eat without being sexualised against her will.

The Basics Of Consent

It’s even wilder that people old enough to have a Twitter account don’t understand the basics of consent. None of the women used in deepfake porn gave it. And even if they had, they could have revoked it at any point. Having consent in one situation doesn’t mean you have it in another.

A woman choosing to wear a low-cut shirt or a bikini doesn’t entitle millions of people to a fabricated, indistinguishable image of her body naked. Even if the pornography were real, filming a sexual act and having it posted online are two different scenarios that require separate consent.

The missing ingredient in the deepfake porn phenomenon (aside from morals and integrity) is agency. None of the women involved had any control over how their likenesses were used. Someone is profiting off naked images of women who never wanted to monetise their bodies in the first place.

My Blonde GF provides an excellent exploration of just how damaging deepfake porn can be. The short documentary tells the story of a woman named Helen who finds out her face has been transposed onto another’s body for porn and how that impacts her life. Helen explains that while she’s aware there’s nothing that was physically done to her, she can’t look at the unaltered pictures that were used without feeling like she’s looking at a picture of an assault. 

Helen shares in the documentary that she received a phone call from a police officer who told her there was nothing they could do about what had happened to her. Despite the violation she endured, the endless nightmares, fear and anxiety, the images couldn’t be treated as malicious communication or as revenge porn because they “weren’t real”. The impact on Helen, though, was very real.

What Happened To Atrioc?

Atrioc apologised on stream for purchasing and sharing the deepfake material. And while it was no ukulele apology, it had its flaws. He never directed his apology to any of the women who were violated and hurt by his decision, opting instead for a generic “to anyone I’ve hurt”. He also said he didn’t want to make excuses, then made them anyway, explaining that he’d only done this once, which is still one time too many.

Many pointed out that having his wife sitting next to him throughout made the apology come across as more of an “I’m not sexist, I married a woman” message. However, the apology appears to have been genuine: Atrioc has spent the months since the incident working with lawyers and using his financial power to get similar deepfake content removed.

The Current State Of The Deepfake Crisis

While the Atrioc story offers a happy ending of growth, we are, as Jordan Peele puts it in his Obama deepfake video, walking into a dystopia.

A large part of this crisis is how accessible AI and deepfake technology are. Arwa Mahdawi points out that you don’t need the dark web to find this material: creators advertise on Discord and customers can pay with their regular bank cards. A considerable number of victims have been minors. Deepfake porn of 30 high school girls in New Jersey was sent around their school, and in Spain 20 young girls were sent AI-generated nude photos of themselves. Aside from the impact on these girls’ lives, the technology is also making it easier to create and spread child sexual abuse material.

A similar case, where faked nude photos were part of a wider bullying campaign, resulted in a 14-year-old girl taking her own life. Social media was already contributing to a mental health crisis for teen girls — deepfakes make it even worse.

And we don’t appear to be taking it seriously. There’s a reality show on Netflix called Falso Amor that invites people to watch videos of their partners cheating. The couple that most accurately works out whether the footage is real or deepfaked wins 100,000 euros, and trust issues!

Celebrity deepfakes (Swift being the most recent example) may seem relatively innocent and harmless, but they show just how out of control things can get. In 2023, Tom Hanks issued a warning about an advert that used his likeness without permission. There was also the AI “Yearbook Trend” on social media, and the trend of putting celebrities in different scenarios, like imagining Rihanna or Ariana Grande as baristas. When Fifth Harmony member Lauren Jauregui tweeted a complaint about her image being used without permission, many responded with their own AI-generated images of the singer.

Even Jonghyun, a member of the K-pop group Shinee who passed away in 2017, has been used in these AI trends, much to the horror of Shawols (Shinee fans). His image has been used to create photos of him with a birthday cake featuring the number 33 (the age he would’ve been in 2023). There are also photos of him and the other four Shinee members and, ghoulishly, his voice has been used for AI covers of songs Shinee has released since his passing. 

I swear it used to take time for reality to catch up with Black Mirror; I didn’t expect ‘Joan Is Awful’ to become real almost as soon as I finished the episode.

How Can We Stop Deepfakes?

Governments have shown they can act on social media platforms quickly: America with its attempted ban and investigation of TikTok, and Australia when it came to sharing news on Facebook. Both responses were swift. We need governments to outlaw and restrict the use of deepfake and AI technology for pornography and misinformation. Too many years passed between the rise of revenge porn and legal action against it; we can’t afford such a slow response this time.

Another tangible action is for banks and credit card companies like Visa and Mastercard to ban the use of their services on these websites. We’ve seen that they have the power to do so when they briefly pulled out of OnlyFans (a platform sex workers use for their consensually and ethically made porn). It would be good to see them wield that power here.

Otherwise, the best thing we can do is learn how to spot deepfakes, and avoid sharing or purchasing them. In Australia, deepfake and revenge porn fall under the Online Safety Act 2021. The government’s eSafety deepfake page states: “eSafety investigates image-based abuse which means sharing, or threatening to share, an intimate photo or video of a person online without their consent. This includes intimate images that have been digitally altered like deepfakes.”

According to eSafety, ways to tell if a video or image is a deepfake include checking for glitches or pixelation around the eyes and mouth, and looking for unnatural or irregular movements, badly synced sound, or skin discolouration. AI’s difficulty with hands and feet can also provide clues, unless those body parts are cropped out. Take the photo of a French protester hugging a riot officer, supposedly from a rally: it looks real until you spot the six fingers on the officer’s hand.
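For the technically curious, here’s a minimal sketch of what an automated version of one of those checks could look like. It’s a crude heuristic, not a real detector, and nothing eSafety prescribes: it assumes the OpenCV library and a hypothetical image file, finds faces in the picture, and scores how sharp each one is, since deepfaked faces are sometimes unnaturally smooth compared with the rest of the frame.

    # A crude, illustrative artifact check, not a real deepfake detector.
    # Assumes OpenCV is installed (pip install opencv-python) and that
    # "suspect_frame.jpg" is a stand-in for whatever image you're checking.
    import cv2

    def face_sharpness(image_path):
        """Return a Laplacian-variance sharpness score per detected face."""
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # OpenCV ships a pretrained Haar cascade for frontal faces.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        scores = []
        for (x, y, w, h) in faces:
            face = gray[y:y + h, x:x + w]
            # Variance of the Laplacian is a standard blur measure: lower
            # values mean fewer sharp edges, i.e. a smoother face region.
            scores.append(cv2.Laplacian(face, cv2.CV_64F).var())
        return scores

    if __name__ == "__main__":
        for score in face_sharpness("suspect_frame.jpg"):
            print(f"face sharpness: {score:.1f}")

An unusually smooth face is only a hint, and modern deepfakes often pass checks like this one; the human tells eSafety lists, like hands, lip sync and lighting, remain the better starting point.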

Deepfakes are a problem we can no longer ignore. Without effective regulation and action, we, as Twitter user Lee Madgwick said, “are sleepwalking into a nightmare”.