How pornography broke our online reality.

“I’m going to show you some magic.” It’s Tom Cruise in a TikTok video. He’s talking to the camera, holding up a vintage coin while flashing his pearly whites. “It’s the real thing,” he reassures the viewer, before making the coin disappear in the palm of his hand. “It’s all the real thing,” he says, motioning at his face. Except it isn’t. Or at least, not according to the man behind the account, Miles Fisher. Even though the account’s name (@DeepTomCruise) clearly alludes to the use of deepfake technology, the viral videos look so real that many viewers, despite being told the contrary, still refuse to believe the person in them is anyone other than Tom Cruise himself. One user commented assuredly “IT HAS TO BE THE SAME PERSON”, while plenty of others said they simply didn’t know what to believe.

Such content is only the latest exhibit of the new heights being reached by deepfakes. The TikToks’ creator, professional visual effects artist Chris Ume, is about as new to the technology as the technology is to popular culture. Despite only first hearing about it in late 2018, he already finds himself at the frontier of its possibilities, chief sorcerer of some very spooky interferences with our audio-visual grounding in reality. Ume created the convincing videos by marrying footage of Fisher (a Tom Cruise impersonator) with artificial intelligence (AI) deep learning techniques – the “deep” in deepfakes – before using his skills in video manipulation to perfect the finished result. That such a disturbing production would show up on TikTok should really come as no surprise. Ume believes the sort of digital wizardry behind his impressive forgeries could soon become widely accessible, perhaps as some kind of Snapchat filter, within a matter of years.

The uncanny valley (the point in aesthetics where something approaches, but doesn’t quite reach, resemblance to a real human being) certainly appears creepy to most people, but if there’s anything @DeepTomCruise proves, it’s that surpassing it and becoming indistinguishable from the real thing is hardly more comforting, and carries some pretty dark implications for our society. Since their inception deepfakes have been mired in controversy, used as a means to the rather horrific end of violating women – invading their privacy and pillaging their identities. They are named after an anonymous Reddit user of the same name, who back in 2017 pioneered their use in fabricating porn videos that stitched the faces of famous celebrities on to the bodies of other women, before posting the results on the site.

Screens from @DeepTomCruise / TikTok

At the time these were the first demonstrations of an amateur weaponising open-source AI to create convincing footage, and the fact that women were chosen as the targets of this weaponry should not be ignored. Now, just three and a half years later, the internet is riddled with deepfake pornography. And whereas harnessing AI for these purposes once required a bit of technical know-how and programming expertise, there is now a proliferation of apps and easy user interfaces that make it a piece of cake. This has led to horrors such as deepfake revenge porn and a flood of celebrity “sex tapes”, all completely fabricated, depicting acts that never took place. Deepfakes used for these pornographic purposes are acts of digital violence against women, prostituting out their identities without their consent. And as more people grow anxious about what this technology might mean for the future of our politics, our media and our society, it’s important to first understand it as a Frankenstein’s monster born of age-old misogyny that seeks to force women to bend to men’s will.

It’s not hard to get carried away imagining the kinds of scary scenarios that deepfakes could make possible now that they have become so much easier to produce. However, small consolation can be found in the fact that we have been here before, just with a different medium: images. Image doctoring is so normalised and accepted now that it barely registers as a problem for most internet users. The obvious example is photoshopping, but this sits at the far end of a spectrum that reaches down through selfie-tweaking software such as FaceTune to trivial alterations like Instagram filters and basic saturation tools. The end result is that photos have lost their authority as a record of something that has happened, and people are more distrustful of a static image online. At some point we’ve all seen a fake screenshot of a tweet by a celebrity or politician, and while most people can clock that they’re spoofs, I have to wonder whether they would be met with the same scepticism if there were a convincing deepfake of the person speaking the same words.

Precisely because image alteration and manipulation have become normalised, I feel this has only shored up people’s trust in the verifiable nature of videos online. Yes – videos can be taken out of context, spliced and edited in all manner of ways. But you couldn’t make someone say or do something they didn’t say or do. That is, until now. And if a picture is worth a thousand words, how much is a video worth? Striking war photography by brave journalists in Vietnam managed to stir up mass opposition to the war back home, while last summer a single video of police brutality ignited a global resurgence of the Black Lives Matter movement. What photojournalism achieved last century, ordinary internet users can manage tenfold with a short video in this one – all filmed on a smartphone. And now, with those same smartphones, we can swiftly bend footage to our will with the power of AI. For instance, here’s a quick GIF I rustled up of my face implanted on to Joe Exotic’s:

At a time when most of what this tech is used for is non-consensual pornography and crude mashups of pop culture clips and people’s facial features, all this panic over the shattering of reality can sound heavily conspiratorial and just a bit tin-foil-hat-batshit. Yet these harmless examples are really the seeds of a much larger shake-up in how we consume and process media online, and you don’t need a crystal ball to get a visual of the sort of changes that lie ahead.

In 2019 a fake video of American politician Nancy Pelosi circulated, its speed distorted and its footage chopped up to give the impression that she was stammering her words and sounding like she’d had one too many wines. The video did the rounds in conservative corners of social media, helped along by retweets from figures such as Trump and his lawyer Rudy Giuliani. The footage turned out to be what’s termed a shallowfake, meaning it was manipulated by a human without the help of AI. But this particular chunk of fake news should ring alarm bells for anybody concerned about the sort of deepfake bile that will soon start swilling around the net, for three main reasons.

The first is that, whether knowingly distributed on the part of politicians like Trump or unknowingly on the part of his followers, it proves very clearly that our trust in a video being an authentic snapshot of reality is deeply misplaced, and that this leaves an opening for deepfakes to spread rapidly. Secondly, the video was a human disfiguration of something that already existed, and it still prompted many to question Pelosi’s drinking habits and the state of her health. So what if she were made to say something outrageous she never said? Or depicted falling off her chair? It’s easy to see the mischief that could run riot once AI enters the fray, and no one could know for certain what was true. And lastly, Facebook flat out refused to remove the video despite it being proven fake, suggesting that it prioritises engagement and traffic ahead of things such as truth. None of this bodes well for a future where deepfakes become ever easier to produce and ever more believable.

Photo by Edward Webb / Wikimedia Commons (CC-BY-SA 3.0)

The arrival of deepfakes really could not have come at a worse time. It’s often said that we now live in a post-truth age, as social media has totally upended how we consume news. Trust in mainstream journalism is evaporating by the day; filter bubbles and content algorithms are remoulding our understanding of the world around us; bot accounts from overseas have poisoned the well of informed debate online; and all the while fake news and conspiracy theories find a captive audience in people who cannot make sense of the confusion. These processes, separate as they are, play off each other and together form an overwhelming bundle of knots that will take considerable time to untangle. To chuck deepfakes into the bubbling cauldron of this acidic online environment is only to stir the pot further, yanking us deeper into a media hellscape where we can’t be sure of what is real and what is made up.

This brings me on to the even scarier consequence of deepfakes, a phenomenon coined the “Liar’s Dividend” by scholars concerned about where they may take us. Despite sounding like the title of a new Pirates of the Caribbean film, it describes what might happen if deepfakes successfully eat away at our trust in what we see online. The active threat should be clear by now: the full range of invented scenarios that can be magicked into being by this technology. But the passive threat is just as pernicious. If video can easily be faked and people’s trust that seeing is believing begins to crumble away, then very soon video evidence that, say, shows a Prime Minister hiding in a fridge to escape an interview, or a royal family member in the company of a known sex trafficker, could just as easily be dismissed by those involved as a forgery. The sort of people who would thrive in such a media ecosystem are those able and willing to play fast and loose with the truth – to manipulate reality to their own ends. This is the Liar’s Dividend.

As you can imagine, this technology has caught the attention of governments and cybersecurity analysts worldwide, who are primarily worried about the threat deepfakes could pose to national security and the role they could play in cybercrime. The Pentagon has opened up its wallet to fund its research arm, DARPA, in developing AI-powered tools that could help detect deepfakes online, but rather concerningly a game of cat and mouse is emerging as deepfake developers find clever ways around detection through progressively more realistic generation techniques. For instance, researchers discovered early on that AI-generated faces did not blink, and this was seen as a silver bullet for detection software to identify the frauds – but it didn’t take long before new videos started to surface with blinking, and the balance tipped once again.
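The blinking cue works because an open eye is tall relative to its width, while a closed one is nearly flat. As a rough illustration (not the code of any actual detection tool), here’s a sketch of the “eye aspect ratio” heuristic that early blink-based detectors leaned on; the six landmark points around each eye would normally come from a face-landmark library, which is assumed here rather than shown:

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Eye height relative to eye width; drops towards 0 as the eye closes.

    p1 and p4 are the eye corners; p2, p3 (upper lid) and p6, p5
    (lower lid) are the remaining landmarks, each an (x, y) tuple.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count a blink whenever the ratio dips below the threshold
    for at least min_frames consecutive video frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A clip of a real person should register a blink every few seconds, whereas the early AI-generated faces produced a flat, always-open trace – exactly the tell that newer deepfakes learned to fake away.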

This heated arms race of producers versus detectors is what is driving these videos to greater and greater realism. @DeepTomCruise’s videos were run past commercial detection tools and none of them managed to flag them as fakes. Considering that it’s been less than four years since the arrival of this technology, it’s safe to say we’re stepping into a brave new world where nothing is what it seems. In 2019 a band of e-criminals scammed an unwitting employee out of roughly a quarter of a million in company funds using synthetic audio modelled on the voice of his boss, in what is thought to have been the first time AI was involved in a heist. There is now a company that will animate old pictures of dead relatives for a small fee, and recently the late Frank Sinatra’s iconic baritone was resurrected by an algorithm trained on his lyrics and sound, and made to sing Toxic by Britney Spears. If you’re paying attention to all these developments, you can’t escape the feeling of dread as we hurtle towards the great digital unknowns spilling out before us.

I’d love to believe that these developments eroding our faith in video evidence will cause us to recoil from our online info bubbles and seek out quality journalism and verifiable, offline sources, but recent history suggests this won’t be the case. See, there’s a fatal psychological flaw in how we all process information: we actively seek out information that confirms our beliefs and reject that which doesn’t. Social media companies know this, and getting users hooked on the scroll means adopting an If-You-Liked-That-You’ll-Love-This strategy when curating our feeds. The stuff we agree with so often gets a free pass through our doubt filter, which means fake news can easily spread between members of friend groups – a problem on both the right and the left. There’s a tendency to see fake news as something that other people fall for, but research has shown that we vastly underestimate our own susceptibility – something known as the third-person effect.

It’s this sort of complacency that deepfakes will exploit. The pandemic has been a case in point for the harm misinformation can inflict when it goes unchecked online. There seems to be a curious consensus that it’s somehow your racist auntie on Facebook and other oldies like her who are most likely to answer the call of a conspiracy web involving Bill Gates, 5G telecommunications and the Chinese Communist Party, but the facts don’t bear this out. It’s primarily young people who are vaccine hesitant, and there is a direct link between believing in misinformation and relying on social media as a primary source of news (no surprises there). There’s some truth to the idea that it’s the Karens who lack the critical thinking skills needed to avoid falling prey to fake news, but it’s evidently a problem tied to social media use, which is overwhelmingly what we young people rely on to make sense of the world around us.

Our news consumption habits aren’t going anywhere, and now that the Pandora’s box of deepfakes has been opened there is no way to close it; they are here to stay, and they will push the boundaries of realism until @DeepTomCruise looks to us the way the Star Wars prequels do today. It’s terrifying to think of the power a well-timed fake could wield: tipping an election, pouring petrol onto the flames of an ethnic conflict, or extorting and terrorising someone through blackmail. Even if a fake can eventually be proven false, as David Doermann of the Artificial Intelligence Institute points out, “A lie can go halfway around the world before the truth can get its shoes on.” It’s important not to fearmonger, but it’s also urgently important that we start engaging with discussions about deepfakes before they become a regular fixture nestled amongst our newsfeeds.

We know that AI-driven solutions aren’t going to save us; they’re the exact reason we’re in this mess. As deepfakes chip away at reality and shade grey the border between fact and fiction, critical thinking skills should become our first line of defence against the onslaught of misinformation coming our way. Our education system can help promote this, but ultimately it comes down to us all taking everything we see with a hefty pinch of salt. With April Fools’ Day approaching, I’m reminded of how it’s often said that it’s the one time of year people critically evaluate news articles before accepting them as true. With deepfakes on the scene, that kind of vigilance might be required year-round if we’re ever to have a chance of piecing our shattered reality back together.

Words by Charlie Forbes

Artwork by Katy Bremner (instagram: @katsbrems)

Sources, Further Reading and Other Resources:

The Guardian – ‘“I don’t want to upset people”: Tom Cruise deepfake creator speaks out’
Chris Ume – ‘The Chronicles of DeepTomCruise (Viral Tiktok Videos)’
WIRED – ‘The problem with deepfakes? People don’t care what’s real’
openDemocracy – ‘Deepfakes: The hacked reality’
Tech Monitor – ‘The Deepfake Threat’
Jan Kietzmann, Adam J. Mills & Kirk Plangger – ‘Deepfakes: perspectives on the future “reality” of advertising and branding’
Valarie Schweisberger, Jennifer Billinson & T. Makana Chock – ‘Facebook, the Third‐Person Effect, and the Differential Impact Hypothesis’
The Atlantic – ‘The Era of Fake Video Begins’
The Guardian – ‘Facebook refuses to delete fake Pelosi video spread by Trump supporters’
Al Jazeera – ‘Are deepfakes breaking our grip on reality?’ (video)
ExtremeTech – ‘How Deepfakes and Other Reality-Distorting AI Can Actually Help Us’
Accenture – ‘ALTERED REALITY: Staying Ahead of Manipulated Content’
Futurism – ‘If DARPA Wants To Stop Deepfakes, They Should Talk To Facebook And Google’
BBC News – ‘Fake Obama created using AI video tool’
Brookings – ‘Is seeing still believing? The deepfake challenge to truth in politics’
MIT Technology Review – ‘The US military is funding an effort to catch deepfakes and other AI trickery’
The Guardian – ‘What are deepfakes – and how can you spot them?’
Motherboard – ‘AI-Assisted Fake Porn Is Here and We’re All Fucked’
The Guardian – ‘“It’s the screams of the damned!” The eerie AI world of deepfake music’
Vocal Synthesis – ‘Jay-Z raps the “To Be, Or Not To Be” soliloquy from Hamlet (Speech Synthesis)’
