Cast & Crew Blog

On the Horizon: Deepfake Part One

Written by Michael Consiglio | Jan 25, 2023 9:15:00 PM

You didn’t see what you saw.

Today, it’s not as hard as it should be to imagine a world where our very eyes can’t be trusted. Forget the echo chamber of competing voices vying for your attention and affirmation; we’re talking about technology that can quite literally conjure a fiction from the digital ether. Using these advanced processes, creators can bend perception and reality to their will, and filmmakers are primed to take full advantage of the tools at their disposal. Just as entertainment’s digital landscape was changed by the introduction of CGI, it now faces an upheaval thanks to the impending age of the deepfake.

What is a deepfake, exactly? Broadly, the term refers to an AI-based process that combines existing media into something new. Sound complicated? In practice, it means an AI studies a subject’s visual and audio elements and learns how to recombine them into a new, seamless experience. A human experience. This can include superimposing one person’s face onto another person’s body or manipulating the sound of a person’s voice to emulate a president or movie star. Really, the application is what you make of it. In recent years, deepfakes have become so convincing that it is increasingly difficult to tell them apart from authentic native content; discerning reality from fakery has become a complicated task. In the end, deepfake creators are trying to create a realistic human exchange that isn’t what it seems to be. The concept is bristling with creative opportunities, yet fraught with complications and some dangerous implications.

Synthetic media and composite imagery have been around since the twentieth century, but current methods of deepfake creation trace back to the 1990s and the development of artificial intelligence (AI) technologies and deep learning (DL). Deep learning, a branch of machine learning (ML) and thus of artificial intelligence, is a process in which a computer acquires knowledge in a way that loosely emulates human learning; it’s an important element of data science used for predictive modeling and generating statistics. Most commonly, deepfake technology has been used to emulate public figures and celebrities, as there is a wealth of audio and visual data featuring them in the public space. The process uses AI and ML to sift through hundreds or thousands of minutes of content, collecting huge amounts of data on a subject, such as their appearance, voice, and movement. Computers can then interpret that information into something new: the resulting data set is used to overlay the synthesized face (known as a “wrapper”) onto the body of a different individual. The technique is good enough that it can often recognize and reproduce the subject’s common expressions. Is the process flawless? Not exactly. But to the common eye, it doesn’t necessarily need to be. As the tech advances and deepfakes find their way into more aspects of everyday life, however, it’s going to be harder to tell native videos apart from those created on a computer.
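For the technically curious, here is a minimal sketch of the shared-encoder, dual-decoder arrangement popularized by open-source face-swap tools, which matches the pipeline described above. Everything in it is illustrative and assumed for this example (the PyTorch framing, layer sizes, and the random tensors standing in for aligned face crops), not any production tool’s actual implementation:

```python
# A minimal sketch of the classic face-swap autoencoder: one shared
# encoder learns pose/expression, and a separate decoder per identity
# learns how to render each specific face. All sizes are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific identity's face from the latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random stand-ins for batches of aligned face crops of persons A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    # Each decoder learns to reconstruct only its own identity.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode person A's performance, decode it as person B.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))  # B's face, A's expression/pose
```

The trick is that both identities share one encoder, so the latent code ends up capturing pose, lighting, and expression rather than identity; swapping in the other person’s decoder at inference time is what performs the face swap.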

But, hey … seeing is believing. Just watch these videos of “Tom Cruise” (@deeptomcruise) on TikTok. (I know. Your mind is blown. Just remember, this is not real. Nope. That is not Tom Cruise, even though it looks exactly like him. Repeat it again and again: That’s not Tom. That’s not Tom. That’s not T—)

At this point, the conversation around deepfakes is a bit complicated, as the technology has something of a baked-in reputation for a variety of reasons. As with any innovation, a lot depends on how people use the process. Let’s start with the good. Deepfakes and the deepfake process are finding a footing in the entertainment industry after years of CGI work that has left audiences wanting. Talented creators have long been attempting to repurpose and reconfigure onscreen performances (with mixed results). One of the most common techniques has been de-aging, a sort of digital facelift for actors of a certain age. Martin Scorsese’s The Irishman used infrared imaging to de-age its older actors without face markers, and, as with deepfakes, past performances were used as reference when crafting each actor’s younger visage. The Mandalorian made waves with a scene featuring a young Luke Skywalker, which shaved decades off actor Mark Hamill’s face; in that instance, the CGI process was not entirely well received. Interestingly, a fan going by the name “Shamook” recreated the scene with deepfake technology in December of 2020, posting the video online and garnering 3 million views (plus a heap of praise from fans) in the process. Ghostbusters: Afterlife even used CGI recreation tech to bring departed actor Harold Ramis back to the silver screen, building an age-appropriate digital double from old footage. While these innovative techniques show promise, it is still remarkably hard to fool the human eye; the conversation around these sequences tends to note their awkwardness and flaws. Deepfake techniques could change all of that.

In many ways, the gaming world has been waiting decades for technology like this to become ubiquitous. While it’s been possible to create an accurate avatar or facsimile of yourself for some time, deepfake tech offers diligent gamers the opportunity to truly inhabit fictitious worlds, putting their face on any digital character they want (imagine seeing yourself as Han Solo!). And the cinematic tide is certainly shifting in deepfake’s favor as well. Shamook, the digital creator who “fixed” Luke Skywalker’s Mandalorian cameo, was actually hired by Lucasfilm, and the character’s subsequent appearance in The Book of Boba Fett was a marked improvement. Meanwhile, South Park creators Trey Parker and Matt Stone have opened their own deepfake production studio, Deep Voodoo. Before going viral with the pandemic-era video “Sassy Justice,” the duo was reportedly developing a feature film that would “star” a completely deepfaked Donald Trump, though production stalled during lockdown due to the project’s timely nature. They kept their studio going, however, after securing $20 million in funding. In the documentary space, Morgan Neville’s project about the late Anthony Bourdain features several lines of AI-generated dialogue recreating the chef’s recognizable voice. And after Val Kilmer lost his voice during his battle with throat cancer, AI helped bring it back for the film Top Gun: Maverick.

Deepfake technology is continuing to make its mark on the small screen beyond Luke Skywalker’s appearances on Disney+. The startup Metaphysic recently made a splash on the popular television competition America’s Got Talent by emulating the show’s hosts and educating millions of viewers about the technology behind creations like its popular Tom Cruise deepfakes. Creative types on the internet are also enjoying the possibilities of fan casting their favorite projects however they see fit. Sure, it’s fun to insert Harrison Ford into Solo: A Star Wars Story, but wouldn’t you like to see him in Gone with the Wind? One deepfake creator even managed to insert iconic TV star Lynda Carter into the latest Wonder Woman movie, setting the geek world on fire. And after a video featuring a completely constructed Morgan Freeman resurfaced on Twitter, it’s hard to imagine any limit on what filmmakers might be able to do in the future, especially as demand and interest grow. In the independent filmmaking space, deepfake tools provide democratization and accessibility, with the technology readily available online. In fact, many small-scale creators are driving adoption, posting videos on YouTube and social media. Some do fan casting, others focus on de-aging existing clips, but these independent creators are pushing the technology forward while teaching themselves how to incorporate top-tier VFX techniques into their work.

Beyond its creative applications, deepfake tech is demonstrating the potential to reimagine production processes and life on set. Producers might, for example, use digital constructs to address scheduling conflicts with actors who can’t return for reshoots or additional dialogue recording, which can greatly reduce costs and improve a project’s health. Producers of 2017’s Justice League famously spent $25 million on reshoots and additional photography, including the digital removal of actor Henry Cavill’s mustache. An effect that went over … you know … not so well. Deepfake tech could have saved a significant amount of money and produced a much more creatively satisfying result. There are even potential marketing applications being considered: opportunities to personalize film trailers in ways that engage audiences more directly. Take startup D-ID’s work on the film Reminiscence, where deepfake tech placed you, the audience member, into the trailer. Yes, there’s money in magic. And while deepfake-specific numbers are hard to come by, the adjacent deep learning market is showing considerable signs of growth: in 2021, that market (including hardware, software, and services) was valued at $34.8 billion and expected to grow to $49.6 billion in the near future. By 2030, it is projected to reach a whopping $524.92 billion, with a CAGR of 34.3% from 2022 to 2030.
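Those projections do hang together arithmetically. As a quick sanity check, here’s the compound-growth calculation, assuming (as the quoted 2022-to-2030 CAGR window suggests) that the $49.6 billion figure is the 2022 baseline:

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years.
# Assumption: the $49.6B figure cited above is the 2022 baseline for
# the quoted 34.3% CAGR over 2022-2030.
base_2022 = 49.6              # deep learning market size, in $B
cagr = 0.343                  # 34.3% compound annual growth rate
years = 2030 - 2022           # 8 compounding periods

projected_2030 = base_2022 * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030:.2f}B")  # -> $524.92B
```

The calculation lands on the cited $524.92 billion, which is what makes that reading of the baseline plausible.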

Of course, every coin has two sides. Over the last few years, political discourse and societal debates have brought concerns about deepfake technology to the forefront. For all its amazing creative applications, is it more likely that those with malicious intent will use this innovative process to sow the seeds of discord? Find out more in part two of our look at deepfake.