Cast & Crew Blog

On the Horizon: Deepfake Part Two

Written by Michael Consiglio | Feb 8, 2023 6:45:00 PM

What do you believe when you can no longer believe your own eyes?

There has been a great deal of debate over the past few years about the moral implications of using deepfake processes for different ends. Beyond the creative potential of the technology that we discussed in part one of our piece, there are also serious indications of danger and potential malfeasance.

Inevitably, this powerful technology raises a host of concerns about privacy and security. There are ways to use it productively, as in the recent HBO documentary Welcome to Chechnya, where deepfake “wrappers” were used to protect the identities of LGBTQIA+ activists facing execution in certain regions hostile to their lifestyle. However, the rising chorus of concerns is drowning out most optimistic voices. Many unions have already made it clear that they are firmly against the use of deepfakes. Equity, the U.K. performing arts workers’ union, has already launched the “Stop AI Stealing the Show” campaign, positing that the technology could compromise a performer’s ability to earn a living. If AI allows producers to steal, recreate, and manipulate an individual’s unique assets without paying them (or purposefully underpaying them), then seismic conflicts are certainly inevitable. SAG-AFTRA agrees and has taken a firm stand on behalf of actors, arguing that performers must be the ones controlling their own image and how it is used publicly.

And while this technology benefits from its democratization, its ubiquity also means that anyone (with any intent) can manipulate a version of reality to their own end. Deepfakes are easy to create once the process is understood, so introducing manipulated sounds, images, and videos into already-contentious situations (such as the current political landscape) can be like tossing a match into oil. Ill-intentioned political advocates can theoretically make their opponents do or say anything they like, harming reputations and spreading false information. With that comes the ability for dishonest people to deny that they ever did or said something that was, in fact, done or said. It’s quite easy to poison the well when most people trust their eyes and ears. And as these deepfakes become more advanced, it will be increasingly difficult to spot them. Optimistically, knowledge of this fact could give society a much-needed boost in media literacy, much as the public had to learn in the post-Photoshop era that not every photo they see is legitimate. However, the potential destruction of trust in media could hurt the effectiveness of legitimate videos and messaging, which would have a devastating effect on fields like politics.


So, how do we counter this trend? Passing laws might be part of the solution. In November of 2020, New York state put a law into effect that bans the unauthorized use of a person’s likeness and characteristics in digital spaces for 40 years after their death, as such use would most likely lead viewers to assume that it was an authorized reproduction. Texas took aim at elections in 2019, banning the spread of deepfake videos intended to damage political candidates or sway voters within 30 days of an official election. Following Texas’ law, California passed one of its own, expanding the window to 60 days.

While the existential sway of the voting class is a major quandary, some concerns are a bit more direct. Take basic criminality, for example. Recently, scammers swindled $35 million from a bank by using AI voice-cloning technology to impersonate an executive and authorize fraudulent money transfers. And speaking to humanity’s baser instincts, deepfakes in pornography are already causing great strife. Reporting from Deeptrace Labs showed that 96% of the 14,000 deepfake videos circulating around the internet as of September 2019 were pornographic in nature. These fakeries can cause serious personal, professional, and mental health traumas that can completely upend a person’s world.

The larger question about deepfakes remains: is it ethical to speak for someone else without their consent or knowledge? Should anyone with deepfake technology be allowed to put something into the public square that has the potential to be perceived as authentic when it’s not? From there, the questions fall faster and faster. Does that calculus change if the subject is a public figure, or must the veil of privacy be drawn out farther? And what of actors or personalities who are no longer with us? Are the dead up for grabs? Can Cary Grant still star in a 2023 summer blockbuster despite having died in 1986? Do Marlon Brando’s career choices no longer matter to a twenty-first-century filmmaker who wishes to use him to their advantage? Deepfake creators quite literally put words in the mouths of public figures, and surely that must be regulated to some degree.

Cineverse, an up-and-coming streaming network, has tossed around the idea of developing specialized channels for performers like Elvis and Bob Ross, showcasing their existing libraries while also creating deepfake content to supplement the archives. To their credit, Cineverse has made a point of conferring with those who oversee each performer’s copyrights and estate. Of course, some performers are likely to embrace this new concept whole hog. There was some debate as to whether Bruce Willis licensed his likeness to a Russian mobile service for a series of 2021 commercials (the actor claims he did not). Regardless, the ads were made without the actor ever setting foot on a set or in a dressing room. While there is currently no regulatory framework in place to allow people to sell their identity rights outright, performers like Willis can still license them on an individual basis. This opens a whole new world to performers, whether they’re considering retirement or facing issues that hinder their ability to work, by allowing them to cement their legacies and earn continued income.

There is also an artistic question to consider: should a piece of work be manipulated by someone other than the artist who created it? Many cineastes were up in arms when George Lucas decided to revisit his classic Star Wars trilogy 25 years ago, arguing that the work was set and should not be touched. In that instance, however, it was the films’ creator making the adjustments, so one could argue that it was his prerogative. Related AI techniques can also be used to colorize classic films that were shot in black and white. Do we need a brightly colored version of Psycho, or should Alfred Hitchcock’s intentions be considered? AI can also be used to generate subtitles and dub international language tracks, but shouldn’t the original performer and filmmakers be involved in that process? You can’t stop progress, as the saying goes, so it’s possible that AI artists may be as common on the movie sets of the future as VFX artists are today.

Will the spread of doctored videos lead to the collapse of society as the masses begin to believe everything they see on YouTube? Probably not. Still, deepfake technology has the potential to alter the very perception of reality in a scalable way. While it is a natural human inclination to worry about a theoretical world where AI starts to create its own videos to manipulate humanity (sounds like a new Terminator film, huh?), there is also a great deal of potential for the emboldening of positive, creative pursuits. There is a long list of technologies that have entered the cinematic conversation only to become commonplace. Don’t forget, it was once unthinkable that actors would speak in a moving picture! In the end, deepfake technology is a tool—a powerful tool—and its legacy will be what we make of it.