What do you believe when you can no longer believe your own eyes?
There has been a great deal of debate over the past few years about the moral implications of using deepfake processes for different ends. Beyond the creative potential of the technology that we discussed in part one of our piece, there are also serious indications of danger and potential malfeasance.
Inevitably, this powerful technology raises a host of concerns about privacy and security. There are ways to use it productively, as in the recent HBO documentary Welcome to Chechnya, where deepfake “wrappers” were used to protect the identities of LGBTQIA+ activists facing persecution and possible execution in regions hostile to their existence. However, the rising chorus of concerns is drowning out most optimistic voices. Many unions have already made it clear that they are firmly opposed to the use of deepfakes.
And while this technology benefits from its democratization, its ubiquity also means that anyone (with any intent) can manipulate a version of reality to their own ends. Deepfakes are easy to create once the process is understood, so introducing manipulated sounds, images, and videos into already-contentious situations (such as the current political landscape) can be like tossing a match into oil. Ill-intentioned political advocates can theoretically make their opponents appear to do or say anything they like, harming reputations and spreading false information. With that comes the ability for dishonest people to deny that they ever did or said something that was, in fact, done or said. It’s quite easy to poison the well when most viewers can no longer tell the authentic from the fabricated.
So, how do we counter this trend? Passing laws might be part of the solution. In November of 2020, New York state put a law into effect that bans the unauthorized use of a person’s likeness and characteristics in digital spaces for 40 years after their death, on the grounds that such use would likely lead viewers to assume it was an authorized reproduction. Texas enacted a similar law in 2019, banning the spread of deepfake videos intended to damage political candidates or sway voters within 30 days of an official election. Following Texas’ law, California passed one of its own, expanding the window to 60 days.
While the potential to sway voters is a major quandary, some concerns are more direct. Take basic criminality, for example. Recently, scammers swindled $35 million from a bank by using deepfake AI to clone the voice of an executive and force a fraudulent money transfer. And speaking to humanity’s baser instincts, deepfakes in pornography are already causing great strife. Reporting from Deeptrace Labs showed that 96% of the roughly 14,000 deepfake videos circulating on the internet as of September 2019 were pornographic in nature. These fakeries can cause serious personal, professional, and mental health traumas that can completely upend a person’s world.
The larger question about deepfakes remains: is it ethical to speak for someone else without their consent or knowledge? Should anyone with deepfake technology be allowed to put something into the public square that has the potential to be perceived as authentic when it’s not? From there, the questions come faster and faster. Does that calculus change if the subject is a public figure, or must the veil of privacy be drawn even wider? And what of actors or personalities who are no longer with us? Are the dead up for grabs? Can Cary Grant still star in a 2023 summer blockbuster despite having died in 1986? Do Marlon Brando’s career choices no longer matter to a twenty-first-century filmmaker who wishes to use his likeness to their advantage? Deepfake creators quite literally put words in the mouths of public figures, and surely that must be regulated to some degree.
Cineverse, an up-and-coming streaming network, has tossed around the idea of developing specialized channels for performers like Elvis and Bob Ross, showcasing their existing libraries while also creating deepfake content to supplement the archives. To their credit, Cineverse has made a point of conferring with the people who oversee each performer’s rights and estate. Of course, some performers are likely to embrace this new concept whole hog.
There is also an artistic question to consider: should a piece of work be manipulated by someone other than the artist who created it? Many cineastes were up in arms when George Lucas decided to revisit his classic Star Wars trilogy 25 years ago, arguing that the films were finished works that should not be touched. In that instance, however, it was the films’ creator making the adjustments, so one could argue that it was his prerogative. Deepfake technology can also be used to colorize classic films that were shot in black and white. Do we need a brightly colored version of Psycho, or should Alfred Hitchcock’s intentions be considered? AI can likewise generate subtitles and dub international-language tracks, but shouldn’t the original performers and filmmakers be involved in that process? But you can’t stop progress, as the saying goes, so it’s possible that AI artists will be as common on the movie sets of the future as VFX artists are today.
Will the spread of doctored videos lead to the collapse of society as the masses begin to believe everything they see on YouTube? Probably not. Still, deepfake technology has the potential to alter our very perception of reality at scale.