What Are Deepfakes And Why Should I Worry About Them?
This video is a lighthearted look at what deepfake videos can be. But read on to learn about the potential harm these types of videos can cause. Imagine, for example, receiving a video from a grandchild asking for help: it looks just like the child, but it is a deepfake designed to get you to send money.
What is Deepfake Technology?
Deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they haven’t said or done.
A combination of the phrases “deep learning” and “fake”, deepfakes first emerged in late 2017, powered by an innovative deep learning method known as generative adversarial networks (GANs). In a GAN, two neural networks are trained in competition: a generator produces fake images, while a discriminator tries to tell them apart from real ones, and each network improves by trying to beat the other.
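To make the adversarial idea concrete, the toy sketch below pits a one-parameter-pair “generator” against a logistic-regression “discriminator” on simple 1-D numbers. This is a minimal illustration of the GAN training loop, not a real deepfake system; the target distribution (mean 4.0), learning rate, and step count are all arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Real" data the generator must learn to imitate: samples near 4.0.
def real_samples(n):
    return rng.normal(loc=4.0, scale=0.5, size=n)

# Generator: noise z -> g_w * z + g_b (a tiny stand-in for the deep
# generator networks used in actual deepfake systems).
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression estimating P(sample is real).
d_w, d_b = 0.0, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 32
for step in range(2000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    for x, label in ((real_samples(batch), 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad_logit = p - label            # d(cross-entropy loss)/d(logit)
        d_w -= lr * float(np.mean(grad_logit * x))
        d_b -= lr * float(np.mean(grad_logit))

    # --- Generator update: push D(fake) -> 1, i.e. fool the critic ---
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    p = sigmoid(d_w * fake + d_b)
    grad_logit = (p - 1.0) * d_w          # chain rule through D's logit
    g_w -= lr * float(np.mean(grad_logit * z))
    g_b -= lr * float(np.mean(grad_logit))

# After training, the generator's output mean (roughly g_b) has drifted
# from 0 toward the real data's mean of 4.0: its fakes now "look real"
# to the discriminator.
```

The same tug-of-war, scaled up to deep convolutional networks and millions of face images, is what lets deepfake software produce faces the discriminator — and eventually the human eye — cannot tell from real ones.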
Several deepfake videos have gone viral recently, giving millions around the world their first taste of this new technology: President Obama using an expletive to describe President Trump, Mark Zuckerberg admitting that Facebook’s true goal is to manipulate and exploit its users, and Bill Hader morphing into Al Pacino on a late-night talk show.
The amount of deepfake content on the internet is growing rapidly. At the beginning of 2019 there were nearly 8,000 deepfake videos online, according to deeptracelabs.com; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then.
While impressive, today’s deepfakes can typically still be identified as fake on close inspection. But the technology is improving at a breathtaking pace, and experts predict that deepfakes will be indistinguishable from real images before long. In the months and years ahead, deepfakes threaten to grow from an Internet oddity to a widely destructive political and social force. Society needs to act now to prepare itself.
When Seeing Is Not Believing
Deepfakes first took hold in the darker corners of the web, but their use has begun to spread to the political sphere, where the potential for mayhem is enormous.
It does not require much imagination to grasp the harm that could be done if entire populations can be shown fabricated videos that they believe are real. Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of a president launching missiles against North Korea. The consequences could be catastrophic.
Because of the technology’s widespread accessibility, such footage could be created by anyone: state-sponsored actors, political groups, lone individuals.
This risk is no longer just hypothetical: there are early examples of deepfakes influencing politics in the real world. Experts warn that these incidents are canaries in a coal mine.
Last month, a political group in Belgium released a deepfake video of the Belgian prime minister giving a speech that linked the COVID-19 outbreak to environmental damage and called for drastic action on climate change. At least some viewers believed the speech was real.
The emergence of deepfakes will make it increasingly difficult for the public to distinguish between what is real and what is fake, a situation that political actors will inevitably exploit—with potentially devastating consequences.
“People are already using the fact that deepfakes exist to discredit genuine video evidence,” said USC professor Hao Li. “Even though there’s footage of you doing or saying something, you can say it was a deepfake and it’s very hard to prove otherwise.”
In a world in which seeing is no longer believing, the ability for a large community to agree on what is true—much less to engage in constructive dialogue about it—suddenly seems precarious.
Given that deepfakes are based on AI in the first place, some look to AI as a solution to harmful deepfake applications. For instance, researchers have built sophisticated deepfake detection systems that assess lighting, shadows, facial movements, and other features in order to flag images that are fabricated. Another innovative defensive approach is to add a filter to an image file that makes it impossible to use that image to generate a deepfake.
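As a toy illustration of artifact-based detection, the sketch below flags images whose frequency spectrum carries an unusually large share of high-frequency energy — one of several statistical cues that GAN-generated imagery can leave behind. This is a hypothetical heuristic for illustration only, not the method used by any actual detection product, and the threshold is an assumption (real detectors learn such decision boundaries from large labeled datasets):

```python
import numpy as np

def high_freq_ratio(image: np.ndarray) -> float:
    """Fraction of spectral magnitude outside a low-frequency core.

    GAN upsampling layers often leave periodic high-frequency
    artifacts, so an unusual ratio can be a weak fabrication cue.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    # Low-frequency "core": the central half of the shifted spectrum
    # in each dimension (DC sits at the center after fftshift).
    ch, cw = h // 4, w // 4
    core = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw]
    total = spectrum.sum()
    return float((total - core.sum()) / total)

def flag_suspicious(image: np.ndarray, threshold: float = 0.5) -> bool:
    # Hypothetical fixed threshold; a real detector would combine many
    # such features and learn the boundary from training data.
    return high_freq_ratio(image) > threshold
```

A smooth, natural-looking gradient scores low on this measure, while an image full of synthetic high-frequency texture scores high — which is also why such simple cues fail as generators improve, and why detection remains an arms race.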
A handful of startups have emerged that offer software to defend against deepfakes, including Truepic and Deeptrace.
Looking beyond purely technological remedies, what legislative, political, and social steps can we take to defend against deepfakes’ dangers?
One tempting, simple solution is to pass laws that make it illegal to create or spread deepfakes. But a blanket deepfake ban faces both constitutional and practical challenges. The First Amendment of the U.S. Constitution enshrines the freedom of expression. Any law proscribing online content, particularly political content, risks running afoul of these constitutional protections.
Beyond constitutional concerns, deepfake bans will likely prove impracticable to enforce due to the anonymity and borderlessness of the Internet.
Other existing legal frameworks that might be deployed to combat deepfakes include copyright, defamation, and the right of publicity. But given the broad applicability of the fair use doctrine, the usefulness of these legal avenues may be limited.
In the short term, the most effective solution may come from major tech platforms like Facebook, Google, and Twitter voluntarily taking more rigorous action to limit the spread of harmful deepfakes.
Relying on private companies to solve broad political and societal problems understandably makes many deeply uncomfortable. Yet as legal scholars Bobby Chesney and Danielle Citron put it, these tech platforms’ terms-of-service agreements are “the single most important documents governing digital speech in today’s world.” As a result, these companies’ content policies may be “the most salient response mechanism of all” to deepfakes.
In the end, no single solution will suffice. An essential first step is simply to increase public awareness of the possibilities and dangers of deepfakes. An informed citizenry is a crucial defense against widespread misinformation.
A Post-Truth World
The recent rise of fake news has led to fears that we are entering a “post-truth” world. Deepfakes threaten to intensify and accelerate this trajectory. The next major chapter in this drama is likely just around the corner: the 2020 elections. The stakes could hardly be higher.
“The man in front of the tank at Tiananmen Square moved the world,” said NYU professor Nasir Memon. “Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of not believing what you see is under attack, that is a huge problem. One has to restore truth in seeing again.”
To read the full article by Rob Toews for Forbes.com, with more technical information and real-life examples, click here.
Adapted from an article by Forbes.com; cover image from Yahoo Finance; video from youtube.com