Deepfakes Versus Democracy

Designed by Crystal Nattoo

Deepfakes can render elections meaningless while also dismantling the accountability and governance structures that are central to a vibrant democracy.

As November draws closer, the threat of disinformation is at the forefront of voters’ minds. Americans are now more concerned about “made up news and information” than they are about violent crime, climate change, racism, and illegal immigration.1

What is disinformation and why is it so alarming?2 Disinformation refers to false information created with malicious intent, usually in the form of news stories, social media posts, and advertisements. Disinformation operations know no boundaries. Over the past few years, we have witnessed Facebook posts fuel the ethnic cleansing of the Rohingya in Myanmar,3 anti-migration advertisements normalize xenophobia in Hungary,4 and Russian operatives cast doubt on the legitimacy of US elections.5 Looking at the disruptive capacity of the present state of disinformation raises a grim question: if disinformation today can be so harmful, what will disinformation operations of tomorrow accomplish?

While it is impossible to sketch out an exhaustive picture of the future of disinformation, the new technologies that have begun to transform our information ecosystem offer a glimpse of what is coming. In this article, I take a look at one such technology that I expect will pose a serious threat to democracy in the coming years.

Real People, Fictional Content

Deepfakes are realistic video and audio files generated through deep learning, a subset of machine learning in artificial intelligence. I will focus primarily on deepfake videos, which are created through face swapping (placing someone’s face onto another person’s body), lip syncing (placing someone else’s mouth or mouth movements onto another person’s face), and re-enactment (manipulating someone’s movements and speech).6 Deepfakes should not be confused with “shallow fakes,” which are made using simpler video manipulation techniques like speed and audio manipulation. The viral video of Nancy Pelosi “slurring” her words was a shallow fake, created by reducing the speed of the original video by 25% and normalizing her pitch.7
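
To get a sense of just how low the technical bar is for a shallow fake, consider the sketch below, which applies the same basic manipulation used on the Pelosi video: it slows a clip to 75% of its original speed while keeping the audio pitch roughly natural. This is a minimal illustration only; it assumes ffmpeg is installed, and the file names are hypothetical.

```python
# Minimal sketch of a "shallow fake," assuming ffmpeg is installed.
# Slows a clip to 75% of its original speed while keeping the audio
# pitch roughly natural (the same basic trick behind the Pelosi video).
import subprocess


def make_shallow_fake(src: str, dst: str, speed: float = 0.75) -> None:
    """Re-encode `src` so it plays back at `speed` times the original rate."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            # Stretch video timestamps: a smaller `speed` means slower playback.
            "-filter:v", f"setpts=PTS/{speed}",
            # Slow the audio tempo without shifting its pitch.
            "-filter:a", f"atempo={speed}",
            dst,
        ],
        check=True,
    )


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    make_shallow_fake("speech.mp4", "speech_slowed.mp4")
```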

It is quite easy to create deepfakes of public figures since there is an abundance of training material. As a result, some of the best known deepfake videos include Mark Zuckerberg boasting about how powerful he is,8 Donald Trump declaring that Jeffrey Epstein did not kill himself,9 and Barack Obama calling Trump a “total and complete dipshit.”10 What’s more, once a controversial deepfake is out there, it spreads like wildfire due to a number of cognitive phenomena that make us more likely to engage with, believe, and share disinformation.11 

The incendiary intersection of deepfakes, disinformation, and democracy is an incredibly timely topic. Deepfake technology is becoming more advanced and accessible at a time when major elections are on the horizon and political tensions are running high, partly as a result of the socioeconomic challenges exacerbated by the coronavirus pandemic. Democracies are under a lot of stress, which makes them an easy target for malicious interference—especially in the electoral process. Americans are aware of this fact and understandably alarmed about the upcoming election: 41% of them believe that the United States is not prepared to keep the presidential election safe and secure.12 Deepfakes will not only intensify this rampant sense of electoral distrust but also imperil the future of democracy. 

Free and Fair Elections

At its core, democracy is a system of government in which people can choose their leaders through free and fair elections; if a country’s elections do not meet these criteria, that country ceases to be a democracy. By threatening the free and fair conduct of elections, deepfakes pose an existential threat to democracy.  

The key electoral freedom at stake when it comes to deepfakes is the freedom to campaign: candidates, parties, and their supporters must be able to campaign freely, and voters must be able to access what they put out so that they can decide whom to vote for on the basis of generally truthful and transparent information. Precisely because they are so convincing, deepfake videos can be used to discredit candidates on false grounds, thereby threatening citizens’ ability to accurately evaluate candidates’ characters and claims. For instance, a video can depict a politician taking bribes from a foreign government or show members of a party fraternizing with extremist groups. Especially when these realistic-looking videos are released in a climate of polarization, confusion, and distrust, they are difficult to refute. Similarly, deepfakes can be used to intimidate or blackmail competitors: a candidate can be told that unless they support a certain policy or drop out of the race, a deepfake sex tape of them will be released. Smear campaigns and coercion facilitated by deepfakes threaten candidates’ ability to campaign freely and meaningfully, voters’ ability to make informed decisions, and the exchange of ideas that is so crucial to free elections.

The impact of deepfakes is not limited to the free nature of elections, as deepfake videos can also violate the fairness of elections, which primarily concerns people’s ability to vote. The electoral consequences of deepfakes extend beyond campaigns and into the voting booth. This new technology can facilitate voter suppression by spreading false information about how and when to vote. We witnessed disinformation target voter turnout in 2016, when posts aimed at Democrats on Twitter encouraged them to vote by text.13 It is not difficult to imagine deepfakes of news anchors or local officials doing the same or telling people that the voting window has been extended. Ideally, people should not just be able to vote; they should be willing to do so. Deepfakes may lower voter turnout by increasing apathy and disillusionment. When people feel unable to differentiate between fact and fiction or understand what they are actually voting for, they can disengage from politics altogether.14 For instance, almost half of Americans have decreased their news intake due to concerns over made-up news.15 The fact that 90% of US adults believe that altered or made-up videos and images engender confusion does not inspire confidence in the future of political engagement.16 At the end of the day, every lost vote chips away at the legitimacy of democracy.

Meaningful Elections and Good Governance

The end of well-conducted elections marks the beginning of a much more existential struggle: remaining a functional democracy that genuinely empowers winners of elections to govern while staying accountable to the people. Unfortunately, history is littered with examples of countries succumbing to democratic backsliding. Deepfakes will contribute to this descent into authoritarianism by exacerbating governance and accountability challenges.

It goes without saying that some degree of polarization is a prerequisite for democracy: if everyone agreed on everything, there would be no need for democratic institutions that facilitate preference aggregation to define a “public will.” However, when a society becomes extremely polarized, the norms and practices that are essential for the smooth functioning of democracy—civility, mutual respect, tolerance, and so on—are ruptured; common ground disappears, legislative gridlock becomes the norm, and politics becomes a hostile zero-sum game between “us” and “them.”17 This creates the perfect opening for malign actors looking to throw democracy into disarray by using disinformation to amplify polarization—an opportunity the Russians were keenly aware of in the run-up to 2016.18 Deepfakes will worsen this dysfunction by making disinformation even more convincing: just imagine the polarizing impact of a deepfake showing a white police officer shooting a black man while yelling racial slurs and declaring the shooting to be retaliation against the Black Lives Matter movement.

Deepfakes can also be used to legitimize and mobilize extreme, conspiratorial groups. In 2016, Edgar Welch walked into the Comet Ping Pong pizzeria armed with an AR-15 because he believed the conspiracy theory that Hillary Clinton and the rest of the “deep state” were running a child sex ring out of the restaurant. The Pizzagate theory was based on WikiLeaks emails that conspiracists believed to be written in code.19 What would have happened if these claims had been accompanied by a deepfake of Clinton confirming the Pizzagate accusations?

More broadly, deepfakes can result in the disappearance of a single agreed-upon reality by lending credence to fringe beliefs that would be dismissed in a regularly functioning, evidence-based marketplace of ideas. The normalization of these niche, factually incorrect positions can affect bigger and more consequential policy areas. As truth loses ground to conspiracy and subjectivism, consensus becomes harder to achieve. For example, no country can address climate change or vaccine-preventable deaths without near unanimity surrounding the existence of these issues. Unfortunately, unfounded points of view disputing the veracity of both of those matters have been gaining traction and may be further legitimized by deepfake videos of prominent scientists arguing global warming is fake or doctors declaring vaccines harmful. Excessive internal discord bolsters the authoritarian “law and order” argument that democracies are fundamentally inefficient and ineffective at “getting things done.”

Not all governance challenges raised by deepfakes are directly related to policymaking. There are also issues rooted in distrust: deepfakes can undermine governance capabilities by reducing the public’s trust in government officials and the information they put forward. One of the government’s essential tasks is the provision of information, which helps societies overcome collective action challenges and makes daily life more predictable. The pandemic underlined the significance of this duty, as countries with the best responses were often the ones whose governments excelled in communicating with the public.20 Leaders who flattened the curve by acting and communicating quickly and decisively were rewarded with skyrocketing approval ratings.21 However, no matter how effectively a government conveys information, deepfakes can undermine the public’s confidence in government messaging as a whole. What would happen to this newfound trust and approval of governing parties if a wave of deepfake videos appeared to show heads of state acknowledging the dangers of COVID-19 in January but deciding to downplay the virus to protect their own interests? The public would be reluctant to believe authorities ever again. This disruption of trust plays right into the hands of cynical anti-establishment autocrats and populists who claim that mainstream politicians are self-interested liars.

A final component of good governance is accountability. Elections are a key accountability mechanism because they give people the ability to express their approval or disapproval of the current government. However, they only happen once every few years; in the meantime, other mechanisms ensure that leaders do not rule unchecked. These accountability structures (also known as checks and balances) allow independent actors like courts and civil society organizations to ensure that the government remains responsive to societal demands. These oversight institutions can find themselves discredited or silenced through the same deepfake tactics used to slander politicians. More controversial organizations like Planned Parenthood (PPFA) are particularly at risk. Just a few years ago, disinformation in the form of heavily edited videos that allegedly featured PPFA executives talking about trafficking fetal tissue went viral;22 if there were a deepfake of the organization’s leaders discussing such matters, PPFA would be subject to vicious attacks and the world would lose a major defender of reproductive rights.

Deepfakes will not only introduce false content to the information environment but also weaken the value of true content. This is particularly true in the case of the press, another crucial check on elected officials. Realistic fabricated videos will make journalists’ lives significantly harder by casting doubt on the authenticity of the tips they receive. Newspapers around the world have made history by releasing damning video evidence of government misbehavior. Just last year, two German media outlets published a video showing Austrian Vice-Chancellor and Freedom Party leader Heinz-Christian Strache discussing exchanging public contracts for campaign support with a woman who identified herself as the niece of a Russian oligarch.23 The release of the video resulted in protests, the collapse of the governing coalition, early elections, and Strache’s resignation from both posts. What would have happened if the recording had turned out to be a deepfake, or if the newspapers had refused to publish it because of that exact concern? Relatedly, deepfakes can make it easier to evade accountability as a result of the “liar’s dividend,” a term that describes people discrediting genuine evidence by claiming that it is fake. We have already seen this happen: in 2017, Trump suggested that the tape of him making vulgar sexist comments was fake.24 The following year, he alleged that the interview where he admitted to firing James Comey because of “this Russia thing” was “fudged.”25 The liar’s dividend will become a more credible defense mechanism as convincing synthetic media becomes more commonplace. In sum, deepfakes can render elections meaningless while also dismantling the accountability and governance structures that are central to a vibrant democracy.

Proposed Solutions

Any solutions that seek to address the harms of deepfakes must do so without sacrificing the benefits of this new technology or imposing an unreasonable constraint on people’s freedom of expression.26 It is important to note that there are deeper, underlying issues that make many democracies particularly susceptible to the nefarious use of deepfakes. There is no silver bullet for extreme polarization or distrust in government; these are profound challenges that will require substantial changes to overcome. In the meantime, we must remain vigilant against threats that can be addressed more rapidly.

Technological problems often require technological solutions, and deepfakes are no exception. To grasp the scale of the deepfake dilemma, we must first be able to detect deepfakes. Over the past few years, tech companies and educational institutions have ramped up deepfake detection research and development, experimenting with AI-based detection systems similar to those used to identify copyright infringement or hate speech.27 Social media companies, which are at the front lines of the fight against deepfakes, have started developing policies around removing and labeling synthetic and manipulated media.28
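
To give a rough sense of what such an AI-based detector can look like under the hood, here is a minimal sketch that fine-tunes an off-the-shelf image classifier to label individual video frames as real or fake. The folder layout, hyperparameters, and training schedule are hypothetical placeholders; production systems add face cropping, temporal modeling, and far larger datasets.

```python
# Minimal sketch of a frame-level deepfake detector (hypothetical setup):
# fine-tune a pretrained ResNet-18 to classify extracted face frames as
# "real" or "fake". Expects frames/real/*.jpg and frames/fake/*.jpg.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("frames", transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real / fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a token number of epochs for this sketch
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```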

While these developments are encouraging, they still face certain limitations, beginning with numbers. Although detection efforts have progressed over the years, they have not kept pace with advances in deepfake technology, as “the number of people working on the video-synthesis side, as opposed to the detector side is 100 to 1.”29 Furthermore, not all companies are on the same page about removing or labeling manipulated media; even if they were, they may be too slow to do so. Concerns over response time have given rise to proposals around dealing with deepfakes during high-stress situations like elections or unconfirmed breaking news. Promising suggestions include the establishment of a multi-stakeholder anti-deepfake SWAT team with a 24/7 hotline and the implementation of firebreaks (similar to media blackouts on election days) during which all uploaded videos are automatically screened for manipulation prior to being published.30 
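
In greatly simplified form, a firebreak of this kind might work like the gate sketched below: while the firebreak is active, every uploaded video is scored by a detector before publication, and flagged uploads are held for human review. The detector call is a hypothetical placeholder, and the threshold is purely illustrative.

```python
# Greatly simplified sketch of an election-period "firebreak": uploads are
# screened for manipulation before they can be published.
# `detect_manipulation` is a hypothetical placeholder for a real detector.
from dataclasses import dataclass

FIREBREAK_ACTIVE = True   # e.g., switched on during an election window
REVIEW_THRESHOLD = 0.7    # illustrative confidence cutoff


@dataclass
class Upload:
    video_id: str
    path: str


def detect_manipulation(path: str) -> float:
    """Return a manipulation score in [0, 1]; stands in for a real model."""
    raise NotImplementedError


def handle_upload(upload: Upload) -> str:
    """Decide what happens to an upload while the firebreak is in effect."""
    if not FIREBREAK_ACTIVE:
        return "published"
    score = detect_manipulation(upload.path)
    if score >= REVIEW_THRESHOLD:
        return "held_for_human_review"
    return "published_after_screening"
```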

The public sector should support technical solutions through investments (e.g., the European Union’s grant to In Video Veritas) and its own contributions, as exemplified by the Department of Defense’s new Semantic Forensics program.31 These efforts should be complemented by media literacy initiatives and legislation. There are barely any laws that directly address deepfakes. Some states have attempted to remedy this legal vacuum by passing legislation targeting the use of deepfakes in high-pressure situations, beginning with elections. California has outlawed the distribution of audio or video files giving a false or damaging impression of a politician within 60 days of an election,32 and Texas has criminalized the creation and distribution of deepfakes with the intent to harm a candidate or influence the outcome within 30 days of an election.33 On the federal level, there are numerous bills seeking to increase deepfake-related research and reporting and one (the DEEPFAKES Accountability Act) that would criminalize the generation and circulation of a deepfake without disclosing its fraudulence.34 It will be interesting to see how these legislative efforts stand up to First Amendment challenges and grapple with issues around attribution and jurisdiction.35 There have also been discussions around amending Section 230 of the Communications Decency Act, which absolves social media companies of legal responsibility for content that users publish on their platforms, to make companies liable for the spread of malign deepfakes unless they have made a reasonable effort to prevent it.36

There are many interesting solutions on the table, each with its own pros and cons. However, one thing is certain: dealing with the constantly evolving challenges raised by deepfakes will require academics, activists, engineers, and legislators alike to innovate and do so quickly and collaboratively.

Conclusion 

Deepfake technology gives people the ability to generate untruthful and deceptive videos with unprecedented persuasiveness, which represents a qualitative leap from the lies and distortions of the current political environment. As John Villasenor explains, “Historically, misinformation in politics involved saying or writing something about someone else. With deepfakes, attackers cause their targets to become agents of their own subversion.”37 The weaponization of deepfakes jeopardizes one of the fundamental institutional requirements of democracy—free and fair elections—and threatens governance and accountability mechanisms necessary for its smooth functioning. More sinisterly, deepfakes have the power to rupture the shared reality that is vital to the achievement of societal peace and progress. The public’s loss of faith in objective truth, which is already underway in too many liberal democracies across the world, does not bode well for the future of democracy. In the words of Hannah Arendt, “[A] people that no longer can believe anything cannot make up its own mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.”38
