Pandora’s Bots: Cheaper than Free Speech?

What bots can teach us about free speech

Designed by Kaylie Mings

In the land of the free, speech is king. From flag waving to flag burning, the American First Amendment offers the strongest protections worldwide for citizens to participate in democratic deliberation without fear of government censorship. Despite America’s fabled tradition of permissiveness towards free speech, the public spaces of 2020 — that is, digital forums and social media platforms — leave much to be desired as sites of political dialogue. As new technologies mediate our civic life, we must also wonder how our ideas of free speech offline should translate to emerging digital spaces. Do traditional speech theories hold up against new speech technologies?

Nowhere is this question more salient than in recent issues concerning the speech of bots. These talkative automatons have mounted a stunning ascent in recent years, inundating social media feeds, peddling questionable information, and even tampering with our government. One particularly egregious case of the latter occurred in the summer of 2017, when Americans appeared to mobilize en masse in support of the Federal Communications Commission’s repeal of net neutrality, a regulation that requires internet service providers (ISPs) to treat all websites and online services equally with regard to connection speed and access.1 Out of the 22 million comments submitted to the FCC, nearly all were fervently anti-net neutrality.2 For such a polarizing debate, the strikingly uniform consensus of the FCC comments came as a surprise to spectators on both sides of the aisle. So too did the discovery that 80% of these Americans did not exist.3

A substantial proportion of the comments in the FCC case were submitted not by concerned citizens but by unthinking bots.4 What appeared to have been a genuine uprising of public opinion, fueling the most voluminous regulatory comment period to date — more than all previous periods from all government agencies combined — was in reality a cause for genuine democratic concern. Such behavior is unfortunately not limited to a single incident. From Russian tampering in the 2016 United States presidential elections5 to claims that nearly half of the accounts tweeting about COVID-19 are controlled by algorithms,6 bots are staking a controversial position in the ever-expanding digital public sphere.

Unsurprisingly, government regulation of bot speech online is a point of contention, one that serves as a proxy battleground for fundamental tensions in our understanding of the value of free speech. Should our speech protections — long heralded as fundamental to life, liberty, and the pursuit of happiness — apply to bots? The dilemma of bot speech is an experiment that probes the hidden ambiguities surfaced by the interplay between new technologies and traditional theories. How we respond to such tensions holds vital implications for our understanding of speech in the 21st century.

During the expansion of the internet in the 1990s, its early proponents such as John Perry Barlow promised the advent of a utopian democracy: a liberated realm of open dialogue, networked communities, and grassroots organizing free from the “weary giants of flesh and steel.”7 As a decentralized architecture, the internet would, by its very design, dissolve the oppressive and hierarchical authority present in the physical world. Anyone who has glanced at political discussions on Twitter, Facebook, or Instagram in the past few months understands that this envisioned future has not come to fruition: our digital public sphere is perhaps best described as barrelling towards dystopia. What Barlow and other early internet advocates overlooked, and what automated dialogue from bots now makes unavoidable, is the new role that speech plays in the digital context.

The classical liberal speech theories that undergird American speech doctrine were proposed during an era when speech was scarce and attention was abundant. John Stuart Mill published his renowned defense of free speech On Liberty in the speech conditions of the mid-19th century.8 During Mill’s day, sharing one’s opinion required first being literate and then being resourceful: public speech typically required oratorical demonstrations in the town square or eloquent injunctions in print pamphlets. The bottleneck for deliberation was the scale and speed at which political dialogue could disseminate. Because political speech was relatively limited, literate citizens had time to process and reflect upon divergent arguments. As such, Mill and many of his fellow speech proponents presuppose an information environment where speech, and not attention, is the limiting factor for democratic deliberation.9 In this particular marketplace of ideas, it is the speakers that are rare. 

Today, however, speakers and their speech are a dime a million. Meanwhile, attention has become the hotly contested resource responsible for the rise of trillion-dollar corporations. In Mill’s time, our current informational glut would have been unimaginable due to the difficulty of propagating speech. However, the advent of social media and wireless internet has erased the logistical barriers to speaking, allowing anyone to speak their mind with a single click. The result is an asymmetry between abundant speech and limited attention. Given that attention is already stretched thin by technology, would protecting the free speech rights of bots merely exacerbate an existing information overload?

Perhaps the easy answer would be to eschew protections for bot speech entirely. Why should inanimate algorithms deserve the right to speak? However, denying speech protections for bots is a sweeping act that ignores many benefits that bots offer democracy now and in the future. It is not difficult to imagine bots that offer concise news summaries, artificially-generated art, or even trenchant critiques of government officials. Refusing bots the protection of free speech not only limits the horizon for future innovations but also runs against the ideals of almost all traditional free speech theories. Considering the justifications that classical liberal philosophers like Mill have espoused and that American courts have repeatedly upheld, there is little reason to suspect that bot speech would not receive First Amendment protection. Examining why this is the case reveals deep-rooted judgements about the value of speech, judgements that may be challenged by new technologies.

First Amendment Protections for Bots

The idea of bots receiving First Amendment rights — or any sort of rights — may seem strange, perhaps even amusing. This reaction stems from our intuitive defense of free speech, which rests upon a speaker-based justification. Speaker-based justifications claim that speech is justified because the speaker has a right to speak. Freedom of speech is defensible insofar as it is an exercise of each individual’s right to express their beliefs. This is the most straightforward understanding of speech, and it is the justification that most people rely on when claiming that it is “their right to speak.”10 Understandably then, a speaker-based justification of speech runs into natural resistance when applied to bots. We don’t grant free speech rights to animals and we certainly don’t grant them to inanimate objects, so why should we give them to pieces of software?

While challenges arise when extending a speaker-based justification of free speech to bots, these difficulties remain only theoretical. In practice, American legal doctrine rarely relies on speaker-based theory. Instead, the decisions of our highest courts on questions of speech tend to apply two other theories, neither of which hinges upon the speaker’s right to speak. One justifies free speech by answering the question, “why is free speech good for listeners?”; the other, by answering the question, “why is censoring speech bad?” Both justify speech on grounds independent of who is doing the speaking. Because these theories do not care whether Obama or obama_bot is talking — only that the speech itself could be valuable — they are much more supportive of bot speech than the speaker-based justification.

The first important theory undergirding American speech laws answers why free speech is good on the basis of listeners. Free speech is justified not because the speaker has some right to self-expression, but rather because their speech provides benefits to listeners. This listener-based theory contends that free speech is valuable to citizens because it contributes to their ability to make better democratic decisions. John Stuart Mill’s On Liberty lays out the foundations of the “marketplace of ideas” argument for free speech.11 This classic defense of free speech emphasizes the acquisition of truth by means of public discourse: “when there are persons to be found, who form an exception to the apparent unanimity of the world on any subject… truth would lose something by their silence.”12 It is not speakers who lose when they are stifled, but rather listeners who could have obtained knowledge useful for making democratic decisions. In such a marketplace, what is important is that listeners arrive at a more truthful representation of the world through the free exchange of speech, no matter how radical. If we rely on Mill and his listener-based justification for free speech, it is not hard to imagine the same theories extending to defend bot speech as well.13 Because what is valuable about speech is the information contained in the speech itself, the bot-hood of the speaker is merely incidental and thus entirely irrelevant.

While listener-based theories justify why free speech is valuable, another popular theory — one that is often interpreted as the historical motivation for the First Amendment — justifies free speech by demonstrating why government censorship of speech is bad. This theory, called negative speech theory, is motivated by a preemptive skepticism of government overstep. Negative speech theorists defend people’s right to free speech due to their mistrust of the government’s ability to balance the costs and benefits of censorship.14 Negative theory is concerned with improper governmental motives when it comes to censorship; it assumes the government’s own interests will inevitably and inappropriately contribute to selective censorship of speech. This justification for free speech is remarkably speaker-agnostic, focusing instead on limiting the powers of censors. Ultimately, the negative justification seeks to protect the general system of free speech, making no particular statement about whether that speech should come from bots or humans.

New Tensions in Old Theories

If First Amendment protections revolve around listener-based theory, which focuses on the value of speech to citizens, and negative theory, which is concerned with government censors, then there should be little concern about whether bots or people are doing the speaking. That bots may be perfectly eligible for First Amendment protections should strike us as not only surprising but also concerning.

Bot speech has introduced two capabilities unimaginable in the America of the 19th century: one is the ability to amplify speech at near-infinite scale and the other is the ability to assume a multitude of false identities. When communications were expressed in person, it was infeasible for a single actor or organization to drown out the conversation by sheer force of numbers. Anyone trying to juggle hundreds of thousands of false identities was up to an even more impossible task. Now, however, both of these capabilities are made readily accessible through the employment of bots.

The scale of speech made possible by new digital media and online technologies underscores an “incompletely theorized agreement”15 regarding the value of speech that has historically been obscured by technical infeasibility. Listener-based justifications for speech implicitly adopted some version of “the more speech, the better.”16 However, the coherence of this heuristic is strained by a new tension between the quality and quantity of speech. A more regulated FCC notice-and-comment board, perhaps one that relies on ReCAPTCHAs to filter out bots, would almost certainly elevate the democratic value of the comments on the page. However, it would sit at odds with the verdict of Mill and his classical liberal allies: if truth — and good governance — are to be found in the exchange of ideas, then limiting bot speech seems to actively hinder this mission.

Historically, the democratic value of speech has increased as more citizens, particularly those with marginalized views, have received invitations to the marketplace of ideas. However, with the advent of highly automated bot speech that oftentimes assumes false identities, this laissez-faire market has incurred heavy externalities. Do we stick with the listener-focused playbook and allow bots to run rampant on federal websites? Or do we stray from a literal understanding of traditional theories in order to preserve what some may perceive as their ultimate intent: the quality and value of the political infosphere?

While bot speech can degrade the quality of democratic deliberation, the larger concern regarding bot speech is its ability to drown out the speech of humans: in other words, for speech itself to censor. Under our attention-scarce speech regime, direct censorship often incurs heavy costs, as it may inadvertently draw attention to the censored content. Better to play by the rules of the attention economy and simply flood comments and feeds with a torrent of distractions, relegating any undesirable speech to a de facto digital oblivion without needing the rusty tools of traditional censorship.17 The success of reverse-censorship campaigns, as in the FCC net neutrality case, poses an uncomfortable question that threatens the very core of the negative theory of free speech: if overwhelming quantities of speech can be used to censor, how can we justify free speech by citing the dangers of censorship? When speech becomes censorship, negative theory consumes itself by its own logic. Again, emergent capabilities of modern technologies underscore latent tensions in free speech theory.

Conclusion

When we apply our current speech theories to bots, we are forced to confront the capacity of flooding and misinformation to censor speech and degrade the quality of political dialogue. These troubling outcomes underscore an immediate need to update old ideals for a new digital context. While private platforms like Facebook and Twitter have started to establish bot regulation policies, our federal government has yet to truly grapple with the new realities of the digital public sphere.

Our education, our activism, and our elections are hosted on the internet. We cannot continue to willfully ignore the unique challenges of this new world by retreating into traditional theories for the sake of tradition. In order to maintain the free democracy that Mill imagined, it is up to us, as citizens, to hold our government accountable for updating our ideals of speech. Silence in this, of all topics, would be the worst of all ironies.

Rewired is a digital magazine where technology and society meet. We’re committed to curating stories that amplify diverse perspectives and bridge disciplines.