Here’s How to Build Trust in Algorithms

Designed by Emily Zhong

The interdisciplinary proposal to explain algorithms’ decisions has galvanized philosophers, technologists, and policymakers alike

“Someone must have been telling lies about Josef K., he knew he had done nothing wrong but, one morning, he was arrested.” With those haunting words, the ordeal of Franz Kafka’s protagonist in The Trial begins. K. is ushered through an opaque criminal justice system: he is prosecuted for undisclosed charges, his queries run up against a brick wall, and he is eventually sentenced to death, never receiving an explanation of the crime he had supposedly committed.

Replace Kafka’s fictional character with a Wisconsin native, Eric Loomis, and the Kafkaesque system with an algorithm, and one roughly arrives at the problem of algorithmic opacity. In 2013, Loomis was sentenced to prison, the duration based largely on the output of an algorithm. He had pleaded guilty to knowingly fleeing a police officer, and at sentencing, the judge consulted an algorithm that predicted Loomis was at “high risk” of recidivating. Accordingly, the judge pronounced a harsh six-year punishment. Loomis, however, sought clarity. He challenged the decision, seeking to understand the basis for the algorithm’s determination. His petition was denied, and, like Josef K., he was sentenced without explanation.

Cases like Loomis’s are becoming increasingly common. Every day, algorithms make decisions that affect millions of Americans on an institutional scale. They calculate credit scores, make hiring decisions, and predict recidivism in criminal courts. Yet, these systems are effectively black boxes. Many citizens, like Loomis, are in the dark about the basis on which decisions — often highly consequential ones — are made. Society is simply expected to acquiesce to them. 

Such opacity is detrimental to our institutions. First, it erodes citizens’ trust in them. Trust has long been considered a cornerstone of a democratic society; political theorists as far back as Edmund Burke in the 18th century have recognized the necessity of trust in creating a healthy political culture.1 A world where opaque algorithms are institutionalized chips away at this culture, and at cherished democratic values. Second, black-box algorithms may encode systemic bias. They may, for instance, predict radically different outcomes for different groups given the same information — as some criminal risk-scoring algorithms do when they discriminate against people of color.2 Embedded in institutions across America, biased algorithms could replicate their prejudices at enormous scale.

One solution to the problem of opacity has recently gained popularity amongst philosophers, computer scientists, and policymakers alike. The core idea is to explain the grounds on which algorithms make their decisions. The thought is that explaining algorithmic decision-making will increase the transparency of these systems and thereby reinforce trust in institutions, simultaneously providing the opportunity to address bias. 

Yet, this solution raises several important questions. What constitutes a satisfactory explanation? And what must it convey? In short, what is the nature of a good explanation?

***

Philosophers of science, from Karl Popper to Carl Hempel, have thought deeply about the nature of good explanations, and contemporary thinkers have broached this subject in the algorithmic context. The American philosopher Kate Vredenburgh has distinguished one class of explanations applicable to algorithms as “rationale explanations.” This term has been crisply defined by the British Information Commissioner’s Office as: “the reasons that led to a decision, delivered in an accessible and non-technical way.”3 Rationale explanations are best understood as causal explanations. They outline the basis for a judgement, delineating a step-by-step logic connecting the particularities of the case at hand to the outcome. 

Rationale explanations are gaining traction. Facebook and Google both offer versions of these explanations in their “Why am I seeing this ad?” dropdowns. While researching this article, I clicked on a sponsored post by The New Yorker in my Facebook News Feed. I wanted to gauge the quality of Facebook’s explanation and was promptly presented with a list of reasons why I had seen that ad. One bullet read “The New Yorker is trying to reach people, ages 18 and older.” Another informed me that “The New Yorker is attempting to reach people whose primary location is the United States.” (I had specified my birthday and city of residence in my profile, so Facebook connected the dots.) Those bullets provided a rationale demonstrating how I qualified as a member of the target audience.

How straightforward it is to provide a rationale explanation depends upon an algorithm’s complexity. Rule-based systems, referred to as “expert systems” by the AI Now Institute, are relatively amenable to such explanations. In their case, all that is needed is to follow the sequence of “rules” that fired and produced the relevant outcome. Each rule can be treated as a reason for the decision at hand.4 For instance, that I was born in 1997 and that I live in San Francisco together entail that I would be likely to read The New Yorker, as the sketch below illustrates.
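To make the idea concrete, here is a minimal sketch, in Python, of a rule-based targeting decision that records every rule that fires and returns those rules as its rationale. The rules and profile fields are hypothetical, loosely modeled on the ad example above rather than on Facebook’s actual implementation.

```python
# A minimal sketch of a rule-based ("expert system") targeting decision.
# Every rule that fires is recorded, and the fired rules double as the
# rationale explanation for the outcome. Rules and profile fields are
# hypothetical, loosely modeled on the ad example in the text.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    reason: str                      # human-readable rationale
    applies: Callable[[dict], bool]  # test against a user profile

RULES = [
    Rule("The advertiser is trying to reach people ages 18 and older",
         lambda user: user["age"] >= 18),
    Rule("The advertiser is trying to reach people whose primary location is the United States",
         lambda user: user["country"] == "US"),
]

def decide_and_explain(user: dict) -> tuple[bool, list[str]]:
    """Return the decision plus the reasons for it (the rules that fired)."""
    fired = [rule.reason for rule in RULES if rule.applies(user)]
    show_ad = len(fired) == len(RULES)  # show the ad only if every rule fires
    return show_ad, fired

show_ad, reasons = decide_and_explain({"age": 24, "country": "US"})
# show_ad -> True; reasons -> both targeting criteria listed above
```

Because the decision is simply the conjunction of the rules that fired, the explanation falls out of the decision procedure for free.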

But, while rule-based systems explicitly encode decision-making criteria, other classes of algorithms do not. Some statistical systems — neural networks in particular — instead “learn” the relevant criteria from reams of appropriately organized data. In their case, concepts such as “age” or “interest” are not directly specified by the programmer. Instead, the algorithm develops an understanding of them (however primitive) over time, and loosely represents them in a complex internal architecture characterized by millions of parameters. This makes decisions produced by neural networks especially difficult to explain. As the philosopher Kathleen Creel points out, simply examining some line of code will not tell us why a certain decision was made.

Recent technical advances in explainable AI are making neural networks more interpretable, however. The Defense Advanced Research Projects Agency’s (DARPA) “XAI” project, for example, aims to build an “explanation interface” that translates otherwise unintuitive parameter values of neural networks into something more comprehensible.5 Google, too, has begun developing new tools to make complex architectures more transparent.6 If successful, these projects will enable rationale explanations for statistical systems too.
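To give a flavor of what such an interface does at its simplest, the sketch below applies one widely used post-hoc technique: perturb each input feature, ask the opaque model to score the perturbed input, and report how much the prediction shifts. The model and feature names are invented for illustration; this is a toy rendering of the general idea, not the actual DARPA or Google tooling.

```python
# A toy rendering of one common post-hoc explanation technique: perturb
# each input feature, re-query the opaque model, and report how much the
# prediction shifts. Model and feature names are invented for illustration.

def black_box_risk_score(features: dict) -> float:
    # Stand-in for an opaque model (e.g., a trained neural network)
    # whose internals cannot be read off line by line.
    return 0.1 * features["prior_offenses"] + 0.01 * (60 - features["age"])

def feature_attributions(model, features: dict, baseline: dict) -> dict:
    """For each feature, measure how the score changes when that feature
    is swapped out for a neutral baseline value."""
    original = model(features)
    return {
        name: original - model(dict(features, **{name: baseline[name]}))
        for name in features
    }

attributions = feature_attributions(
    black_box_risk_score,
    features={"prior_offenses": 4, "age": 25},
    baseline={"prior_offenses": 0, "age": 40},
)
# roughly {'prior_offenses': 0.40, 'age': 0.15}: prior offenses moved the
# score most, which can then be phrased as a plain-language rationale.
```

The per-feature shifts can then be put into plain language (for instance, “prior offenses contributed most to the elevated score”), which is the kind of translation an explanation interface aims to automate.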

Rationale explanations, however, are only one class of explanation pertaining to algorithms, and Vredenburgh argues they aren’t enough. The problem is that rationale explanations are merely descriptive; they do no more than lay out the reasons behind a certain decision. A good explanation, she argues, must not only provide those reasons but also assert why they are the right criteria for the judgement. In other words, there needs to be an aspect of “normativity” to a satisfactory explanation.

The point is best illustrated by example. Take the case of Loomis and the criminal justice system. A rationale explanation for why Loomis was categorized as a high-risk individual might reference his criminal record. But such a description, by itself, is not a justification. On Vredenburgh’s account, the assessment must also justify the reasons; it must claim something like “A past criminal record is a suitable indicator for recidivism, and given Loomis’s record, the algorithm determined he was a high-risk individual.”8 
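One schematic way to see the distinction is to treat the two kinds of explanation as simple data structures, as in the sketch below; the fields and example text are illustrative and not drawn from any deployed system.

```python
# A schematic contrast: a rationale explanation lists only the reasons
# behind a decision, while a normative explanation also states why those
# reasons are suitable criteria. Fields and wording are illustrative.

from dataclasses import dataclass

@dataclass
class RationaleExplanation:
    decision: str
    reasons: list[str]           # descriptive: what drove the outcome

@dataclass
class NormativeExplanation(RationaleExplanation):
    justifications: list[str]    # normative: why those reasons are apt criteria

rationale = RationaleExplanation(
    decision="high risk",
    reasons=["The defendant has a prior criminal record."],
)

normative = NormativeExplanation(
    decision="high risk",
    reasons=["The defendant has a prior criminal record."],
    justifications=["A past criminal record is a suitable indicator of recidivism."],
)
```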

Philosopher Kate Vredenburgh

Normative explanations are important because they force us to question the criteria for a decision. They also impel institutions to reckon with what’s right. In doing so, such explanations can lead to correctives. Imagine a case in which Loomis had been designated a high-risk individual because he was black. Race is an unsuitable criterion for such a judgement, and those deploying the algorithm — programmers, operators, even judges — would be driven to correct the decision-making criteria. In this way, normative explanations cultivate the values of trust and equity that matter in socio-political contexts far better than rationale explanations do.

***

Algorithmic explainability matters for reasons broader than philosophical or technical ones. For policymakers, explainability is becoming an imperative. In the European Union, the General Data Protection Regulation (GDPR), a comprehensive data-protection and privacy law, enshrines explainability in certain contexts: consumers are granted the right to demand an explanation for decisions made by automated systems. These explainability provisions also apply in the UK under the UK GDPR.

Across the proverbial pond, the debate about explainability is not as advanced, nor as broad. Indeed, much of it has focused on other policy implications of AI. The 2018 California Consumer Privacy Act (CCPA), for instance, widely considered to be modeled on the GDPR, focuses largely on data privacy and security, concerns that have found the limelight partly because of data-hungry AI algorithms. Other laws, like Illinois’s Biometric Information Privacy Act, focus on transparency in biometric data collection and handling. None of these state regulations mandate algorithmic explanation, and the federal government is further behind still.

Regulators have instead focused intensely on the related, but subtly different, issue of algorithmic bias. By far the greatest target of their scrutiny has been facial recognition. Several cities, including San Francisco and Oakland, have banned the use of facial recognition systems by the police and other governmental agencies. These bans are largely driven by research demonstrating that such algorithms discriminate against people of color. A 2018 study by the American Civil Liberties Union (ACLU), for example, found that Amazon’s algorithm falsely identified 28 members of Congress as criminals.7 Many were people of color — including six members of the Congressional Black Caucus, among them the late civil rights leader Rep. John Lewis. Legislation like the Algorithmic Accountability Act of 2019, introduced in the Senate by Senators Ron Wyden (D-OR) and Cory Booker (D-NJ), would mandate fairness checks for automated systems. These developments leave explainability out of the picture.

There are signs that the debate about explainability is starting to sharpen in the U.S., however. The National Institute of Standards and Technology (NIST), a non-regulatory federal agency, recently published a set of principles for explaining algorithmic decision-making.9 The draft document, meant to spark a conversation, outlines several different kinds of meaningful explanation depending upon the situation. In January 2021, the agency organized a workshop to discuss those principles.10 NIST’s initiative, combined with DARPA’s work in XAI, signals that explainability might soon be at the fore of American technology policy.

But at the same time, these developments suggest a departure from the positions of the UK and EU. While Europe has made the right to explanation a matter of law, the American approach does not appear to treat explanation as a fundamental data right; it might best be described as valuing explainability while seeking to avoid any detriment to innovation. That said, technology companies like Google and Facebook have voluntarily provided some explanations of their algorithms’ decision-making. Facebook’s example was described above; the company has even broadened its tool to offer explanations for general News Feed posts (“Why am I seeing this post?”) rather than just advertisements.11

The tension between demanding explanations for decision-making and promoting technological innovation must be carefully navigated. A wrong move could have enormous repercussions either way. But it’s exactly this kind of important, interdisciplinary problem that should galvanize philosophers, technologists, and policymakers to collaborate on solutions. That, and a wariness of building a Kafkaesque world.
