Encoded Bias: How Pretrial Risk Assessment Algorithms Can Entrench Prejudice

Designed by Rachael Quan

Pretrial Risk Assessment Algorithms can digitize bias within the criminal justice system, further exacerbating racial and socioeconomic inequities.

Content warning: The introduction of this article mentions suicide.

In 2010, a Black teenager named Kalief Browder was falsely accused of stealing a backpack, with no physical evidence to support the accusation. He was sent to Rikers Island with bail set at $10,000. Unable to pay and unwilling to plead guilty, Browder spent three years in pretrial detention before the charges were dropped. Those years took a heavy mental toll. Aside from losing crucial years of adolescent development, he faced brutal treatment and abuse at the facility and was denied the mental health assistance he needed. After attempting to take his own life multiple times, he tragically died by suicide in 2015.

Browder’s story shows just how profound an impact pretrial detention can have on a person’s life—and why we must exercise the highest level of discernment when deciding whether to detain defendants pretrial, and how that decision is made.

Browder’s experience with pretrial detention centered on the cash bail system. 1 Simply put, defendants can leave pretrial detention only if they have enough funds to post bail. Today, the criminal justice system is starting to phase out cash bail and instead use algorithms to determine whether a person should be detained or released pretrial. Many states have opted to build their own algorithms, but there are also leading nationwide assessments, such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions).

But what do these algorithms entail? Pretrial Risk Assessments (PTRAs) estimate the likelihood of recidivism, or the chance that a person will re-offend. One such tool analyzes an individual’s age, history of misconduct, neighborhood, and more to produce three risk scores: the risk that the individual will commit any new offense, commit a violent offense, and fail to appear in court. This means that a person with a prior criminal record would be rated higher risk than a first-time offender.

The algorithm then converts these risk scores into a release-condition recommendation. A higher score indicates the need for stricter release conditions, and vice versa. The judge can override the recommendation if they deem the score unfit for the individual. While the algorithms are pitched as purely objective and free from human bias, the reality is not as rose-tinted.
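To make the mechanics concrete, below is a deliberately simplified, hypothetical sketch of how such a tool might turn a defendant’s attributes into risk scores and a release recommendation. Every feature name, weight, and threshold here is invented for illustration and does not reflect COMPAS or any actual instrument, whose internals are proprietary.

```python
# A deliberately simplified, hypothetical sketch of a pretrial risk assessment.
# Every feature name, weight, and threshold below is invented for illustration;
# it does not reflect COMPAS or any real tool, whose internals are proprietary.

from dataclasses import dataclass


@dataclass
class Defendant:
    age: int
    prior_convictions: int
    prior_failures_to_appear: int
    neighborhood_arrest_rate: float  # a place-based proxy of the kind critics flag


def _clamp(x: float) -> int:
    """Bound a raw score to an illustrative 1-10 scale."""
    return max(1, min(10, round(x)))


def risk_scores(d: Defendant) -> dict:
    """Produce three illustrative scores: any new offense, violent offense, failure to appear."""
    general = 1 + 0.5 * d.prior_convictions + 3 * d.neighborhood_arrest_rate + (2 if d.age < 25 else 0)
    violent = 1 + 0.3 * d.prior_convictions + 2 * d.neighborhood_arrest_rate
    fta = 1 + 1.5 * d.prior_failures_to_appear + d.neighborhood_arrest_rate
    return {"general": _clamp(general), "violent": _clamp(violent), "fta": _clamp(fta)}


def recommendation(scores: dict) -> str:
    """Map the highest of the three scores to a release-condition recommendation."""
    worst = max(scores.values())
    if worst <= 3:
        return "release on recognizance"
    if worst <= 6:
        return "release with supervision"
    return "detain pending hearing"


if __name__ == "__main__":
    # A 19-year-old with no record, living in a heavily policed neighborhood.
    d = Defendant(age=19, prior_convictions=0, prior_failures_to_appear=0,
                  neighborhood_arrest_rate=0.8)
    scores = risk_scores(d)
    print(scores, "->", recommendation(scores))
```

Notice that even in this toy version, a first-time defendant with no record is pushed into the “supervision” tier purely by his age and a place-based variable—exactly the kind of proxy effect the concerns below describe.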

Algorithmic bias stems from a variety of factors. Primarily, the accuracy of an algorithm depends on its training data: flawed datasets produce flawed algorithms. America’s sinister history of profiling minority communities, preventative policing, and mass incarceration is reflected in that data—and increases the chance that a person of color is detained pretrial. Inadequate racial representation also decreases the nuance in risk calculations. For example, training sets for facial recognition technology predominantly feature White and male faces, a skew that has driven error rates as much as 34 percentage points higher for darker-skinned women than for lighter-skinned men. 2
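The claim that flawed data produces flawed algorithms can be made concrete with a toy simulation. The numbers below are entirely synthetic and assume a hypothetical scenario: two groups that re-offend at the same true rate, one of which is policed far more intensively. Because the label a model typically learns from is an arrest record rather than actual offending, the model “sees” very different rates.

```python
# Synthetic illustration of label bias: a model trained on arrest records
# learns policing patterns, not underlying offending rates. All numbers are
# invented for this sketch.

import random

random.seed(0)


def make_record(group: str) -> dict:
    reoffends = random.random() < 0.30                      # same true rate for both groups
    policing_rate = 0.9 if group == "over_policed" else 0.3
    arrested = reoffends and random.random() < policing_rate
    return {"group": group, "label": arrested}              # the label is an arrest, not an offense


data = [make_record(g) for g in ("over_policed", "under_policed") for _ in range(5000)]

for g in ("over_policed", "under_policed"):
    observed = sum(r["label"] for r in data if r["group"] == g) / 5000
    print(f"{g}: true re-offense rate = 0.30, labeled rate = {observed:.2f}")
# Expected output: roughly 0.27 for the over-policed group and 0.09 for the
# under-policed group, even though both re-offend at exactly the same rate.
```

Any model fit to these labels would score members of the over-policed group as roughly three times riskier despite identical behavior, which is precisely the pattern that historical policing data encodes.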

Additionally, the recidivism “risk factors” the algorithms consider include demographic, socioeconomic, family, and neighborhood variables, which, because of structural inequities, also work against racial minorities. 3 For example, minorities have historically been subject to gentrification and discriminatory housing policies that ultimately affect how risk assessment algorithms evaluate them. Furthermore, having prior convictions perpetuates a dangerous cycle. Beyond counting as an additional risk factor, a criminal record affects employment prospects and, thus, income and housing, all of which feed into the other variables that make one more likely to be detained.

Another area of concern is judicial discretion. In many implementations of risk assessment algorithms, judges are given the ability to overrule the scoring—combining human prejudice and AI bias to multiply the effect on racial minorities. In Kentucky, expanded judicial discretion erased the marginal increase in pretrial release seen previously. A 2019 study found that judges exercised this discretion to set cash bail more often for Black defendants than for White defendants. 4

These concerns are not just theoretical; practical implementations of these algorithms have yielded troubling results. In Lucas County, Ohio, which started using risk assessment algorithms in 2015, pretrial detention rates increased. Fresno County Jail adopted the Virginia Pretrial Risk Assessment and saw its pretrial jail population grow from roughly 1,600 in January 2012 to approximately 2,000 two years later. 5

Beyond contributing to mass incarceration, the overall accuracy of risk assessment predictions is also in question. In Cook County, Illinois, fewer than 1% of people released pretrial were rearrested for a violent crime, despite being rated at high risk of violence by the assessment. 6

As nearly every sector of our society digitizes, it is worth examining whether algorithms have a place in the criminal justice system. I argue not for a blanket ban on such technology, but for a step back into the development process to increase transparency and eradicate sources of bias. The objective of removing human judgment from the decision-making process is valid, but its implementation needs work, because well-developed algorithms can be used successfully. A recent study of New York City found that algorithms could “far outperform judges’ track record”; 42 percent of detainees could have been released without any increase in failures to appear in court. 7

Algorithms have power, and it is up to us to use that power for good. That begins with increased oversight of their development and use. Developers must work to eliminate bias and skew in training datasets, coupled with regular evaluations of accuracy. Currently, many PTRAs are “black boxes”: information about internal software components—such as the weighting of different risk variables—is protected as proprietary trade secrets. This lack of transparency leaves room for concern about the basis of algorithmic decision-making, and so it is imperative that development be subject to public accountability.

Another solution addresses a related issue: the lack of diversity in the tech field. A Columbia University study found that AI bias depends not just on datasets, but also on the diversity of data science teams. While individuals of different genders and races tended to be equally biased, homogeneous teams compounded those biases. 8 This finding shows the necessity of increasing access to the tech field for underrepresented minorities; the makeup of programming teams has direct impacts on larger systems in our society. Beyond hiring diverse programming teams, however, it is also important to seek insight from a broad range of stakeholders—from activists with lived experience of incarceration to professionals in the criminal justice system.

Mass incarceration is undoubtedly one of the biggest domestic issues America faces—its impacts bleed into socioeconomic disparities, public health crises, and more. Kalief Browder’s story is just one of too many. Even a single week in pretrial detention can mean job loss, eviction, loss of child custody, and worsening medical issues. And the implications of risk assessments are not limited to a single instance of pretrial detention; those who are detained are 1.3 times more likely to recidivate. 9 The simple fact that current algorithms exacerbate inequities in our criminal justice system should be enough to make us reconsider their use.

