
Cascading Injustice from Algorithmic Risk Assessment

Tim O'Brien - Microsoft and University of Washington


One of today’s most-studied problems in data science and the artificial intelligence (AI) field of machine learning is algorithmic bias, in which systematic errors produce inaccurate and unfair results. In criminal justice, this problem pervades a set of tools called “risk assessment instruments,” which judges, corrections officials, and parole boards use to apply actuarial methods to quantify a person’s risk of failing to appear in court, committing a crime while awaiting trial, engaging in prison misconduct, or re-offending after re-entry into society. In recent decades, the criminal justice process in the United States has seen increasing use of these tools, while at the same time researchers have uncovered empirical evidence of racial bias against Black defendants in the results these tools produce to support consequential decisions.


The issue gained official recognition in 2014, when US Attorney General Eric Holder warned that use of these tools would “exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society”. Two years later, ProPublica published a study showing that a widely used risk assessment tool was twice as likely to falsely label Black defendants as future criminals as it was white defendants, and that white defendants were mislabeled as low risk more often than Black defendants. The source of this bias is well understood: historical patterns in data reflecting unfair treatment of Black defendants by human actors in the criminal justice system are defined mathematically, expressed as rules in the form of algorithms, and then used to predict the likelihood that a defendant with certain characteristics will belong to a certain group (high risk, medium risk, low risk). As a result of this and other studies, the presence of bias in algorithmic tools used in virtually every step of the criminal justice process has become widely understood among researchers and scholars.
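The core of ProPublica’s finding can be expressed as a simple group-wise error-rate comparison. The sketch below (in Python, using toy records and hypothetical column names rather than ProPublica’s actual data or the vendor’s model) shows the kind of false positive rate audit involved: among people who did not go on to re-offend, what share were nevertheless labeled high risk, broken out by group.

```python
# Minimal sketch of a ProPublica-style fairness audit: compare false positive
# rates (people labeled high risk who did not re-offend) across groups.
# The records and column names below are hypothetical, for illustration only.
import pandas as pd

# Toy records: (group, predicted high risk?, actually re-offended?)
records = [
    ("Black", 1, 0), ("Black", 1, 1), ("Black", 1, 0), ("Black", 0, 0),
    ("white", 0, 0), ("white", 1, 1), ("white", 0, 0), ("white", 0, 1),
]
df = pd.DataFrame(records, columns=["group", "predicted_high_risk", "reoffended"])

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of people who did NOT re-offend but were labeled high risk."""
    did_not_reoffend = sub[sub["reoffended"] == 0]
    return did_not_reoffend["predicted_high_risk"].mean()

for group, sub in df.groupby("group"):
    print(group, round(false_positive_rate(sub), 2))
```

On these toy records the rate is 0.67 for Black defendants and 0.0 for white defendants, mirroring in miniature the kind of asymmetry the study reported at scale.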


What is not widely understood is how the pervasiveness of these algorithmic tools throughout the criminal justice process produces systematic errors that amass on top of one another, with a cascading effect that compounds injustice for Black defendants and extends the harm far beyond one’s prison sentence. A synthesis of research and scholarly literature on this phenomenon reveals statistical correlations between biased risk assessment scores and a higher likelihood of pretrial detention for Black defendants, which is associated with longer sentences and higher custody levels while incarcerated, which in turn correlate with a higher likelihood of gang involvement and drug use, and, after release, of homelessness, unemployment, divorce, and reduced income. These cascading effects are irreversible, as there is no feedback loop for correcting errors. The permanent presence of these effects then feeds into any future interaction with the criminal justice process, which often begins with a post-arrest algorithmic risk assessment, after which the compounding of errors and cascading of harm picks up where it left off.
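The absence of a correction mechanism is what makes the cascade self-reinforcing: an early scoring error changes a decision, and the record of that decision raises the next score. The toy Python model below (hypothetical numbers and update rule, not any real instrument) illustrates the ratchet: two defendants with identical subsequent conduct diverge permanently because one starts just above a detention threshold.

```python
# Toy illustration (hypothetical numbers) of compounding without a feedback loop:
# a score drives a detention decision, and that detention raises the next score.

def next_risk_score(prev_score: float, detained: bool) -> float:
    """Hypothetical update: prior detention nudges the next score upward."""
    bump = 0.15 if detained else 0.0
    return min(1.0, prev_score + bump)

def simulate(initial_score: float, encounters: int, detain_threshold: float = 0.5) -> float:
    score = initial_score
    for _ in range(encounters):
        detained = score >= detain_threshold      # the score drives the decision...
        score = next_risk_score(score, detained)  # ...and the decision feeds the next score
    return score

# Two defendants with identical conduct but slightly different starting scores:
print(simulate(initial_score=0.45, encounters=4))  # stays at 0.45
print(simulate(initial_score=0.55, encounters=4))  # ratchets up to the maximum
```

In the sketch, the defendant starting just below the threshold remains there across four encounters, while the one starting just above it is detained each time and climbs to the maximum score, because each detention becomes an input to the next assessment.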


There is a vigorous debate between abolishing these tools and regulating their use. The argument for abolition is premised on the view that a process that is unfair to Black defendants is made only slightly less unfair by algorithmic tools, which pose the additional risk of encouraging automation bias (the human tendency to trust computers over our own judgment). The argument for regulation is premised on the view that the lack of transparency into how these tools work deprives defendants of due process by excluding risk scores from the adversarial system that underpins U.S. law. Because outright abolition is unlikely, a continued push for algorithmic transparency is a pragmatic approach to addressing the inherent injustice these tools have been shown to produce. Legislative remedies are in the works in various jurisdictions, but the urgency to act grows as the true impact of risk assessments becomes apparent amid a national conversation in America about systemic racism and the high-tech tools that enable it.


More broadly, this approach aligns with similar efforts in other domains in which algorithms support consequential decisions, including financial services, healthcare, education, and housing. While there is a long history of prejudice and inequality in America, the current social moment is proving enlightening for millions of Americans and is thus an opportunity to intervene where use of technology in our institutions poses a risk of further harm.

