Why The US Justice System Shouldn’t Rely On Algorithms
Source: Care2.com, Steve Williams
The US justice system widely relies on the COMPAS algorithm to predict the likelihood of an offender committing a repeat crime, but new research suggests that not only is the algorithm no more reliable than an untrained human, it also tends to favor white prisoners for parole.
Researchers Hany Farid and Julia Dressel of Dartmouth College carried out the analysis that led to this revelation. To examine the COMPAS algorithm’s worth, they recruited untrained workers through Amazon’s online marketplace, Mechanical Turk. The question was relatively simple: How would the algorithm, crunching its 137 data points, fare against untrained online decision makers?
Using a database of 7,000 pre-trial defendants — which also included information like age, sex and criminal record — the researchers gave the online recruits a short summary of a defendant. They then asked the participants if the defendant was likely to reoffend.
Somewhat surprisingly, the untrained humans were right 67 percent of the time, compared to the 65 percent accuracy of COMPAS — virtually the same. That said, it’s worth keeping in mind that the human decision makers had only seven variables to work with, while the algorithm had 137.
A second analysis suggested that, despite the algorithm’s complexity, a simple calculation using only an offender’s age and their number of prior convictions would give the same accuracy as the COMPAS scores. That’s particularly troubling if judges are using this scoring system in good faith with the notion that the algorithm is crunching complex data.
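To illustrate, a two-feature rule of the kind described above can be sketched as a simple linear score over age and prior convictions. The weights, threshold, and defendant records below are purely hypothetical assumptions for illustration — they are not the model or the data from the Dartmouth study:

```python
# Hypothetical sketch of a two-feature recidivism predictor using only
# age and prior conviction count. Weights and data are illustrative
# assumptions, not the study's actual model or dataset.

def predict_reoffend(age, priors):
    """Linear score: more priors and younger age push toward 'reoffend'."""
    score = 2.0 * priors - 0.1 * age
    return score > 0.0  # True = predicted to reoffend

# Toy evaluation set: (age, prior convictions, actually reoffended)
toy_defendants = [
    (22, 3, True),
    (45, 0, False),
    (30, 1, True),
    (19, 5, True),
    (52, 2, False),
    (27, 4, True),
]

correct = sum(
    predict_reoffend(age, priors) == reoffended
    for age, priors, reoffended in toy_defendants
)
accuracy = correct / len(toy_defendants)
print(f"accuracy on toy data: {accuracy:.0%}")
```

The point of the sketch is not the particular weights but how little information such a rule needs: two numbers per defendant, no 137-variable questionnaire.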
“There was essentially no difference between people responding to an online survey for a buck and this commercial software being used in the courts,” Farid explained. “If this software is only as accurate as untrained people responding to an online survey, I think the courts should consider that when trying to decide how much weight to put on them in making decisions.”
And this isn’t the first time COMPAS has been flagged as a concern.
In an analysis published in 2016, researcher Jeff Larson and his team found that while the algorithm predicted the likelihood of re-offending about evenly among black and white defendants — 59 and 63 percent, respectively — it made errors that appeared to favor white defendants. For example, as the researchers write in an explanation of their analysis, “Our analysis found that black defendants who did not recidivate (reoffend) over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).”
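The disparity described here is a gap in false positive rates: among defendants who did not go on to reoffend, how many were nonetheless labeled higher risk. Using hypothetical group counts chosen only to mirror the 45 percent and 23 percent figures reported above (the real analysis used actual COMPAS scores), the metric works like this:

```python
# False positive rate per group: the share of defendants who did NOT
# reoffend but were still labeled higher risk. The counts below are
# hypothetical, chosen only to mirror the reported 45% vs 23% gap.

def false_positive_rate(misclassified_high_risk, total_non_reoffenders):
    """FPR = high-risk labels among people who did not actually reoffend."""
    return misclassified_high_risk / total_non_reoffenders

# Hypothetical counts of non-reoffenders in each group
fpr_black = false_positive_rate(450, 1000)  # 45% labeled higher risk
fpr_white = false_positive_rate(230, 1000)  # 23% labeled higher risk

print(f"black defendants: {fpr_black:.0%}, white defendants: {fpr_white:.0%}")
print(f"relative disparity: {fpr_black / fpr_white:.1f}x")
```

Note that this disparity can coexist with roughly equal overall accuracy across groups, which is why the 2016 analysis looked at error types rather than accuracy alone.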
The researchers also noted that even when controlling for prior crimes and other characteristics that might sway the results, black defendants were 45 percent more likely to receive higher risk scores than their white counterparts. The pattern held for predictions of violent reoffending as well. That suggests COMPAS was not only perpetuating racial bias, but may also have been under-scoring white offenders who went on to reoffend, including violently.
In this latest analysis, that bias was also evident — even though the algorithm was working exactly as intended. That’s likely due to a problem with the data itself — not in the actual deployment of the algorithm. The fact that black people are notoriously overrepresented in the prison population comes down to complex factors that hinge on racial bias, but the algorithm cannot account for this. Therefore, COMPAS may see those prior convictions and score accordingly, potentially perpetuating that same bias.
Regarding the latest research, Equivant – the company that developed the algorithm – stated that although it takes issue with some parts of the study, the research actually validates the company’s own findings about the technology: “Instead of being a criticism of the COMPAS assessment, it actually adds to a growing number of independent studies that have confirmed that COMPAS achieves good predictability and matches.”
The company has a point here, given that the algorithm is doing precisely what it was designed to do. COMPAS provides a check on human judgement, flagging potential mismatches between the data and a human reading of that data. It is, as proponents argue, meant to be used as a reference point – not a deciding factor in sentencing or evaluating parole.
Proponents also point out that these algorithms have worth in other ways. As the Guardian notes, they are useful for highlighting defendants who may benefit from drug rehabilitation programs, as well as many other characteristics and opportunities that could be missed by human analysis alone. Again, though, the algorithm is meant as a tool, not a decision maker.
And that’s where the main issue seems to arise. Algorithms certainly have a place in making important decisions because, in the best case scenario, they can provide an impartial judgement of the data. However, our increasing reliance on COMPAS and other algorithms is troubling when their predictive power is not guaranteed. And, as this research shows, this technology may be perceived as more insightful than it actually is.
When such decisions are small and relatively trivial, that might not be a big deal. But when release from prison, not to mention someone’s future, is at stake, that’s another story.
https://www.care2.com/causes/why-the-us-justice-system-shouldnt-rely-on-algorithms.html