Study finds algorithm no better than random people at predicting recidivism

In a study of COMPAS, an algorithmic tool used in the US criminal justice system, Dartmouth College researchers Julia Dressel and Hany Farid found that the algorithm did no better than volunteers recruited via a crowdsourcing site. COMPAS, a proprietary risk-assessment algorithm developed by Equivant (formerly Northpointe), uses answers to a 137-item questionnaire to produce predictions that inform decisions about sentencing and probation. The tool has also faced legal challenge: one defendant argued that the use of an algorithm whose inner workings are a proprietary secret violated due process.

Prior criticisms, such as a 2016 investigation by ProPublica, have sparked debates about how to measure fairness. None, however, tested the crucial claim that the algorithm predicts more accurately than humans would. To test it, Dressel and Farid asked 400 non-expert volunteers to guess whether a defendant would commit another crime within two years, given short descriptions of defendants from ProPublica's investigation in which seven pieces of information were highlighted. On average, the volunteers were right 63% of the time, and 67% of the time when their answers were pooled; COMPAS's accuracy is 65%. Because Equivant does not disclose its algorithm for study, the researchers built their own, keeping it as simple as possible; it reached 67% accuracy even when using only two pieces of information, the defendant's age and number of previous convictions. Other researchers have found similar results.
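To illustrate the kind of stripped-down model the researchers describe, here is a minimal sketch, not their actual classifier: a logistic regression trained by gradient descent on just two features, age and number of prior convictions. The data is synthetic, and all coefficients and the data-generating rule are invented for the illustration; real accuracy figures come only from the study itself.

```python
import math
import random

random.seed(0)

def make_data(n=1000):
    """Generate synthetic (age, priors, reoffended) rows.
    The ground-truth rule here is invented: in this toy world,
    more priors and younger age raise the reoffense probability."""
    rows = []
    for _ in range(n):
        age = random.randint(18, 70)
        priors = random.randint(0, 15)
        logit = 0.25 * priors - 0.05 * (age - 18) - 0.5
        p = 1 / (1 + math.exp(-logit))
        rows.append((age, priors, 1 if random.random() < p else 0))
    return rows

def normalize(age, priors):
    # Scale both features into [0, 1] so gradient descent is stable.
    return (age - 18) / 52, priors / 15

def train(rows, lr=1.0, epochs=300):
    """Full-batch gradient descent on the logistic loss, two weights + bias."""
    w1 = w2 = b = 0.0
    n = len(rows)
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for age, priors, y in rows:
            x1, x2 = normalize(age, priors)
            p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            err = p - y
            g1 += err * x1
            g2 += err * x2
            gb += err
        w1 -= lr * g1 / n
        w2 -= lr * g2 / n
        b -= lr * gb / n
    return w1, w2, b

def accuracy(rows, w1, w2, b):
    correct = 0
    for age, priors, y in rows:
        x1, x2 = normalize(age, priors)
        pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        correct += (pred == y)
    return correct / len(rows)

rows = make_data()
w1, w2, b = train(rows)
acc = accuracy(rows, w1, w2, b)
print(f"two-feature classifier accuracy on synthetic data: {acc:.2f}")
```

The point of the sketch is how little machinery is involved: a linear score over two inputs and a threshold. That so simple a model can match a 137-question proprietary tool is the crux of the study's argument.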

Farid and Dressel argue that the point is not that such algorithms should never be used, but that they must be understood, and required to prove that they work, before being put to use determining the course of people's lives.
tags: criminal justice, COMPAS, algorithms, research, scientific testing, Dartmouth, error rates, prediction
writer: Ed Yong
Publication: The Atlantic