UK police forces expand testing of automated facial recognition despite high levels of inaccuracy


In 2018, a report from the Royal United Services Institute (RUSI) found that UK police forces were testing automated facial recognition, crime-location prediction, and algorithmic decision-making systems while offering little transparency about how those systems were evaluated. An automated facial recognition system trialled by South Wales Police (SWP) incorrectly identified 2,279 of 2,470 potential matches, an error rate of roughly 92%. In London, the Metropolitan Police used facial recognition at the Notting Hill Carnival in 2017, where the system was wrong 98% of the time. Although the Met dropped the system for 2018, it and other forces, such as Leicestershire Police, continue to conduct trials, and SWP has tweeted about arrests made using the technology.

Other forces are testing predictive and decision-making systems. Kent Police uses the algorithmic system PredPol to predict where crimes may take place; Norfolk Police are trialling a system that analyses burglary data to advise officers on whether a case merits further investigation; and Durham Constabulary altered its Harm Assessment Risk Tool (HART), a decision-making algorithm, to test whether it was biased against poor people.

The RUSI report concludes that machine learning systems should remain under human oversight to ensure they are accurate and unbiased, and that any irregularities are addressed as soon as they arise.

https://www.wired.co.uk/article/police-artificial-intelligence-rusi-report

Writer: Matt Burgess
Publication: Wired UK