Facial recognition systems display inherent bias

Examples

By 2016, numerous examples had surfaced of bias in facial recognition systems: they failed to recognise non-white faces, labelled non-white people as "gorillas", "animals", or "apes" (Google, Flickr), told Asian users their eyes were closed when taking photographs (Nikon), and tracked white faces while failing to detect black ones (HP). The consequences are endemic unfairness and systems that demoralise those who do not fit the presumed "standard" user. Possible remedies include ensuring diversity in the face databases used to train these systems and submitting algorithms for outside testing, for example by the National Institute of Standards and Technology. Privacy advocates may prefer that these systems remain inaccurate, but as long as they are deployed, fairness is essential.
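
One concrete form the outside testing mentioned above can take is a disaggregated evaluation: measuring a system's recognition accuracy separately for each demographic group rather than only in aggregate, so that disparities are visible instead of averaged away. The sketch below illustrates the idea; the group labels, data, and field names are hypothetical placeholders, not taken from any real benchmark or from NIST's protocols.

```python
# Minimal sketch of a disaggregated accuracy audit for a face
# recognition system. All records here are invented for illustration;
# a real audit uses large labelled datasets and standardised protocols.
from collections import defaultdict

# Each record: (demographic_group, model_predicted_correctly)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def per_group_accuracy(records):
    """Return recognition accuracy broken down by demographic group."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        if ok:
            correct[group] += 1
    return {group: correct[group] / totals[group] for group in totals}

for group, acc in sorted(per_group_accuracy(results).items()):
    print(f"{group}: {acc:.0%} accuracy")

# A large gap between groups (here 67% vs 33%) is exactly the kind of
# disparity an external audit is designed to surface.
```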

https://motherboard.vice.com/en_us/article/the-inherent-bias-of-facial-recognition

Writer: Rose Eveleth
Publication: Motherboard