Podcast: The end of privacy? The spread of facial recognition
This podcast was recorded before the US Federal Trade Commission told a company called 'Ever' to delete any facial recognition algorithms it had developed without consent.
Facial recognition typically refers to systems that collect and process data about a person’s face. These systems are highly intrusive because they rely on the capture, extraction, storage or sharing of people’s biometric facial data, often in the absence of explicit consent or prior notice.
Facial recognition technology (FRT) can be used to identify, authenticate/verify or categorise an individual. For example, facial recognition may be used by individuals to unlock their devices, authorise payments or sign up for services. This process relies on capturing the facial image of a single individual and comparing it to an existing image that the individual has already provided, in order to verify that it is them requesting access (this is what’s referred to as ‘one-to-one’ matching).
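For readers curious how ‘one-to-one’ matching works under the hood, here is a minimal, purely illustrative sketch. It assumes faces have already been reduced to numeric embedding vectors (real systems derive these with a neural network); the `verify` function, the threshold value and the example vectors are all hypothetical, not any real FRT product’s API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(captured, enrolled, threshold=0.8):
    """One-to-one matching: does the freshly captured face match
    the single image the individual enrolled earlier?"""
    return cosine_similarity(captured, enrolled) >= threshold

# Made-up embeddings for illustration only.
enrolled = [0.1, 0.9, 0.3]
captured = [0.12, 0.88, 0.31]
print(verify(captured, enrolled))  # a close match passes the threshold: True
```

The key point is that the comparison is always against one stored image, so the system only ever answers “is this the person who enrolled?”.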
In the context of policing, FRT may involve the use of cameras which capture individuals’ facial images and process them in real time (Live FRT) or at a later point. The collection of facial images results in the creation of “digital signatures of identified faces”, which are analysed against one or more databases (“watchlists”), usually containing facial images obtained from other sources, to determine whether there is a match (so-called ‘one-to-many’ matching).
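Watchlist matching differs from the unlock-your-phone case because one captured face is compared against many stored ones. The sketch below is again purely illustrative, under the same assumption that faces are represented as embedding vectors; the `search_watchlist` function, threshold and sample data are hypothetical.

```python
import math

def similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def search_watchlist(probe, watchlist, threshold=0.8):
    """One-to-many matching: return the identifier of the best-scoring
    watchlist entry above the threshold, or None if nobody matches."""
    best_id, best_score = None, threshold
    for person_id, enrolled in watchlist.items():
        score = similarity(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical watchlist of enrolled embeddings.
watchlist = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.4],
}
print(search_watchlist([0.88, 0.12, 0.21], watchlist))  # person_a
```

Even this toy version shows why the threshold matters: set it too low and innocent passers-by are flagged as matches; set it too high and the system silently fails, which is part of why the accuracy and oversight of real deployments are so contested.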
The use of FRT by both police and private actors has a seismic impact on the way our society is monitored and policed. The roll-out of such intrusive technology poses not only significant privacy and data protection questions, but also ethical questions about whether modern democracies should ever permit its use.
For example, the radical introduction of FRT will inevitably result in the normalisation of surveillance across all levels of society and accordingly have a "chilling effect" on the exercise of fundamental rights, such as our freedom of expression or our right to protest.
PI is deeply concerned that the use of FRT by both private companies and the police raises significant problems for our individual freedoms. The intrusiveness of FRT and the dangers associated with its potential abuse by the police call for robust safeguards and oversight governing its authorisation and use.
We investigate the creeping use of facial recognition across the world and draw attention to its dangers within our reports.
We work with community groups, activists, and others to raise awareness about the technology and what they can do about it, for example in our Neighbourhood Watched campaign.
We also push national and international bodies to listen to people's concerns and take steps to protect people's privacy. Last year we raised our concerns about the use of FRT and other technologies before the UN Human Rights Committee. We also submitted evidence to the Scottish Parliament Justice Sub-Committee on Policing, which subsequently said that the roll-out of the technology would be 'unjustifiable'.