This UK Government-Funded AI Programme Wants to Make ‘Face Recognition Ubiquitous’ (But Sure, We're Probably Being Paranoid About Face Surveillance)

Universities in the UK and China, the Met Police, and surveillance companies are working on a government-funded programme developing "unconstrained face recognition technology".

Key findings
  • Aiming to fight “crime and terrorism through automatic surveillance”, the programme uses machine learning to capture faces "even without subjects' being aware of being monitored".
  • The Met Police have recently rolled out facial recognition technology, promising to use it in a "targeted" way while insisting that "we want the public to know that we are there".
  • Surveillance companies have shared a "proprietary database" with the researchers, while "face images of Chinese ethnicity" have been collected.

The UK’s Metropolitan Police have begun formally deploying Live Facial Recognition technology across London, claiming that it will only be used to identify serious criminals on “bespoke ‘watch lists’” and in “small, targeted” areas.

Yet, at the same time, the UK’s largest police force is also listed as a collaborator in a UK government-funded research programme explicitly intended to "develop unconstrained face recognition technology", aimed “at making face recognition ubiquitous by 2020”.

The £6.1m programme, which also includes the Home Office, various biometrics companies, and a university in China – home to some of the most pervasive face surveillance in the world – shows how governments are investing in facial recognition technology designed for mass surveillance.

If the creeping use of facial recognition by forces in the UK is normalised and left unchallenged, it will only be a matter of time before such "unconstrained" technology is rolled out on the streets.

The research programme, Face Matching for Automatic Identity Retrieval, Recognition, Verification and Management (FACER2VM), aims to use machine learning to overcome current limitations in capturing face images, such as changing expressions, illumination, or blurring. 

With “people mobility within the country and across borders reaching unprecedented levels”, its researchers argue, “recognising and verifying individuals automatically, based on biometrics, is emerging as an essential requirement”. 

Aiming to fight “crime and terrorism through automatic surveillance”, the project insists that “face biometrics is a preferred biometric modality, as it can be captured unobtrusively, even without subjects' being aware of being monitored and potentially recognised”.

The research is steered by a “team of external experts representing the biometrics industry, government agencies, and potential users of the unconstrained face recognition technology.” Together with the Police and Home Office, other collaborators and project partners include the surveillance companies Digital Barriers, Cognitec, and IBM, as well as the BBC.

UK-headquartered Digital Barriers, which claims to have customers in over 60 countries, has shared a "proprietary database" with the project. A spokesperson at the BBC's research arm confirmed to Privacy International that the broadcaster withdrew almost immediately after the programme's inception. The Met Police have so far not responded to requests for comment.

The programme is due to finish in 2021 and is led by the University of Surrey. 

‘Face images of Chinese ethnicity has been collected’

The programme also collaborates with Jiangnan University in China: researchers are “sharing data and resources” with the university and have “actively contributed” to the creation of the International Joint Laboratory for Pattern Recognition and Computational Intelligence, which facilitates collaborative research between the two institutions.

The programme’s outputs also state that in order to “develop facial biometric technology for unconstrained scenarios based on machine learning and prior knowledge… a dataset of 3D face images of Chinese ethnicity has been collected.”

The biometrics industry relies on such test data to develop capabilities. If this data is not sufficiently diverse, its outputs will reflect this: if, for example, the data is overwhelmingly made up of white people, the algorithms will be better at identifying white people (a dynamic sketched in the toy example below). And this is exactly what is currently happening.
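To make that mechanism concrete, here is a deliberately simplified sketch – our own illustration, not code or data from FACER2VM. It generates synthetic "face embeddings" for two demographic groups, enrols one group with twenty images per identity and the other with a single image, and measures how often a nearest-template matcher picks the wrong person. Every parameter (DIM, N_IDS, NOISE, the Gaussian model) is an assumption chosen for clarity.

```python
# Toy sketch of how demographic imbalance in enrolment data skews
# per-group accuracy. Synthetic "embeddings" only; all numbers are
# illustrative assumptions, not anything from the FACER2VM programme.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8      # dimensionality of the toy embeddings
N_IDS = 50   # enrolled identities per demographic group
NOISE = 1.0  # within-identity variation (pose, lighting, blur...)

def enrol_group(n_images_per_id):
    """Enrol N_IDS identities, averaging n_images_per_id noisy samples each."""
    centres = rng.normal(size=(N_IDS, DIM))  # each identity's "true" embedding
    samples = centres[:, None, :] + NOISE * rng.normal(
        size=(N_IDS, n_images_per_id, DIM))
    templates = samples.mean(axis=1)         # enrolled template per identity
    probes = centres + NOISE * rng.normal(size=(N_IDS, DIM))  # one fresh image each
    return templates, probes

def misidentification_rate(templates, probes):
    """Fraction of probes whose nearest template belongs to the wrong identity."""
    dists = np.linalg.norm(probes[:, None, :] - templates[None, :, :], axis=-1)
    return float(np.mean(dists.argmin(axis=1) != np.arange(N_IDS)))

tmpl_a, probe_a = enrol_group(n_images_per_id=20)  # data-rich group
tmpl_b, probe_b = enrol_group(n_images_per_id=1)   # data-poor group

print(f"well-represented group misidentified:  {misidentification_rate(tmpl_a, probe_a):.0%}")
print(f"under-represented group misidentified: {misidentification_rate(tmpl_b, probe_b):.0%}")
```

The under-represented group's templates are noisier, so its members are confused with one another more often. Real systems inherit analogous disparities from training sets skewed towards one demographic, which helps explain why projects collect demographic-specific datasets like the one described above.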

Conversely, there will be far more misidentification errors when it comes to minority groups, who will then be more likely to be overpoliced by being wrongly stopped and questioned. For example, past facial recognition trials in London resulted in an error rate greater than 95 per cent, even leading to a 14-year-old black schoolboy being “fingerprinted after being misidentified”.
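It is worth being precise about what a figure like this measures. In reporting on these trials, "error rate" has typically meant the share of the system's own "match" alerts that turned out to point at the wrong person, not the share of everyone scanned. A worked example with hypothetical numbers (the 96-in-100 split below is ours, chosen only to show the arithmetic):

```latex
% Hypothetical illustration of the usual definition; not figures
% from any specific deployment.
\[
  \text{error rate}
  = \frac{\text{false matches}}{\text{total matches flagged}}
  \qquad \text{e.g. } \frac{96}{100} = 96\% > 95\%.
\]
```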

Chinese researchers have reportedly tried to overcome a lack of diversity in their data pool by offering mass facial recognition systems to African countries, in order to gain test data capable of training racial biases out of their software.

Ubiquitous

The use of live facial recognition by businesses and government agencies is spreading around the world, most notoriously in China. As Privacy International argued in our submission to the Scottish Parliament’s Justice Sub-Committee, it has offered law enforcement novel forms of surveillance, often deployed arbitrarily or unlawfully, without transparency or proper justification, and in ways that fail to satisfy both international and European human rights law standards.

While police forces in the UK have already deployed the technology multiple times at protests and during large events, the recent announcement by the Metropolitan Police was significant because it signalled not only the force's belief in the technology's effectiveness but also its intention to begin operational use.

Acknowledging the need for ‘the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights’, the force stressed that ‘We want the public to know that we are there’, by publicising details of deployments and presenting ‘clear signage’.

Yet the programme’s explicit aim is to recognise people ‘even without subjects' being aware of being monitored’; research outputs from the programme include papers on detecting ‘spoofing’ (someone trying to pass themselves off as someone else), on deblurring images, and even on recognising individuals by their ears.

Even if the Met’s assurances have been enough to persuade some, it is naive to believe that, once this technology is normalised and accepted, authorities and cheerleaders for ever more surveillance won't be tempted to push for more pervasive monitoring – especially when the technology already exists.

 

Photo by arvin keynes on Unsplash.