Pa's story: how a facial recognition system potentially failed to recognise a driver of colour and may have cost him his job

This is the real-life testimony of a UK private hire driver who worked for Uber. It explores the issues that gig economy workers face as a result of the algorithmic management and surveillance utilised by their employers.


Please note that the views expressed in the video are the interviewee's own and do not necessarily reflect the views of PI.

Pa used to work for Uber. After some time, Uber started asking him to submit a picture of himself to the platform to confirm that it was indeed him who had completed the job. At first, these requests only came once a week, but Pa told us that over time he was prompted to send pictures more and more often, sometimes three times a day. Pa said that the prompts would appear even while he was driving, and that he could not see any information on the platform until a picture was submitted, so he would have to stop and park to take it. If the picture was not good enough, it would be rejected, and Pa would have to adjust the lights inside the car and take another one. He said that he spoke to his colleagues, but none of them were asked to provide pictures as frequently as he was. He called Uber and was told everything was normal, so he carried on sending pictures.

One night, after completing a shift, Pa went on the app to check his earnings and found that his account had been deactivated, without notice or prior communication. According to Pa, he then received an email in which Uber alleged that he had allowed someone else to use his account. Pa immediately guessed that it had something to do with the pictures he had been asked to send. In fact, he said he knew from his own research that Uber was using facial recognition technology that struggles to recognise the faces of darker-skinned people, like Pa.

Despite numerous attempts, Pa says he was unable to get his account reinstated. When we interviewed him, he expressed his frustration about how Uber had handled his situation and about the negative implications the deactivation has had. He was very frustrated, as he felt that he was working for an algorithm rather than a real person. All he wanted was a human review of his images, as he felt a real person would be able to identify that the images were indeed of him. However, Uber repeatedly refused to revise its position and did not offer him this option.

Pa’s fight against Uber and its use of facial recognition is ongoing. In his interview with us, Pa also outlined numerous other issues affecting drivers as a result of algorithmic management. For example, he told us that when Uber lowers a driver’s rating, it does not tell the driver why. According to Pa, drivers are not informed of the reasons why customers may have rated them badly, leaving them with neither a way to improve nor an avenue for redress if a bad rating was unjustified. Pa suspects that a bad rating affects whether drivers get jobs, the type of jobs they get and how regularly they get them, but he has never been able to get full clarity on this. In fact, he suspects that even the people he spoke to at Uber do not know. “The people you speak to are not the people who designed the software,” he says.

We reached out to Uber for comment on our interviews. Uber told us that final decisions about drivers' accounts are always made by specialist teams of human reviewers. Uber further stated that its Real-Time ID Check prompts drivers to take a selfie to confirm that they are the same person who went through all the necessary screenings to drive on the platform. The check was created to tackle the problem of account sharing, whereby unlicensed, uninsured drivers fraudulently take private hire trips, in breach of licensing requirements and with clear risks to public safety. Uber stated that the platform does not surface this check if a driver’s speed data suggests they are driving.

According to Uber, if a driver fails the facial recognition check, the images are sent for secondary review by three human reviewers. Certain actions, such as restricting the driver’s further access to the platform, can only be taken if a majority of the reviewers conclude that the photos did not match. Uber also stated that it is aware that facial verification technology has historically worked worse for people with darker skin complexions. However, according to Uber, its internal fairness testing found no evidence that the technology flags people with darker skin complexions more often. You can read Uber's full response here.

This research is a result of a collaboration between Privacy International, Worker Info Exchange, and App Drivers and Couriers Union.