Accompany all algorithmic decisions with an explanation of the most important reasons and/or parameters behind the decision, and of how they can be challenged

On receiving an automated decision, workers should be notified that the decision was made using an algorithm, told which algorithm it was (with a link to its description in the public registry), and told how to request a human review.


A reason for the decision should always be made available to the worker, including data about the inputs (including worker personal data) and the parameters that were decisive to the outcome or that could have resulted in a different outcome. The sources of particular parameters and inputs must also be provided and explained, for example where a decision is based on a customer feedback rating. Reasons given for a particular decision must be specific rather than wholly generic and should not be couched in overly technical language.
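As an illustration only, the sketch below shows one way the contents of such a notification and explanation could be structured. It is a minimal Python sketch; all class names, field names, and values are hypothetical and not drawn from any platform's actual systems.

```python
from dataclasses import dataclass

@dataclass
class DecisiveFactor:
    """One input or parameter that was decisive to the outcome (illustrative)."""
    name: str         # e.g. "customer_feedback_rating" (invented)
    value: str        # the value used at decision time
    source: str       # where the input came from and how it was collected
    explanation: str  # plain-language note on why it mattered

@dataclass
class DecisionNotification:
    """Minimum information accompanying an automated decision (illustrative)."""
    decision: str                           # specific, non-generic reason
    algorithm_name: str                     # which algorithm produced it
    registry_url: str                       # link to the public registry entry
    decisive_factors: list[DecisiveFactor]  # inputs that drove the outcome
    human_review_contact: str               # how to request a human review

# Example instance; every value is invented for illustration:
notice = DecisionNotification(
    decision="Account paused pending review of customer feedback",
    algorithm_name="QualityScore v2",
    registry_url="https://example.org/registry/qualityscore-v2",
    decisive_factors=[DecisiveFactor(
        name="customer_feedback_rating",
        value="3.1 out of 5 (review threshold: 3.2)",
        source="verified customer ratings collected over the last 30 days",
        explanation="A rating of 3.2 or above would not have triggered review.",
    )],
    human_review_contact="reviews@example.org",
)
```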

Where an algorithmically generated score was used in relation to a decision, companies should provide workers with the overall distribution of that score: that is, how many workers fall into the low, medium, and high risk categories within a given geographic area (for example, the city in which the worker operates). The purpose of this is to provide the context behind decision-making, which in turn upholds algorithmic accountability and enables workers to challenge inaccurate parameters and inputs. This information could be provided as an aggregated percentage of workers with a certain score or rating. Because this information is likely to change over time, the distribution should be provided to workers at the time they face a particular decision, such as termination.

Similarly, the company should also provide information addressing the prevalence and complexity of any parameters that prompted a particular automated output at the time of the decision. For example, if a company flags a worker on suspicion of GPS spoofing and suspends their account as a result, it should disclose whether other workers in the same geographical location also reported technical issues relating to the app’s collection of location data.
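By way of a minimal sketch, the function below shows how such an aggregated distribution for one geographic area could be computed at decision time. The function name and band thresholds are assumptions for illustration, not a prescribed method.

```python
from collections import Counter

def score_distribution(scores: list[float],
                       low: float = 0.33, high: float = 0.66) -> dict[str, float]:
    """Aggregate per-worker risk scores for one geographic area into
    low/medium/high percentages. The band thresholds here are invented."""
    if not scores:
        return {}
    bands = Counter(
        "low" if s < low else "medium" if s < high else "high" for s in scores
    )
    return {band: round(100 * bands[band] / len(scores), 1)
            for band in ("low", "medium", "high")}

# e.g. score_distribution(city_scores) might return
# {"low": 81.5, "medium": 14.2, "high": 4.3}
```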

The purpose of this is not just to address information asymmetry and allow decisions to be challenged, but also to allow workers to understand why they are being treated a certain way and what changes they can make to get a better outcome. This doesn’t necessarily mean going into the details of the algorithm, but rather providing insight into what change(s) a worker could make to receive a more desirable outcome in the future.

A worker should be able to challenge any decision they believe is wrong or unfair, and must be given the contact details of a human to approach for this purpose. Workers must therefore be provided with information about which teams have what oversight over the algorithm’s outputs and how those teams can be contacted. This information must also include the job titles and relative seniority of the particular human agents involved in the review and oversight of the decision relating to the worker.
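A minimal sketch of the oversight information this implies, with every name invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class OversightContact:
    """A team or person with oversight of the algorithm's outputs (illustrative)."""
    team: str       # e.g. "Trust & Safety Review" (invented)
    job_title: str  # e.g. "Senior Review Specialist" (invented)
    seniority: str  # relative seniority within the review chain
    contact: str    # how the worker can reach the team or person
```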
 

Case Scenario 1

A driver is refused access to their account after taking a photo of themselves following a prompt by the platform. The driver should, where appropriate, be provided with the key parameters that led to this decision, for example: what existing data the photo was compared against, the match percentage, and metadata (such as device maker and model).

In this scenario, they should also be informed that the elements captured and shared with the third-party service responsible for authentication were not sufficient to confirm their identity.
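Assuming the parameters listed above, a notice to the driver might carry a record along these lines; all names and values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class VerificationOutcome:
    """Key parameters behind a failed photo-identity check (illustrative)."""
    reference_data: str       # what the submitted photo was compared against
    match_percentage: float   # similarity score returned by the check
    pass_threshold: float     # minimum score required to pass
    device_metadata: str      # e.g. device maker and model
    third_party_service: str  # the authentication provider involved

outcome = VerificationOutcome(
    reference_data="onboarding profile photo",
    match_percentage=61.0,
    pass_threshold=85.0,
    device_metadata="Samsung Galaxy A52",
    third_party_service="example third-party ID provider",
)
```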

The worker should be able to contest this decision immediately and have it reviewed by a human, or be offered an alternative way to verify their identity. Should the decision be overturned, appropriate compensation should be offered to make up for the worker’s lost time.

Case Scenario 2

A courier is notified that their account has been de-activated. The notification should provide all the relevant information that led to this decision, for example that their conduct has been flagged and reported by multiple customers following an identified pattern and that a human staff member acted on the reports. The courier should be provided with the relevant data, including the number of reports and the time, date, and general reason for each report.

The notification must also identify any review and oversight team(s) involved in the de-activation determination. This information should include the relative seniority and job titles of any human agents involved in the decision, the length of time each team took to review the de-activation decision, and an explanation of any escalation process between teams (if applicable).
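Drawing the elements of this scenario together, a de-activation notice could bundle the report data and review-team information as in the sketch below; every value is invented for illustration.

```python
deactivation_notice = {
    "decision": "Account de-activated following multiple customer reports",
    "report_count": 3,
    "reports": [
        {"date": "2024-05-02", "time": "18:42", "reason": "item not delivered"},
        {"date": "2024-05-02", "time": "19:10", "reason": "item not delivered"},
        {"date": "2024-05-03", "time": "20:05", "reason": "item not delivered"},
    ],
    "identified_pattern": "repeated non-delivery reports on consecutive evenings",
    "human_reviewer_involved": True,
    "review_teams": [
        {"team": "Trust & Safety Review", "job_title": "Review Specialist",
         "seniority": "first-line", "review_duration": "2 days"},
        {"team": "Escalations", "job_title": "Senior Operations Manager",
         "seniority": "second-line", "review_duration": "1 day"},
    ],
    "escalation_process": "first-line finding escalated to Escalations for final decision",
    "contest_link": "https://example.org/appeals",  # means to contest, with room for worker input
}
```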

Within the notification, or easily accessible from it, should be a means to contest the decision, with adequate inputs allowing the worker to provide additional information.

Case Scenario 3 (poor practice)

A driver is assigned a ‘medium risk rating’ by a fraud detection algorithm after 4 failed trips. Following a further 3 failed trips, they are elevated to ‘high risk’ and informed via SMS that they are above the local fraud threshold. The notification provides no decisive parameters, no score distribution for context, and no route to a human review, leaving the driver with no practical way to understand or challenge the rating.