Accompany all algorithmic decisions with an explanation of the most important reasons and/or parameter(s) behind the decision and how they can be challenged
Any company using algorithms should accompany all management decisions with a statement of how they were made (for example, ‘fully-automated’, ‘algorithm-supported’ or ‘no algorithmic involvement’). When a human has been involved in a decision, the statement must say who did what, when they did it, and on what information their decision was based. Where a decision has relied on algorithms, workers should be told which algorithm was used, with a link to its description in the public registry, and how to request a human review.

A reason for the decision should always be made available to the worker, with reference to the inputs (including worker personal data) and parameters that were decisive to the outcome or that, if changed, would have resulted in a different outcome. The sources of particular parameters and inputs must also be provided and explained, for example where a decision is based on a customer feedback rating. Reasons given for a particular decision must be specific and personalised rather than wholly generic, and should not be provided in overly technical language (for example, stating that ‘on this date you were expected to make X deliveries, but you only made Y’ rather than ‘your deliveries are slower than expected’).
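By way of illustration only, the sketch below models such a decision notice as a structured record in Python. All field names and values are hypothetical; they show one possible way a platform could assemble the information described above, not a prescribed format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanInvolvement:
    # Who did what, when, and on what information (hypothetical fields).
    reviewer_role: str           # e.g. "Operations specialist"
    action_taken: str            # e.g. "Confirmed automated flag"
    reviewed_at: str             # ISO 8601 timestamp
    information_used: list[str]  # data the reviewer relied on

@dataclass
class DecisionNotice:
    decision: str                # what was decided
    automation_level: str        # "fully-automated" | "algorithm-supported" | "no algorithmic involvement"
    algorithm_id: Optional[str]  # which algorithm was used, if any
    registry_url: Optional[str]  # link to its entry in the public registry
    decisive_inputs: dict[str, str]  # inputs/parameters that determined the outcome
    specific_reason: str         # personalised, non-technical explanation
    review_contact: str          # how to request a human review
    human_involvement: Optional[HumanInvolvement] = None

# Hypothetical example of a specific, personalised reason rather than a generic one.
notice = DecisionNotice(
    decision="Delivery bonus withheld for 12 March",
    automation_level="algorithm-supported",
    algorithm_id="delivery-performance-v2",
    registry_url="https://example.com/registry/delivery-performance-v2",
    decisive_inputs={"expected_deliveries": "10", "completed_deliveries": "6"},
    specific_reason="On 12 March you were expected to make 10 deliveries, but you made 6.",
    review_contact="support-review@example.com",
)
```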
Where an algorithmically generated score was used in relation to a decision, companies should provide workers with the overall distribution of that score: how many workers fall into the low, medium and high risk categories within a given geographic area (for example, the city in which the worker operates). The purpose of this is to provide the context behind decision-making, which in turn upholds algorithmic accountability and enables workers to challenge inaccurate parameters and inputs. This information could be provided as an aggregated percentage of workers with a certain score or rating. Given that this information is likely to change over time, the distribution should be provided to workers at the time they face a particular decision, such as termination. Similarly, the company should also provide information on the prevalence and complexity of any parameters that prompted a particular automated output at the time of the decision. For example, if a company flags a worker on suspicion of GPS spoofing and suspends their account as a result, it should provide information on whether other workers in the same geographical location also reported technical issues relating to the app’s collection of location data.
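As a simple illustration of the aggregated distribution described above, the following sketch shows how the percentage of workers in each risk category within a given city could be computed at the time of a decision. The worker records, categories and city names are hypothetical.

```python
from collections import Counter

# Hypothetical worker risk records: (worker_id, city, risk_category)
workers = [
    ("w1", "London", "low"), ("w2", "London", "medium"),
    ("w3", "London", "high"), ("w4", "London", "low"),
    ("w5", "Manchester", "high"),
]

def risk_distribution(records, city):
    """Percentage of workers in each risk category within one city."""
    in_city = [category for _, record_city, category in records if record_city == city]
    counts = Counter(in_city)
    total = len(in_city)
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

# At the time of the decision, the worker would be shown something like:
# {'low': 50.0, 'medium': 25.0, 'high': 25.0}
print(risk_distribution(workers, "London"))
```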
The purpose of this is not just to address the information asymmetry and allow decisions to be challenged, but also to help workers understand why they are being treated in a certain way. This does not necessarily mean going into the details of the algorithm, but rather providing insight into what change(s) a worker could make to receive a more desirable outcome in the future.
A worker should be able to challenge any decision they think is wrong or unfair. Contact details for a human must be provided for this purpose, as well as information on how to request a review and on which teams have oversight of the algorithm’s outputs.
Case Scenario 1
A driver is refused access to their account after taking a photo of themselves following a prompt by the platform. Where appropriate, the driver should be provided with the key parameters that led to this decision, for example: what existing data the photo was compared against, the match percentage, and metadata (such as the device maker and model).
In this scenario, they should also be informed of the match percentage required for authorisation and of their historical match percentages. The average match percentage of access attempts approved by the algorithm could also be provided.
The worker should be able to contest this decision immediately and have it reviewed by a human, or be provided with an alternative way to verify their identity.
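A minimal sketch of how the disclosures in this scenario could be presented is given below, assuming the platform is willing to expose the verification threshold and the driver’s match history; all figures and field names are invented for illustration.

```python
# Hypothetical identity-verification notice for Case Scenario 1.
verification_notice = {
    "outcome": "access refused",
    "reference_data": "profile photo submitted at onboarding",
    "match_percentage": 71.4,             # score for this attempt
    "required_match_percentage": 85.0,    # threshold needed for authorisation
    "historical_match_percentages": [93.2, 91.8, 90.5],  # driver's past approved attempts
    "average_approved_match_percentage": 92.1,           # across approved attempts generally
    "device_metadata": {"maker": "ExampleCo", "model": "Phone X"},
    "contest_options": ["request human review", "verify identity via alternative method"],
}

# The worker can see at a glance why the attempt failed:
shortfall = (verification_notice["required_match_percentage"]
             - verification_notice["match_percentage"])
print(f"Match was {shortfall:.1f} percentage points below the required threshold.")
```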
Case Scenario 2
A courier is notified that their account has been de-activated. The notification should provide all the relevant information that led to this decision, for example that their conduct was flagged and reported by multiple customers following an identified pattern, and that a human staff member acted on the report. The courier should be provided with the relevant data, including the number of reports, their time and date, the general reason for the reports, the courier’s risk score (where applicable) and the percentage of other workers falling into the same risk category (where applicable).
The notification must also identify any review and oversight team(s) involved in the de-activation determination. This information should include the job titles and relative seniority of any human agents involved in the decision, the length of time each team took to review the de-activation decision, and an explanation of any escalation process between teams (if applicable).
Within the notification, or easily accessible from it, there should be a means to contest the decision, with adequate inputs allowing the worker to provide additional information.
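For this scenario, the sketch below suggests one possible structure for such a notification; every value is invented for illustration and no particular format is prescribed.

```python
# Hypothetical de-activation notice for Case Scenario 2.
deactivation_notice = {
    "decision": "account de-activated",
    "reports": {
        "count": 5,
        "period": "2024-01-02 to 2024-01-16",
        "general_reason": "items reported missing from completed orders",
    },
    "risk_score": "high",
    "share_of_workers_in_same_category_pct": 3.2,  # within the courier's city
    "human_review": [
        {"team": "Trust & Safety (first line)", "job_title": "Support agent",
         "seniority": "junior", "review_time_hours": 2},
        {"team": "Trust & Safety (escalation)", "job_title": "Case manager",
         "seniority": "senior", "review_time_hours": 26},
    ],
    "escalation_process": "first-line flag confirmed, then escalated to case manager",
    "contest": {
        "how": "reply form linked in this notification",
        "additional_information_accepted": ["photos", "delivery logs", "written statement"],
    },
}

# A summary line of the kind that might appear in the worker-facing message.
print(f"Account de-activated after {deactivation_notice['reports']['count']} customer reports; "
      f"you may contest via the {deactivation_notice['contest']['how']}.")
```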
Case Scenario 3 (poor practice)
A driver is assigned a ‘medium risk’ rating by a fraud detection algorithm after 4 failed trips. Following a further 3 failed trips, he is elevated to ‘high risk’ and informed via SMS that he is above the local fraud threshold. The notification includes only the percentage of workers also falling in that category.