Maintain a public register of the algorithms used to manage workers

The register must include all algorithms that make management decisions that affect rights at work.


The public register is key to addressing the information imbalance of algorithmic management: it allows workers (and candidates) and their representatives to understand what algorithms are being used and how they work. To achieve this, the register must be written in accessible, non-technical language and kept up to date. It must include a list of all algorithms that affect workers, meaning any algorithm that produces, or is part of a larger system that produces, outputs influencing the conditions under which workers access work, are managed or perform their work, or that influence their working conditions or working status. For each listed algorithm, the following information must be included:


The purpose and design of the algorithm

A short (two- or three-sentence) description explaining what purpose(s) the company uses the algorithm for and why it was preferred to other options.

An overview of the algorithm’s design should also be given, including: what sorts of management decisions the algorithm makes (and whether they are advisory or decisive); whether it relies on neural networks, machine learning, probabilistic functions or another type of logic; what training data was used; and under what circumstances the algorithm is not deployed or has a failsafe.

The relative importance of the algorithm’s inputs and parameters

The register must explain, in an accessible and non-technical way, what data and ratings each algorithm uses to reach decisions. This means providing an easy way to understand how important different inputs and parameters are to different decisions. This could be done in various ways: from a simple rating of ‘high/medium/low importance’ to more specific and granular detail of the weighting given to each input or parameter.
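As a purely illustrative sketch, a disclosure of this kind could also be published in machine-readable form alongside the plain-language explanation; the algorithm name, inputs, sources, ratings and weights below are invented, not drawn from any real register:

```python
# Hypothetical machine-readable companion to a plain-language register entry.
# The algorithm name, inputs, sources, ratings and weights are all invented.
register_entry = {
    "algorithm": "shift_allocation",  # assumed example name
    "decision": "which workers are offered which shifts (advisory)",
    "inputs": [
        {"name": "customer_rating", "source": "in-app ratings",
         "importance": "high", "weight": 0.5},
        {"name": "acceptance_rate", "source": "platform activity logs",
         "importance": "medium", "weight": 0.3},
        {"name": "tenure_months", "source": "account records",
         "importance": "low", "weight": 0.2},
    ],
}

# A register could publish the coarse rating alone ("high/medium/low") or,
# for more granularity, the numeric weight each input carries in a decision.
for item in register_entry["inputs"]:
    print(f'{item["name"]} ({item["source"]}): '
          f'{item["importance"]} importance, weight {item["weight"]}')
```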

As well as explaining how important different parameters and inputs are, the register should also explain the source of each input (for example: from the app, from customers, from the web, from data brokers, or inferred; how long ago it was collected; and whether it was collected while at work) and, in broad terms, how it is calculated. Where inferred parameters (such as a risk score or other profiling parameters) are used to generate a particular algorithmic output, the company should provide a detailed description of the parameter and how it is generated, including the data used to determine its value. Fundamentally, complex inferred parameters should not be used as a means to obscure the source and weight of the data used in algorithms.

Where decisions are based on the personal data or behaviour of workers and/or consumers, this must be made clear, and the source of any personal data used should be set out. The register should also confirm that the algorithm uses only data that is strictly necessary for its purposes, and that it does not use sensitive personal data, emotion recognition or data collected while not at work.

It is possible that AI algorithms will use parameters that are hard to describe in real-world human terms. In such cases, the company must state this and instead thoroughly explain how the tool has been built, and how it monitors and audits the tool’s outputs to ensure they do not result in bias or discrimination. Examples (or statistics) comparing different, but similar, inputs with differing outputs may also be needed to explain which sorts of inputs tend to lead to which sorts of outputs.
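As one hypothetical illustration of such comparative statistics (all figures below are invented), an auditor might compare how often otherwise similar groups receive a favourable output:

```python
# Hypothetical audit check: compare how often similar inputs receive a
# favourable output across two groups of workers. All figures are invented.
def favourable_rate(outcomes):
    """Share of decisions in a group that were favourable (True)."""
    return sum(outcomes) / len(outcomes)

group_a = [True, True, False, True, True, False, True, True]     # e.g. workers in city A
group_b = [True, False, False, True, False, False, True, False]  # e.g. workers in city B

rate_a = favourable_rate(group_a)
rate_b = favourable_rate(group_b)

# A large gap between otherwise comparable groups is a signal to investigate
# the parameters driving the difference, not proof of discrimination by itself.
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, gap: {abs(rate_a - rate_b):.0%}")
```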
 

Human intervention

Where algorithms are used to make decisions in the workplace, there should always be a human checking, or at least able to review, any decision.

The register should specify the roles and responsibilities of these human teams, including the level of decision-making authority oversight teams hold and the training decision-makers have received, particularly in respect of the design and potential impacts of the algorithm. Where applicable, it should break down decision-making authority across the multiple stages of review and oversight. The register should also provide operational information, such as how much staff capacity (in FTE) is dedicated to human review and how long a review is expected to take.
 

Development history, updates, and impact assessments

The company should also state where responsibilities for the development and updating of the algorithm lie, especially where an external supplier has been involved. This does not require identifying individuals, but rather the relevant teams, departments or organisations and the nature of their different responsibilities. A log of updates should also be included.
The register should also specify what consultation, if any, has taken place between the company and workers and their representatives with respect to the design or revision of the algorithm.

With regard to how this information is shared, we encourage the use of a variety of means, such as flow charts, FAQs or short-form videos, to accompany textual explanations. The chosen formats should make understanding how the algorithm operates as accessible and simple as possible for those affected.
 

The examples below (and those accompanying each of the overarching demands) are without prejudice to our position that workers should not be forced to provide access to sensitive data, such as biometric information, while at work without due safeguards; and that they should not be subject to decision-making by opaque algorithms that impacts their working conditions, in particular decisions concerning suspension and termination. This is why we have called for strong international regulation safeguarding against these practices, most recently as part of the ILO's proposed new standard on decent platform work.

Case Scenario 1: Worker identification system

Purpose

Ensure that the person logged in and using the service to work is the person registered for the account, not someone else. This was the preferred solution to keep costs manageable given the high number and high turnover of workers using the platform. A third-party service was selected to avoid developing an in-house solution, which would have had to meet a high standard with regard to potential bias and false positives.

Design

Deterministic system relying on a third-party facial recognition service. The system captures photos of the user’s face and matches them against previously stored photos of the account owner.

Parameters and importance:
  • Biometric data captured in the photo
  • Metadata, including the device used to capture the photo, time and date
  • Previously recorded account information, such as ID and previously captured photos
Human Intervention:

Five members of staff trained to review appeals by workers.

Teams involved and means of contact:
  • In-house Customer Identification team: notified of algorithmic decisions and challenges brought by workers. Can override a decision after investigation. Keeps a record and reports bugs and other issues detected to the development team in charge of the system. Minimum training of 2 weeks required to be part of the team.
  • HR team
  • Third-party FRT development team: maintains and updates the system; informed about bugs and issues
  • Data Protection team: audits the algorithm to ensure the data collected and processed adheres to the Privacy Policy and data protection laws
First deployed

01/03/2022

Last major update

24/07/2023

Impact assessment

Completed on 15/02/2022

Engagement with workers and workers' representatives

None
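For illustration only, the verification flow described in this scenario might look like the following sketch; the `third_party_match` function stands in for the external facial recognition service, and its name, signature and the confidence threshold are assumptions rather than a real API:

```python
# Illustrative sketch of the Scenario 1 verification flow. The third-party
# service, its signature and the 0.90 threshold are hypothetical.
MATCH_THRESHOLD = 0.90  # assumed confidence cut-off

def third_party_match(captured_photo: bytes, stored_photos: list[bytes]) -> float:
    """Stand-in for the external FRT call; returns a similarity score in [0, 1]."""
    return 0.95  # dummy value; in practice the score comes from the service

def verify_worker(captured_photo: bytes, stored_photos: list[bytes]) -> str:
    score = third_party_match(captured_photo, stored_photos)
    if score >= MATCH_THRESHOLD:
        return "verified"
    # Below-threshold matches are not rejected automatically: they are queued
    # for the in-house Customer Identification team, which can override the
    # algorithm after investigation.
    return "referred_for_human_review"

print(verify_worker(b"captured-photo-bytes", [b"stored-photo-bytes"]))  # verified
```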

Case Scenario 2: Account/contract termination

Purpose

Terminate a worker’s contract once a number of criteria are met. The aim of this system is to identify accounts that do not meet the standards set by the company and should be terminated.

Design

Deterministic system that monitors reviews, reports and other parameters to automatically flag workers who reach a defined threshold.

Parameters and importance:
  • Number and quality of reports from clients having interacted with the worker - High
  • Feedback from clients having interacted with the worker - High
  • Number of hours active on the platform - Medium
  • Number of jobs performed by the worker - Medium
  • Geolocation data - Low
Teams involved and means of contact:
  • Engineering team
  • HR team
  • Data Protection team
Human Intervention:

High: a human will always review the decision taken by the algorithm and make the final decision based on both the information provided by the system and their own interpretation of the data that triggered the decision to flag the worker.

Staff allocated:

10 people with specific training

Decisions overturned by human review:

10% from 2021 to 2024

First deployed

25/10/2023

Last major update

14/11/2023

Impact assessment

Completed on 01/02/2022

Engagement with workers and workers' representatives

Yes. Met with Union 1 between November and December 2021. Circulated survey to workers during November and December 2021.
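To make Scenario 2's deterministic design concrete, here is a minimal sketch of a weighted threshold rule; the weights, threshold, signal names and figures are invented for illustration and mirror only the High/Medium/Low ratings listed in the register entry:

```python
# Hypothetical weighted flagging rule for Scenario 2. Weights loosely mirror
# the register's High/Medium/Low ratings; all numbers are invented.
WEIGHTS = {
    "client_reports": 0.30,   # High
    "client_feedback": 0.30,  # High
    "hours_active": 0.15,     # Medium
    "jobs_performed": 0.15,   # Medium
    "geolocation": 0.10,      # Low
}
FLAG_THRESHOLD = 0.60  # assumed cut-off for flagging an account

def risk_score(signals: dict[str, float]) -> float:
    """Each signal is pre-normalised to [0, 1], where 1 is most concerning."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

def flag_for_review(signals: dict[str, float]) -> bool:
    # Flagging never terminates an account by itself: per the register entry,
    # a trained human always reviews the flag and makes the final decision.
    return risk_score(signals) >= FLAG_THRESHOLD

worker = {"client_reports": 0.9, "client_feedback": 0.9,
          "hours_active": 0.2, "jobs_performed": 0.3, "geolocation": 0.1}
print(flag_for_review(worker))  # True -> queued for human review
```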