What we need to see: controls on profiling and decision-making


We would like to see a world in which individuals are not subjected to arbitrary, discriminatory or otherwise unfair decisions that they are unable to challenge or correct, and whose grounds and processes they cannot question.

We would also like to see a world in which there are no secret profiles of people, and in which people do not have to fear that their profiles will lead to decisions that limit their rights, freedoms and opportunities.

Individuals should have full access to their data profile. Subject access requests should disclose all personal data, including the categories and profiles applied to an individual as well as derived and inferred data.

Individuals should be able to know when their experiences, from targeted advertising to news to access to services, are being shaped by their data profile, and should be able to object to and shape their profiles.

Protections should be established around group profiling, which often falls outside the scope of data protection safeguards on profiling.

What this will mean

Data protection frameworks around the world need to address the risks arising from profiling and automated decision-making, notably, but not limited to, risks to privacy.

People should know when automated decision-making is taking place and the conditions under which it is taking place, and have the right to redress.

Essential reform actions

Frameworks that have already incorporated profiling and automated decision-making need to make sure that their provisions cover all instances of automated decision-making that are critical to human rights, and account for the fact that different degrees of automation and human involvement can lead to similarly harmful outcomes.

Loopholes and exemptions in data protection law around profiling must be closed. Not all data protection laws recognise the use of automated processing to derive, infer, predict or evaluate aspects of an individual. Data protection principles need to apply equally to the data, insights, and intelligence that are produced.

In addition to data protection laws, and depending on the context in which automated decision-making is deployed, additional sectoral regulation and strong ethical frameworks should guide the implementation, application and oversight of automated decision-making systems.

When profiling generates insights or when automation is used to make decisions about individuals, users as well as regulators should be able to determine how a decision has been made, and whether the regular use of these systems violates existing laws, particularly those on discrimination, privacy, and data protection.

Public sector uses of automated decision-making carry a special responsibility to be independently auditable and testable.

Cases of positive steps

There are examples of users being told how their profile is generated: Twitter does this to some degree, as do Google's dashboards and advertising platforms that explain 'why you are seeing this ad'. These explanations are often misleading and insufficient, so we must push for more.

A French law[1] gives a right to an explanation for administrative algorithmic decisions made about individuals. Some areas of government, such as national security and defence, are excluded.

New York City may soon gain a task force dedicated to monitoring the fairness of algorithms used by municipal agencies.

[1] Loi pour une République numérique (Digital Republic Act, Loi n° 2016-1321). The law provides that in the case of [author translation] “a decision taken on the basis of an algorithmic treatment”, the rules that define that treatment and its “principal characteristics” must be communicated upon request. Further details were added by decree in March 2017 (R311-3-1-2), elaborating that the administration shall provide information about: the degree and the mode of contribution of the algorithmic processing to the decision-making; the data processed and its source; the treatment parameters and, where appropriate, their weighting, as applied to the situation of the person concerned; and the operations carried out by the treatment.