We may challenge consequential decisions

Individuals should be able to know about, understand, question and challenge consequential decisions that are made about them and their environment. This in turn means that controllers themselves must have insight into, and control over, this processing.

What we would like to see

We would like to see a world in which individuals are not subjected to arbitrary, discriminatory or otherwise unfair decisions whose grounds and process they are unable to question, challenge or correct.

We would also like to see a world in which there are no secret profiles of people, and in which people do not have to fear that their profiles will lead to decisions that limit their rights, freedoms and opportunities.

Individuals will be able to know when their experiences are being shaped by their data profile, whether in targeted advertising, news curation or access to services, and will be able both to object and to shape their profiles.

Protections should be developed for group profiling, which often falls outside the scope of data protection safeguards on profiling.

What this will mean

Data protection frameworks around the world need to address the risks arising from profiling and automated decision-making, notably, but not limited to, the risks to privacy.

People will know when automated decision-making is taking place and the conditions under which it takes place, and will have the right to redress.

Essential reform actions

Frameworks that already address profiling and automated decision-making need to ensure that their provisions cover all instances of automated decision-making that are critical to human rights, and account for the fact that different degrees of automation and human involvement can lead to similarly harmful outcomes.

Loopholes and exemptions around profiling in data protection law must be closed. Not all data protection laws recognise the use of automated processing to derive, infer, predict or evaluate aspects of an individual. Data protection principles need to apply equally to the data collected and to the insights and intelligence produced from it.

In addition to data protection laws, and depending on the context in which automated decision-making is deployed, additional sectoral regulation and strong ethical frameworks should guide the implementation, application and oversight of automated decision-making systems.

When profiling generates insights or when automation is used to make decisions about individuals, users as well as regulators should be able to determine how a decision has been made, and whether the regular use of these systems violates existing laws, particularly regarding discrimination, privacy, and data protection. 

Public sector uses of automated decision-making carry a special responsibility: such systems must be independently auditable and testable.