We should know all our data and profiles

Individuals need to have full insight into their profiles. This includes full access to derived, inferred and predicted data about them.

Data observed, derived or predicted from our behaviour is increasingly used to rank, score and evaluate people automatically, and to make consequential decisions about them through ever more advanced processing techniques.

What is the problem

Profiling makes it possible to infer or predict highly sensitive details from seemingly uninteresting data. As a result, it is possible to gain insight into someone’s presumed interests, identities, attributes or qualities without their knowledge or participation.
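To make this concrete, below is a minimal, purely illustrative sketch of how such inference can work. The behavioural signals, training data, target attribute and model choice are all invented assumptions for illustration; they do not describe any real system.

```python
# Hypothetical sketch: inferring a sensitive attribute from innocuous data.
# All feature names, values and the model choice are invented assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Uninteresting" behavioural signals per person:
# [night-time app usage (hrs), pharmacy visits/month, fitness-app opens/week]
X = np.array([
    [0.5, 0.0, 6.0],
    [3.0, 2.0, 0.0],
    [2.5, 3.0, 1.0],
    [0.2, 0.0, 5.0],
    [4.0, 1.0, 0.0],
    [0.8, 0.0, 4.0],
])
# Sensitive attribute (e.g. a health condition) known for a training
# population, but never disclosed by the people later being scored.
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# A new individual is scored without their knowledge or participation.
new_person = np.array([[3.5, 2.0, 0.5]])
print(model.predict_proba(new_person))  # inferred probability of the attribute
```

Even this toy model produces a confident-looking prediction about something the person never revealed, which is precisely why access to derived and inferred data matters.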

Such detailed and comprehensive profiles may or may not be accurate or fair, yet they are increasingly used to make or inform consequential decisions, from finance to policing to the news users are exposed to or the advertisements they see. These decisions can be taken with varying degrees of human intervention and automation.

In increasingly connected spaces, our presumed interests and identities also shape the world around us. Real-time personalisation tailors information to an individual’s presumed interests. Such automated decisions can even be based on someone’s predicted vulnerability to persuasion or their inferred purchasing power.
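The sketch below illustrates the basic mechanism schematically: content is ranked by how well it matches a person’s presumed interests. The profile keys, weights and items are invented for illustration; real personalisation systems are vastly more complex.

```python
# Schematic sketch of real-time personalisation. All profile entries,
# weights and content items here are hypothetical assumptions.
inferred_profile = {"gambling": 0.9, "loans": 0.7, "sport": 0.2}

items = [
    {"title": "Payday loan offer",   "topics": {"loans": 1.0}},
    {"title": "Casino promotion",    "topics": {"gambling": 1.0}},
    {"title": "Local football news", "topics": {"sport": 1.0}},
]

def score(item: dict) -> float:
    """Weight each item by the person's inferred interest in its topics."""
    return sum(inferred_profile.get(t, 0.0) * w for t, w in item["topics"].items())

# The feed the person sees is ordered by their presumed susceptibilities.
for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):.2f}  {item['title']}")
```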

Automated decisions about individuals, or about the environment they are exposed to, offer unprecedented capabilities to nudge, modify or manipulate behaviour. They also run the risk of creating novel forms of discrimination or unfairness. Since these systems are often highly complex, proprietary and opaque, it can be difficult for people to know where they stand or how to seek redress.

Why this matters

In the future people will be scored in all aspects of their lives, societies will be managed invisibly, and human behaviour will be under the control of the few and the powerful. If data from every walk of life can feed into consequential decisions, this can produce chilling effects. We would not want a world in which people have to pre-emptively self-censor their online and offline behaviour because the data it generates might be used against them.

When our profiles are used to make consequential decisions about us and the environment we are exposed to, the impact on individuals can be significant – from credit scoring to predictive policing, from hiring decisions to nudging and shaping human action.

Ultimately, an environment that knows your preferences and adapts itself to these presumed interests raises important questions about autonomy and the ethics of such manipulation. Personalisation not just of information but of our perception of the world around us will become increasingly important as we move towards connected spaces such as smart cities, and into augmented and virtual reality.

What we would like to see

We would like to see a world in which individuals are not subjected to arbitrary, discriminatory or otherwise unfair decisions that they are unable to challenge or correct, and whose grounds and process they are unable to question.

We would also like to see a world in which there are no secret profiles of people, and in which people do not have to fear that their profiles will lead to decisions that limit their rights, freedoms and opportunities.

Individuals should have full access to their data profiles. Subject access requests should disclose all personal data held about a person, including the categories and profiles built from it and any derived and inferred data.
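As a sketch of what this could mean in practice, a complete subject access response might cover not only the data a person supplied, but also what was observed, derived and inferred. All field names and values below are hypothetical assumptions for illustration only.

```python
# Hypothetical structure of a complete subject access response; every
# field name and value here is an invented assumption, not a real format.
import json

subject_access_response = {
    "provided_data": {              # data the individual knowingly supplied
        "name": "Jane Example",
        "email": "jane@example.org",
    },
    "observed_data": {              # data collected by watching behaviour
        "pages_visited_last_30d": 412,
        "average_session_minutes": 17.4,
    },
    "derived_and_inferred_data": {  # what this principle asks to be disclosed
        "inferred_income_band": "low",
        "predicted_churn_risk": 0.83,
        "assigned_segment": "price-sensitive night owl",
    },
    "profiling_categories": ["credit", "advertising", "content ranking"],
}

print(json.dumps(subject_access_response, indent=2))
```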

Protections should also be established around group profiling, which often falls outside the scope of data protection safeguards on profiling.

What this will mean

Data protection frameworks around the world need to address the risks arising from profiling and automated decision-making, notably, but not limited to, the risks to privacy.

People will know when automated decision-making is taking place and the conditions under which it takes place, and will have the right to redress.

Essential reform actions

Loopholes and exemptions in data protection law around profiling must be closed. Not all data protection laws recognise the use of automated processing to derive, infer, predict or evaluate aspects of an individual. Data protection principles need to apply equally to the data and to the insights and intelligence produced from it.

In addition to data protection laws, and depending on the context in which automated decision-making is deployed, additional sectoral regulation and strong ethical frameworks should guide the implementation, application and oversight of automated decision-making systems.

When profiling generates insights, or when automation is used to make decisions about individuals, users as well as regulators should be able to determine how a decision was made, and whether the regular use of these systems violates existing laws, particularly those on discrimination, privacy and data protection.