Why new profiling protections are needed

In the future, people may be scored in all aspects of their lives, societies managed invisibly, and human behaviour shaped by the few and the powerful. When data from every walk of life can feed into consequential decisions, the result is a chilling effect. We would not want a world where people must pre-emptively self-censor their online and offline behaviour because the data it generates might be used against them.

When our profiles are used to make consequential decisions about us and our environment, the effects on individuals can be profound – from credit scoring to predictive policing, from hiring decisions to the nudging and shaping of human action.

Decisions can be discriminatory, unfair, and/or inaccurate. On the one hand, inaccurate or systematically biased data can feed into profiles, which may lead to biased or discriminatory outcomes. At the same time, the process of profiling itself may generate data that is inaccurate: profiling creates a kind of knowledge that is inherently probabilistic. Individuals can be misclassified, misidentified, or misjudged, and such errors may disproportionately affect certain groups of people.

Human intervention in consequential decisions is often proposed as a possible response. Consequential decisions are decisions that produce legal or similarly significant effects. Automated decisions are decisions made without any form of meaningful human intervention. Human intervention is only meaningful if the person intervening can critically assess how a system has reached a recommendation and is authorised to decide against it. For instance, if an individual is assigned a risk score that shapes a decision, but the person making the decision cannot critically assess the score, the decision is de facto automated.

Ultimately, an environment that knows your preferences and adapts itself to these presumed interests raises important questions about autonomy and the ethics of such manipulation. Personalisation of not just information but also our perception of the world around us will become increasingly significant as we move towards connected spaces, such as smart cities, and towards augmented reality (AR) and virtual reality (VR).

Relevant case studies
-    https://www.privacyinternational.org/node/757 Financial innovations mean that the way people are credit-checked and scored in the future will be more invasive and less accountable, with the goal and/or consequence of shaping human behaviour.
-    https://www.privacyinternational.org/node/789 Super-apps generate more data on more human activity – today for advertising, and tomorrow for other forms of scoring.
-    http://www.privacyinternational.org/node/745 Predictive policing means existing data will inform the future of policing.
-    http://www.privacyinternational.org/node/737 The poor, and groups and classes already facing discrimination, will be further classified and discriminated against in the future.