Profiling and Automated Decision Making: Is Artificial Intelligence Violating Your Right to Privacy?


This piece was originally published on the UNRISD website.

How AI is affecting our human rights

Artificial Intelligence (AI) is part of our daily lives. Its many applications inform almost all sectors of society: from the way we interact on social media, to the way traffic flows are managed in cities; from access to credit and social services, to the functioning of our ever-expanding number of internet-connected devices.

AI is affecting our human rights, in positive and negative ways, and particularly so when it comes to privacy. As the UN High Commissioner for Human Rights noted in August 2018, “big data analytics and artificial intelligence increasingly enable States and business enterprises to obtain fine-grained information about people’s lives, make inferences about their physical and mental characteristics and create detailed personality profiles.”

AI applications need data to operate, and more often than not that data is about us (that is, personal data). AI both supports and relies on data-driven models that maximize the amount of information gathered on individuals in order to identify and track them, infer their identities and predict their behaviours.

This includes our activities in public spaces, whether physical or digital. For example, AI is used to analyse data collected by CCTV cameras to carry out facial recognition, and it works with the location data emitted by our own devices as we go about our daily lives. In the digital information environment, AI is used to track individuals online, including by analysing the data produced by our social media interactions (social media monitoring). Physical and digital spaces are increasingly interconnected: FindFace, a face recognition application, allows users to photograph people in a crowd and compare their pictures to profile pictures on social networks, identifying their online profiles with 70 percent reliability.

AI applications play a growing role in the scoring systems that shape access to credit, employment, housing and social services. AI-driven applications are used to automatically sort, score, categorize, assess and rank people, often without their knowledge or consent, and frequently without any ability to challenge the outcomes or effectiveness of those processes.
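
To make this concrete, the following is a minimal, purely illustrative sketch in Python of how such a scoring system can rank and gate people. Every field name, weight and threshold below is a hypothetical assumption for illustration; it is not the code of any real system.

```python
# Purely illustrative sketch of an automated scoring pipeline.
# All field names, weights and cut-offs are hypothetical.

from dataclasses import dataclass

@dataclass
class Person:
    monthly_income: float      # data the person may have submitted knowingly
    postcode_risk: float       # a proxy variable that can encode discrimination
    social_media_score: float  # inferred from monitoring, often without consent

# Opaque, hand-tuned weights: the people being scored never see these.
WEIGHTS = {"monthly_income": 0.5, "postcode_risk": -0.3, "social_media_score": 0.2}

def score(p: Person) -> float:
    """Collapse a person's attributes into a single ranking number."""
    return (WEIGHTS["monthly_income"] * p.monthly_income / 1000
            + WEIGHTS["postcode_risk"] * p.postcode_risk
            + WEIGHTS["social_media_score"] * p.social_media_score)

people = [Person(2500, 0.8, 0.4), Person(1800, 0.2, 0.9)]

# People are sorted and filtered on the score, typically with no explanation
# given to them and no route to challenge the outcome.
approved = [p for p in sorted(people, key=score, reverse=True) if score(p) > 1.0]
```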

AI produces profiles and decisions based not just on data that we have knowingly submitted, but also on data obtained without our consent or knowledge. And no matter where the data comes from, we often do not even know the purposes for which it is used.

As noted by the European Data Protection Regulators, “advances in technology and the capabilities of big data analytics, artificial intelligence and machine learning have made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.”

Similar concerns were reflected in a 2017 resolution by the UN Human Rights Council, which noted how “profiling may lead to discrimination or decisions that have the potential to affect the enjoyment of human rights, including economic, social and cultural rights.”

When it comes to solutions, ethical standards surrounding the application of AI remain poorly defined, and poorly understood by both industry and governments.

Fundamental safeguards do exist

However, human rights law and modern privacy and personal data protection standards already provide important safeguards. Three in particular are fundamental in relation to AI.

First, this legal framework rests on a set of fundamental principles. One of these is that any intrusion into our privacy, and any processing of our data, requires a legal basis and must be limited to what is necessary and proportionate to a legitimate aim. Applied to AI technologies, this requires in particular that data be collected and processed only for specific purposes, and that the amount of data processed be kept to the minimum required for those purposes.
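
As a rough sketch of what purpose limitation and data minimization can look like in practice, consider the following Python fragment. The purposes and field names are invented for illustration; they do not come from any particular law or system.

```python
# Illustrative sketch of purpose limitation and data minimization:
# process only the fields strictly required for a declared purpose.
# The purposes and field names are invented examples.

REQUIRED_FIELDS = {
    "loan_eligibility_check": {"monthly_income", "existing_debt"},
    "age_verification": {"date_of_birth"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record stripped down to the stated purpose."""
    if purpose not in REQUIRED_FIELDS:
        raise ValueError(f"No declared purpose {purpose!r}: no processing")
    allowed = REQUIRED_FIELDS[purpose]
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "name": "A. Person",
    "date_of_birth": "1990-01-01",
    "monthly_income": 2500,
    "existing_debt": 400,
    "browsing_history": ["..."],  # irrelevant to the purpose, never processed
}

# Only the two fields needed for this purpose survive.
print(minimize(raw, "loan_eligibility_check"))
```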

Second, data protection laws have quite well-developed standards for transparency and accountability. They provide individuals with a right to information, which they should be able to act upon, including in interactions with regulators and courts.

Companies and public authorities are required to inform individuals about the existence of automated decision making, the logic involved and the significance and consequences of this type of processing for the individual.

Further, some modern data protection laws, such as the EU General Data Protection Regulation (GDPR), introduce an overall prohibition (with narrow exceptions) on solely automated decisions when such decisions have legal or other significant effects.

The idea underlying this principle is to ensure human involvement in decisions that affect individuals; this should always be the case when automated decisions affect an individual’s human rights.
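
One hedged sketch of what such a human-in-the-loop rule could look like in code is below. The classes, threshold and review interface are assumptions made for illustration, not a statement of what the law technically requires.

```python
# Illustrative human-in-the-loop guard: the model may decide alone only
# when the stakes are low. Classes, threshold and the review callback are
# hypothetical, invented for this sketch.

from enum import Enum
from typing import Callable

class Effect(Enum):
    TRIVIAL = 0      # e.g. reordering a news feed
    SIGNIFICANT = 1  # e.g. denying credit, housing or benefits

def decide(model_output: float, effect: Effect,
           human_review: Callable[[float, bool], bool]) -> bool:
    """Gate solely automated decisions by the significance of their effect."""
    automated = model_output > 0.5  # hypothetical decision threshold
    if effect is Effect.SIGNIFICANT:
        # A person must be able to weigh context and overrule the model.
        return human_review(model_output, automated)
    return automated

# Example reviewer: accepts the model's suggestion only in clear-cut cases.
def reviewer(output: float, suggestion: bool) -> bool:
    return suggestion if output > 0.7 or output < 0.3 else False

print(decide(0.6, Effect.SIGNIFICANT, reviewer))  # -> False: human overrules
```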

Third, human rights and data protection law encourages, and in the best cases requires, privacy impact assessments as well as privacy by design and by default. In a resolution due to be adopted in December 2018, the UN General Assembly recognizes “the need to apply international human rights law in the design, evaluation and regulation” of AI technologies.

These are just some examples of how existing human rights law should be applied to AI applications to ensure that they do not violate our privacy. As noted by the UN Special Rapporteur on freedom of expression, AI technologies “challenge traditional notions of consent, purpose and use limitation, transparency and accountability”. We need to rise to these challenges by developing the analysis and the applications that ensure human rights are effectively implemented when AI technologies are deployed.