PI's response to DCMS' consultation on data protection reform in the UK

In advance of the anticipated publication of the Data Protection Reform Bill in the UK, PI publishes here our response to the consultation on data protection reform, submitted in November 2021. We focus on the real-world impact of the proposed removal of extensive protections, drawing on examples of PI’s research, investigations and advocacy from around the world.

Key advocacy points
  • PI has a long history of engaging with and supporting data protection policy and legislation around the world and upholding the rights contained therein.
  • The rights to privacy and data protection are linked to some of the most important political and heart-searching questions of our time. How can exploitation of the vulnerable be prevented? How does the UK treat its immigrants, who bring key skills and prosperity to the country? What safeguards are there against potential corruption of the democratic process by new technologies and their use by political parties and third parties?
  • These are the questions that drive PI’s work every day. We are therefore disappointed by the framing of Data: A New Direction.
  • At the core of the proposal is the suggestion that data protection is a burden on companies. It appears to be driven by the commercial interests of a few companies who may benefit from weaker rights protection, the result being the proposed loss of many important protections for people.
  • Ultimately, removing protections in such a way means removing incentives for companies to respect privacy. This creates a race to the bottom that will not foster the innovation the government seeks.
  • The proposal is a backward step. For example, innovation (e.g. in AI) relies on people sharing data; in order for people to share their personal information, they need to feel confident about doing so.
  • This proposal does not foster trust. A better proposal would be to enforce what we have and improve protections rather than remove them.

Now is the time to strengthen, not weaken, data protection to keep us all safe. Below we outline some edited areas of our consultation response that highlight the impact of the proposed loss or weakening of many important protections:

The proposal to broaden consent and further processing for research purposes:

PI urges caution with regard to provisions that could undermine the strict conditions around obtaining consent. The GDPR placed stronger conditions on obtaining consent, and in our work we have seen how various actors constantly seek to undermine them. Introducing concepts such as "general" or "broad" consent risks people's (sensitive) personal data being used for purposes that go far beyond what they might have originally foreseen.

PI has investigated this issue extensively; we are shocked at how intrusive and harmful data collection has become under the cover of “consent”, including the collection of health data that could, under this proposal, be interpreted as “scientific research”. For example, see Your Mental Health For Sale and An Unhealthy Diet of Targeted Ads.
PI’s investigation, No Body’s Business But Mine: How Menstruation Apps Are Sharing Your Data, found that several apps were sharing sensitive health data with third parties, which was not explicit in their privacy policies.

Purpose limitation is one of the core principles of data protection law. Application of the principle ought to consider the factors listed in Article 6(4). The question of purpose limitation is intrinsically linked to what people can expect to be done with their personal data.

For example, in the complaints that PI filed against Clearview AI, together with three other organisations across the EU, PI demonstrated that the re-use of even publicly available personal data, such as facial images posted on social media or websites, for processing in a biometric database clearly falls outside of such expectations.

UPDATE: Since we submitted these complaints, several data protection authorities have agreed with us: among other enforcement actions, the ICO in the UK issued a £7.5 million fine and the Italian data protection authority issued a €20 million fine.

The proposal to remove the "balancing exercise" for applying legitimate interest:

Legitimate interests of the controller or a third party may provide a legal basis for processing, provided that the interests or the fundamental rights and freedoms of the data subject are not overriding. Relying on this legal basis fundamentally requires controllers to carry out a balancing exercise between the specific interests they seek to protect and the impact of the processing on data subjects' rights and freedoms.

The balancing exercise therefore lies at the heart of using legitimate interests as a legal basis for processing personal data. Stripping the balancing exercise from the legitimate interests basis will in most cases result in processing operations that have a disproportionate or onerous impact on data subjects' rights.

In its submission before the ICO, PI illustrated how the legitimate interests legal basis ensures fair processing of individuals' personal data, as well as how facial recognition companies often abuse it by failing to consider the implications of their processing operations for data subjects' rights or by engaging in disproportionate data exploitation practices.

AI and Machine Learning: The proposal to remove Article 22 of UK GDPR:

The importance of the protection afforded by Article 22 of the UK GDPR cannot be overstated, and the suggestion of its removal constitutes a grave threat to individuals. Article 22 is designed to guard against the risks of automated decision-making. These risks are identified by the ICO as follows:

- Profiling is often invisible to individuals.

- People might not expect their personal information to be used in this way.

- People might not understand how the process works or how it can affect them.

- The decisions taken may lead to significant adverse effects for some people.

In relation to the risk of errors, the government must consider that Article 22 exists to guard against mistakes that could prove time-consuming and costly for government entities. Here we present an example from PI’s report, “Benefitting whom? An overview of companies profiting from ‘digital welfare’”:

"We filed a series of FOI requests to four London councils (Ealing, Islington, Camden, Croydon) concerning the London Counter Fraud Hub, a system designed by the Chartered Institute of Public finance & Accountancy (CIPFA) in order to detect fraud in applications for the council tax single person discount. The system was meant to process large amounts of data to identify fraudsters. The system was a cause of great concerns when it was first revealed in the media. With more and more dicussions on algorithmic bias and the revelations that the system had a 20% failure rate, many feared they would see their benefits cut unfairly.”

The proposal to remove the requirement for organisations to undertake data protection impact assessments (DPIAs):

DPIAs are particularly important where there is a risk to the rights and freedoms of individuals, including where the processing involves sensitive personal data, automated decision-making, profiling, or monitoring of public spaces. An impact assessment requires, as a minimum:
• an assessment of the necessity and proportionality of the processing
• an assessment of the risks to individuals
• the measures envisaged to address those risks.


It is contradictory - and self-defeating - to promote transparency requirements in paragraph 290 for public bodies and government contractors that use algorithms in decision-making, while at the same time removing the requirement for a DPIA as is proposed.

Further, an array of digital technologies is being deployed in the context of immigration and border enforcement and administration, gathering and processing increasing amounts of data. This includes aerial and space surveillance practices and GPS location tracking in immigration bail.

The importance of a DPIA was more recently brought to the fore as a result of the legal challenge against the automated visa streaming tool used by the Home Office. In response to the legal challenge, the government committed to redesigning the algorithm behind the visa streaming tool, and similarly committed to undertaking a DPIA for the interim process it intended to use as a replacement. In other words, the Home Office understood the importance of a DPIA in ensuring that any future system respected individual rights and freedoms.


Lastly, DPIAs enable civil society organisations to scrutinise, inform and advise on proposed automated systems before, during and after their implementation. In preparing its response to the 2021 National Fraud Initiative consultation, PI drew extensively upon the published 2018 DPIA on the National Fraud Initiative. This enabled PI to understand, and in turn explain to others, how the National Fraud Initiative functions, as well as to produce a robust response to the consultation.


To highlight a recent global example of the importance and weight of DPIAs: in Kenya, the rollout of the National Integrated Identity Management System (NIIMS) and the associated processing of data were halted by the High Court because the relevant government department had not completed a DPIA.

The proposal to introduce a fee for Subject Access Requests:

This would be very problematic, particularly for gig-economy workers and other lower-income individuals. It is only through subject access requests that they are able to obtain the information that companies like Deliveroo, Uber and Amazon collect about them. Data collection by delivery companies is very opaque: workers are not told how much data is collected about them or how this data is later used. The only way for them to protect themselves is through data subject access requests. However, given that gig-economy workers tend to be much lower paid, fees for data subject access requests would have a significant negative impact on their ability to protect their rights. This is very concerning in light of the inherent power imbalance that exists between delivery platforms/employers and their workers. See PI's case study, The Gig Economy and Exploitation.

The proposal to remove consent for analytics cookies:

We disagree with the framing of the proposal that analytics cookies are harmless and consent notifications are bothersome for users.


“Analytics cookies and similar technologies” are currently a gateway to personal data collection and processing for micro-targeted advertising, and much more. PI’s research into data collection from mental health websites revealed that answers to depression tests were shared with third parties as a result of these technologies being blindly deployed, without a real assessment of how much data they can collect and for what purpose. Our investigation into diet ads online revealed similar issues.

Given the complexity of online advertising and its heavy reliance on tracking and other invasive data collection processes, removing the need for consent would open a door to indiscriminate surveillance practices by private companies. Our devices and the web are already full of tracking and spying technologies and consent is currently the only protection that users have at their disposal to somewhat limit how they are being tracked and monitored.

The question posed here should not be about removing consent requirements, but rather what can be done to rein in such gratuitous data collection in the first place.

The proposal to remove "soft opt-in" to non-commercial organisations, including political entities:

We have concerns about how the “soft opt-in” regime is failing in the context of commercial organisations and would not recommend that it be extended, particularly to political parties (see next section).

“Soft opt-in” is another term for manipulating users into agreeing to something they do not see or understand. This leads to anger and frustration when they are later targeted on the basis of invisibly collected data. Rather than allowing soft opt-in, we should work towards models of information and consent that enable organisations to clearly explain what they seek to do with people’s data - so that when the purpose is clear, valuable and not harmful, people are able to consent in full knowledge of the consequences and in support of the processing aims.

To illustrate harm that comes from people unknowingly giving consent for their personal data to be shared, consider the example of Bounty UK Limited.

In April 2019, Bounty was fined £400,000 by the UK’s Information Commissioner’s Office for illegally sharing the personal information of mums and babies as part of its services as a “data broker” between 1 June 2017 and 30 April 2018.

Bounty collected personal data from a variety of channels, both online and offline: its website, mobile app, Bounty pack claim cards and directly from new mothers at hospital bedsides.

The ICO’s decision named only the four largest recipients of the data collected and shared by Bounty, out of 37. One of these companies was Sky - Bounty provided Sky with over 30 million records.

In 2021, PI wrote to Sky to ask what actions they had taken to locate the data received from Bounty and whether they deleted it, if they had attempted to notify any affected people, or if they had changed their internal policy or practice with regards to receiving third-party data.

Sky refused to answer PI’s questions, saying “due to both passage of time and the confidential nature of the information being requested, we are not able to respond to your questions”.

It remains unknown whether and how the data that Bounty collected and shared continues to be used to profile and target those 14 million mothers and their babies today.

PI has investigated the use of personal data in political campaigning since the run-up to the 2017 Kenyan elections and the involvement of a then little-known company called Cambridge Analytica. We have repeatedly raised concerns about the use of personal data in political campaigning: the lack of transparency, and the impact on privacy, of gratuitous data collection, profiling and targeting of messages and adverts.

We must also address the fact that political parties use consultants, third parties and "representatives" for campaigns and communications. It is extremely unclear how these actors use personal data, and this requires strong data protection and enforcement. PI is calling for urgent reform of the use of personal data in political campaigning, and for stronger protections and enforcement, not less.

The proposal to extend circumstances where public and private bodies can lawfully process health data:

While it is undisputed that there are circumstances in which health data may be lawfully and legitimately processed by public and private bodies during public health emergencies, it is essential that there is full transparency on:

(i) the nature of the relationship between those actors, and

(ii) the data processing activities pertaining to each of the actors involved in the handling of health data. Recent history in the UK shows that this is rarely the case.

In recent years, PI has investigated contracts between the NHS and private actors Palantir and Amazon. In the case of Palantir, and as PI reported, the limited documents disclosed by the government in relation to its contract with Palantir are unclear on the conditions limiting Palantir’s access to data after the partnership ends. According to those disclosed documents, Palantir is permitted to undertake any processing activities it deems useful, making function creep a real concern.

Another recent example is the contract between the National Health Service and Amazon, the full disclosure of which PI pursued by way of a complaint to the ICO which was partially granted. The fact that civil society organisations are essentially left with no other option but to raise a complaint with the ICO in order to access public-private contracts not only reveals poor transparency standards, but is also a reflection of the level of resources required to effectively scrutinise public-private partnerships.


Against this background, while any clarification from the government is welcome, it should not merely serve government or commercial interests: it should be used to inform civil society.

The proposal that data subjects attempt to resolve complaints directly with the data controller before lodging a complaint with the ICO:

At the moment, no such requirement is placed upon data subjects. Introducing it would have a negative and disproportionate impact on data subjects’ right to seek a remedy for infringements of their data protection rights.

Engaging with companies can be daunting and time consuming. A data subject may not always know who the controller is as certain ecosystems are shrouded in opacity. PI has repeatedly called upon regulators in the EU and globally to investigate and take enforcement action against adtech and data brokers because of this.

As PI research has illustrated, it can be extremely difficult for a data subject to obtain answers from data controllers, whether because companies do not respond to requests or because they seek to evade their GDPR obligations.
In many of our investigations we have received no response at all following our submission of DSARs.