A world without data exploitation

The world is being rebuilt by governments and companies so that they can exploit data. 

We campaign for legal and technological solutions to protect people and their data from exploitation. 

Increasingly, the spaces and environments we inhabit and pass through generate and collect data about human behaviour, perpetually and without asking us.

This data is used by companies and governments to identify and profile individuals, to generate intelligence about us and our societies, to predict and manipulate future behaviour, and to make decisions that affect everyone and our world.

Public debate normally focuses on personal data that people knowingly disclose.

Such disclosure would normally fall within the frameworks of data protection and human rights law, where those frameworks exist and are enforced.

We urgently need to look beyond the data we provide knowingly to companies and government. 

We need to look at data generated by the devices in and around our lives: data about our conduct and behaviour, data about us that countless companies and governments now hold in vast quantities. We need to understand exactly how governments and industry are wresting control of our data from us. They are taking away our rights and our ability to determine how it is processed, by whom, and why.

Companies and governments are relying less on data we provide and instead are looking at data they can observe, derive, and infer.

Data is increasingly observed from behaviour and recorded directly and automatically from devices, without our intervention -- both online, for instance through web analytics and metadata, and in offline spaces, including through sensors such as microphones and cameras with facial recognition. Derived data can be produced from other data, such as calculating how often someone calls their mother as an indicator of creditworthiness. Through more complex methods of analysis, such as machine learning, potentially sensitive data can be inferred from seemingly mundane data, as we have seen in systems that calculate recidivism scores, infer a person's race or nationality, or predict future health risks.
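The creditworthiness example above can be made concrete with a few lines of code. This is a purely illustrative sketch: the call log, the "mother" contact label, and the weekly-frequency feature are all invented here, not drawn from any real lender's system. The point is how little is needed to turn raw metadata into a behavioural "feature" about a person:

```python
from datetime import date

# Hypothetical call-metadata records: (caller, callee, call date).
# Note that no call *content* is involved -- metadata alone suffices.
call_log = [
    ("alice", "mother", date(2023, 1, 2)),
    ("alice", "mother", date(2023, 1, 9)),
    ("alice", "bank",   date(2023, 1, 10)),
    ("alice", "mother", date(2023, 1, 16)),
]

def calls_per_week(log, callee, weeks):
    """Derive a behavioural feature -- how often one contact is
    called -- from nothing but a log of who called whom, and when."""
    count = sum(1 for _caller, c, _day in log if c == callee)
    return count / weeks

# A scoring system could fold this derived figure into a credit model.
feature = calls_per_week(call_log, "mother", weeks=4)
print(feature)  # 0.75
```

Three calls to one contact over four weeks become the single number 0.75, which can then travel between systems and be used in decisions the individual never sees.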

Urgent change is needed.

Without urgent change, data will be used in ways that people cannot now even imagine, to define and manipulate their lives without transparency or accountability.  To prevent this, we believe that the following changes are necessary:

  1. Increased individual control over data, leading to a different way of designing technologies: one that protects people's autonomy.
  2. Increased emphasis on security, which may result in more rights and protections for individuals and constraints on powerful entities.
  3. Restraints on potential abuses arising from the vast accumulation of data.
  4. Limits on how data is used to construct profiles and to make decisions about people, groups, and societies.
  - Individuals need to have full insight into their profiles, including full access to data derived, inferred, and predicted about them.
  - Data should be protected from access by anyone other than the user.
  - As nearly every human interaction now generates some form of data, systems should be designed to limit the invasiveness of data analysis by all parties to a transaction or network.
  - Individuals should be able to know about, understand, question, and challenge consequential decisions that are made about them and their environment. This means that data controllers, too, must have insight into and control over this processing.
  - Manufacturers and vendors must be responsible for the security and privacy design of the products they manufacture and sell, throughout a clearly identified period.
  - People must be able to know what data is being generated by devices, by the networks and platforms they use, and by the infrastructure in which devices become embedded. People should be able to know and ultimately determine the manner of processing.
  - Individuals should have control over the data generated about their activities, conduct, devices, and interactions, and be able to determine who is gaining this intelligence and how it is to be used.
  - Individuals must be able to selectively disclose their identity, generate new identities, use pseudonyms, and/or remain anonymous.
  - There should be no barriers to timely security fixes -- including updates, patches, and workarounds -- particularly considering the implications for users of varying socio-economic status and citizenship. Security updates should be distinguishable from feature updates.