The world is being rebuilt by companies and governments so that they can exploit data. Systems are now designed for data exploitation. They remove the capabilities we once had to exert control over our devices. They influence and even determine how data emerges from us and our lives. This reduction in autonomy and control is occurring even as security risks grow and we become ever more reliant on industry and governments to live our lives.
The spaces and environments we inhabit and pass through increasingly generate and collect data about human behaviour, perpetually and without asking us. Companies and governments use this data to identify and profile individuals, to generate intelligence about us and our societies, to predict and manipulate future behaviour, and to make decisions about people and the world around us.
Public debate normally focusses on personal data that people knowingly disclose. This activity would normally fall within the frameworks of data protection and human rights law, where they exist and are enforced.
We urgently need to look beyond the data we knowingly provide to companies and governments, to the data generated by devices in and around our lives. This is data about our conduct and behaviour, and yet it is data about us that countless companies and governments now hold in vast quantities. We need to understand exactly how governments and industry are wresting control over our data from us. They are taking away our rights and abilities to determine how it is processed, by whom, and why.
Companies and governments are relying less on data we provide and more on data they can observe, derive, and infer. Data is increasingly observed from behaviour and recorded directly and automatically from devices, without our intervention, both online -- for instance through web analytics and metadata -- and in offline spaces -- including through sensors such as microphones and cameras with facial recognition. Derived data can be produced from other data, such as calculating how often someone calls their mother as an indicator of credit-worthiness. Through more complex methods of analysis such as machine learning, potentially sensitive data can be inferred from seemingly mundane data, as we have seen in systems that calculate recidivism scores, infer a person's race or nationality, or predict future health risks.
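To make the distinction between observed, derived, and inferred data concrete, here is a minimal sketch in Python. The call log, names, and scoring rule are all hypothetical, invented for illustration; no real lender's model works on a single threshold like this. The point is only to show how a derived figure (call frequency) is computed from observed metadata, and how an opaque inference can then be stacked on top of it.

```python
from collections import Counter
from datetime import datetime

# Hypothetical observed data: call metadata recorded automatically by the
# network. No one "provided" these records; they were generated by behaviour.
call_log = [
    ("alice", "mother", datetime(2023, 1, 2)),
    ("alice", "mother", datetime(2023, 1, 9)),
    ("alice", "bank",   datetime(2023, 1, 10)),
    ("alice", "mother", datetime(2023, 1, 16)),
]

def calls_per_contact(log, caller):
    """Derived data: per-contact call counts computed from observed metadata."""
    return Counter(callee for c, callee, _ in log if c == caller)

counts = calls_per_contact(call_log, "alice")

# Inferred data: a crude, invented scoring rule that treats frequent calls
# to "mother" as a proxy for credit-worthiness -- the kind of opaque
# inference described above, not an actual credit model.
credit_signal = "favourable" if counts.get("mother", 0) >= 3 else "unfavourable"

print(counts)         # Counter({'mother': 3, 'bank': 1})
print(credit_signal)  # favourable
```

Note that the "credit signal" is several steps removed from anything the individual disclosed, which is precisely why such inferences tend to escape frameworks built around knowingly provided data.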
Without urgent change, data will be used in ways that people cannot now even imagine, to define and manipulate their lives without transparency or accountability. To prevent this, we believe that the following changes are necessary:
- Increases in individuals’ control over their data, which would lead to a different way of designing technologies, one that protects people’s autonomy.
- Increased emphasis on security, which may result in more rights and protections for individuals and constraints on powerful entities.
- Restraints on potential abuses arising from the vast accumulation of data.
- Limits on how data is used to construct profiles and to make decisions about people, groups, and societies.