Building the Global Privacy Movement

Development and humanitarian actors have a mandate to provide assistance and protect people in some of the world's most challenging political, social, economic, and technological environments. Over the past decade, their relationships with beneficiaries and affected populations, and the contexts in which they engage, have been changing: there is pressure to be efficient and demands to be accountable, a need to gain and sustain access and proximity to the people they serve, and urgency and immediacy in the response required, amongst other factors.

This means that they have had to re-think how they continue to deliver their mandates effectively, as well as how they design and implement their modes of operation. These new modes of operation, and the new relationships they entail, are increasingly mediated, enabled, enhanced, and limited by technologies. This all results in a significantly different set of risks, which many humanitarian actors are currently not prepared for and which must be addressed urgently.

There is no question that advancements in technology, communications and data-intensive systems have significantly changed the way development programmes are delivered and humanitarian assistance is provided, so that more people can benefit, more rapidly and more effectively. We are seeing the deployment of mass biometric systems to register beneficiaries and manage their access to aid, and new technologies such as automated decision-making and blockchain being used to deliver aid programmes, all within a dominant discourse favouring the collection of ever more data to better assist and respond to the needs of beneficiaries.

In this complex interplay of benefits and challenges, it is necessary and urgent to understand how technology is modifying the protection of those assisted, and ultimately how this impacts the ability of development and humanitarian organisations to undertake their mandate in an ever-challenging environment whilst abiding by the principle of “do no harm”.

While there is no internationally accepted definition of cybersecurity, the dominant discourse around the world, promoted by policy-makers and government agencies, focuses on international crime, the prevention of terrorism and the quest for ever-increasing surveillance of our communications and data.

Despite the warnings evident in continuing global data breaches from poorly secured databases in companies’ and governments’ networks, leading actors in this field, both private and public, have not prioritised addressing the root causes of insecure systems. Governments have continued to adopt policies and pass laws that undermine cybersecurity as a whole and thereby place human rights at risk.

Data protection law is going through another revolution. Established in the 1960s and 1970s in response to the increased use of computing and databases, re-enlivened in the 1990s as a response to the trade of personal information and new market opportunities, it is now becoming much more complex.

New challenges are also emerging in the form of new technologies and business models: services and systems increasingly rely on analytics, 'Big Data', data sharing, tracking, profiling, and artificial intelligence. The spaces and environments we inhabit and pass through generate and collect data from human behaviour. The devices we wear and carry, the appliances we install in our homes, our channels of communication, and the sensors in our transport and our streets all generate more and more data.

Data protection frameworks have their limits, and new regulatory regimes may need to be developed to address emerging data-intensive systems. Nevertheless, these frameworks provide an important and fundamental starting point for ensuring that strong regulatory and legal safeguards are implemented, and that the needed governance frameworks exist nationally and globally, before we find ourselves the subjects of data exploitation.

Financial institutions are collecting and analysing a growing amount of data about us in order to judge us and make decisions on things like creditworthiness. Increasingly, financial services such as insurers, lenders, banks, and financial mobile app start-ups are collecting and exploiting an ever broader range of data to make decisions about people. The possibilities are wide-ranging: tracking cars to set insurance charges; reading the contents of your text messages to determine whether you are suitable for a loan; seeing who your friends are to determine your interest rate. The financial sector is changing how it uses data, and this must be of particular concern when it affects the poorest and most excluded in society.
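
To make this concrete, the sketch below is a purely hypothetical illustration, written for this text, of how such "alternative data" might feed a lending decision. The feature names, weights, and threshold are invented for the example and do not describe any real provider's model; the point is simply how behavioural data of this kind can be folded into an automated score.

```python
# Hypothetical illustration only: a toy "alternative data" scoring heuristic
# of the kind described above. All features and weights are invented; no real
# lender's model is shown here.

from dataclasses import dataclass


@dataclass
class ApplicantData:
    km_driven_per_week: float      # from a telematics device in the applicant's car
    sms_keywords_flagged: int      # keywords scraped from text messages
    contacts_with_defaults: int    # "friends" known to have defaulted on loans
    mobile_topups_per_month: int   # frequency of airtime purchases


def score(applicant: ApplicantData) -> float:
    """Return a score between 0 and 1; higher means 'more creditworthy'."""
    s = 0.5
    s += 0.1 if applicant.km_driven_per_week < 200 else -0.1
    s -= 0.05 * applicant.sms_keywords_flagged
    s -= 0.10 * applicant.contacts_with_defaults
    s += 0.02 * applicant.mobile_topups_per_month
    return max(0.0, min(1.0, s))


if __name__ == "__main__":
    applicant = ApplicantData(
        km_driven_per_week=350,
        sms_keywords_flagged=2,
        contacts_with_defaults=1,
        mobile_topups_per_month=4,
    )
    # Prints 0.28 for this invented applicant: below a typical cut-off,
    # so the loan would likely be refused on the basis of this data.
    print(f"score: {score(applicant):.2f}")
```

Even in this toy form, the sketch shows why such systems raise concern: the applicant is judged on who they know, what they write, and where they drive, rather than on information they have knowingly provided for the purpose.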