Data protection: a piece of the puzzle to “do no harm” in the digital age

PI presents its contribution to the "Handbook on Data Protection in Humanitarian Action" (2nd ed.).

Key points

- Data protection is a piece of the puzzle to help humanitarian organisations ensure they uphold their humanitarian principles and values.

- PI's campaign for legal and technological solutions to protect people and their data from exploitation includes safeguarding the dignity and autonomy of those seeking humanitarian assistance.

Image produced by the ICRC and the BPH.

Humanitarian organisations are defined by their commitment to core, apolitical principles: humanity, impartiality, neutrality, independence and the imperative to “do no harm”.

And yet the ways data and technology are used in the world today are often far from apolitical, humane, impartial, neutral and independent. In many instances, actors are using technology to exploit people, failing to protect and empower individuals and communities - in particular those most in need of protection.

We must demand and enforce legal and technological solutions to protect people and their data from exploitation.

Data protection in humanitarian action

As many in the humanitarian sector increasingly recognise, data protection is a piece of the puzzle that helps organisations uphold their humanitarian principles and values by safeguarding the autonomy, dignity and rights of the people they endeavour to assist.

The second edition of the Handbook on Data Protection in Humanitarian Action, jointly published by the Brussels Privacy Hub and the Data Protection Office of the International Committee of the Red Cross (ICRC), paves the way for this process by presenting how personal data can, and must, be protected when new technologies, services and products are deployed in the humanitarian sector.

The handbook highlights what humanitarian organisations thinking about, or already deploying, data processing activities and technologies in their work must consider, assess and enforce so that their humanitarian efforts protect personal data. Doing so is an integral part of protecting an individual’s life, integrity and dignity, which is of fundamental importance for humanitarian organisations.

Building on the first edition, which covered a variety of topics from biometrics to data analytics, mobile messaging apps, cloud services and cash transfer programmes, the second edition contains five new chapters, covering digital identity, connectivity as aid, social media, artificial intelligence and machine learning, and blockchain. These technologies have all attracted much debate and attention in recent years, and it is no surprise that the humanitarian sector, like many others, is considering using them.

Addressing emerging issues for the sector

The threats created by the use of data and technology in the humanitarian sector, as presented in the handbook, mirror many of the issues PI has been exploring: how the use of data and technology, if left unregulated, can threaten our fundamental rights and freedoms.

Digital identity

Digital identity systems raise some key issues when it comes to:

- exclusion, when there are those with one and those without;
- exploitation, as they link together diverse sets of information about an individual, and allow tracking and profiling;
- surveillance, by giving a 360-degree view of the person.

All three of these are made worse by function creep - the spread of an identity system to more and more aspects of people’s lives. The UN’s work through its Legal Identity Taskforce highlights the emphasis being placed on this in the humanitarian sector. How do the design and use of digital identity systems affect people’s ability to prove who they are, and their entitlement to access life-saving assistance - be it food, healthcare or financial assistance - and how do we ensure that a lack of identity never results in a denial of services?

Connectivity as aid

Our communication ecosystem is complex and multi-layered, with a variety of stakeholders operating within it, each with their own roles, responsibilities and interests. As digital communications grow, companies and governments continue to seek new ways of getting access to content and metadata. When faced with powerful actors with the resources and techniques to undertake surveillance, humanitarian organisations will not have control over the whole connectivity chain and therefore cannot guarantee to protect individuals against having their data and metadata misused. How do the arbitrary and unprecedented nature of traditional and new surveillance techniques, as well as the insecurity and vulnerability by default of services and devices, affect the ability to provide safe and secure connectivity to recipients of aid?

Social media

Whilst social media provide unprecedented opportunities to connect with one another, communicate and access information, companies design their platforms and services with the primary aim of generating profit. The content and metadata generated through our online interactions are not only used by companies; they have also facilitated social media monitoring, creating new forms of surveillance and exploitation. Our digital data trails and the content we share are being used to track where we are, what we do and with whom, as well as to predict our views, our beliefs and our behaviour. How does the weaponisation of our online spaces (social media and messaging apps) and our devices put people and communities on the move at risk, as they seek to communicate and find information about where to find protection?

AI and machine learning

AI and its applications are becoming part of everyday life, and yet because of how the technology works, their use remains largely unregulated and opaque, and the alleged benefits remain to be evidenced. Concerns exist around the reliance on mass processing of large amounts of data, as well as the opacity and secrecy of the systems. AI-driven identification, profiling and automated decision-making may also lead to unfair, discriminatory or biased outcomes. People can be misclassified, misidentified or judged negatively, and such errors or biases may disproportionately affect certain groups of people. As decision-making in the humanitarian sector is automated, how are the autonomy and agency of people seeking humanitarian assistance affected, what data is used to make decisions about them, by whom, and how?

Equipping the humanitarian sector with tools

Like many others, the humanitarian sector is sometimes ill-equipped to understand the intricacies of these issues. This makes it hard even to take the first step of deciding whether or not to embrace innovation in its programmes: doing so requires assessing the justifications and balancing them against a clear understanding of the implications for individuals’ lives, integrity and dignity, before taking subsequent steps either to mitigate the risks by adopting a series of legal and technical safeguards, or to avoid them by deciding not to adopt that particular technology.

The need to go through this assessment and safeguarding process is not a trivial one. Consider how data is exploited by industry and governments, and the implications for our democratic society and for our agency and autonomy; set these within the ecosystem in which data and technology are deployed to deliver humanitarian aid, and it becomes clear that more and more people receiving humanitarian assistance risk being exposed to unexpected threats. Unless measures are taken to address these issues and mitigate the risks, it will be increasingly challenging to sustain the impartiality, neutrality and independence of humanitarian action, and to protect people and safeguard their autonomy and dignity.

This is why we welcome the efforts of the ICRC and the other humanitarian organisations involved in this initiative. As with any regulatory framework, the key is enforcement and accountability, so we look forward to seeing the handbook inform and shape the decisions humanitarian organisations make, ensuring that as they embrace innovation to support their laudable efforts, they take the measures necessary to protect not only themselves but also the affected populations they assist, today and in the future.

Why is PI involved in this work?

Why is a privacy organisation working with the humanitarian sector, and why does it matter? We may seem like strange bedfellows, but in today’s ever-growing digital world, people who receive humanitarian assistance are, more and more, being exposed to the unexpected threats outlined above.

And whilst we are all affected by these data-exploitative and surveillance policies and practices, the extent to which we are affected varies greatly, and those already in vulnerable and precarious situations experience them in ways whose impacts are tragic, and sometimes a matter of life or death.

This is why, as we campaign for legal and technological solutions to protect people and their data from exploitation, we will continue to monitor the deployment of technology in the development and humanitarian sectors to ensure that those in need of humanitarian assistance can access it and that their rights and dignity are safeguarded, today and in the future.

Sign up to join our global movement today and fight for what really matters: the freedom to be human.