Protecting the digital beneficiary


This piece originally appeared here.

We are much more than our physical selves. We are also digital. Every moment we generate more data. Although sometimes this data is under our control, increasingly it is not. This uncontrolled data—this metadata—is often generated as a result of our interactions, movements, sentiments, and even our inaction. Despite being beyond our control, our metadata is still accessible to many. Hardly a day goes by without a news story or global event involving data: a breach of a company that processes metadata (e.g., Equifax), errors in sharing too much metadata (e.g., Facebook and mobile phone companies) or too little (e.g., the Commonwealth Bank fine), influencing elections (e.g., converting metadata into political intelligence), and waging war (e.g., drone strikes). But wait, let's step back a bit—before we try to solve the world's problems, let's try to address the problem that arises for the humanitarian sector and the protection of beneficiaries.

Protecting the digital beneficiary—constituted of data beyond the beneficiary's control—is even trickier for the humanitarian sector than protecting the physical person. While the sector's organizations and institutions have become experts on the latter, there is so much to learn about protecting the digital person. No institution we can identify is doing this well, and few sectors must do so with such urgency. Despite much excitement in the sector about digitization, we aren't yet seeing the same zeal for protection.

The Digital Person

Yes, a Digital Person sounds archaic and sci-fi at the same time. To see its more banal form, answer this question: when you submit a CV, how is anything you claim actually verified? Perhaps we take it on trust that you aren't lying. Or we seek additional data to verify: we contact the university to check if you graduated from there; we check past employers to see if you worked there and if you were at all problematic. If you're dealing with diligent employers, they may actually be making phone calls to check up on the claims you make. This real-world verification takes time and effort: contacting people and recording what they say.

Version 0.1: Data and metadata speak with the individual

But more likely, and increasingly so in the future, employers will check digital registries published by universities (possibly on blockchain?) and check social media accounts (to 'verify' more things than I mentioned above, I am sure). This type of information is published and accessible, and doesn't require as much effort. And what's most interesting, you—the applicant—may not even know the employer is doing any of this; it is beyond your control. Friends and contacts on social media, how you conduct yourself on social media, financial and credit records: all this data is potentially accessible without your control, or even your say. Most likely, the data is being processed by employers without you knowing it's happening or what they're doing with it.
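To see how frictionless such a check becomes, here is a minimal sketch in Python. Everything in it is invented for illustration: the registry contents, the names, and the hashed-credential format are my assumptions, not any university's actual scheme. The point is only that a lookup against a published registry replaces a phone call, and happens without the applicant's knowledge.

    import hashlib

    # Hypothetical registry a university might publish: hashes of the
    # credentials it has issued. Anyone can consult it; no one phones anyone.
    published_registry = {
        hashlib.sha256(b"Jane Doe|BSc Computer Science|2016").hexdigest(),
    }

    def claim_is_registered(name: str, degree: str, year: str) -> bool:
        """Return True if the claimed credential appears in the registry."""
        claim = f"{name}|{degree}|{year}".encode()
        return hashlib.sha256(claim).hexdigest() in published_registry

    # The employer verifies the CV claim silently; the applicant never knows.
    print(claim_is_registered("Jane Doe", "BSc Computer Science", "2016"))  # True
    print(claim_is_registered("Jane Doe", "MSc Data Science", "2018"))      # False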

Now consider the individual in the environments within which you work, and the types of verification going on across the organizations in your sphere. Much of that verification is probably currently 'physical' and analogue: asking people, hearing stories, and just treating people like humans. In the future—if not already—we will also be using digital verification, possibly as a replacement. Why ask someone for their story and experiences when the information can be read instead from data stored in databases? This is already happening to some extent. Look at how we check beneficiaries' identities: we don't look at them; we look at their cards and, with increasing frequency, their biometrics.

Everything I've said above is happening now, whether to the job candidate or to the person seeking protection. Let's call that The Digital Person Version 0.1. We've already started treating people as though they are digital, and we've done practically nothing to apply any meaningful digital protections. The data is not kept secure, the data is not really verified, and the data speaks louder than the individual. While this may sound far-fetched to some, key players in the humanitarian sector are already speaking of a 'data-driven' approach to protection.

Version 1.0: Data and metadata speak instead of the individual

In the future, the digital person will not be relied upon to 'verify' the stories of the individual—they will be relied upon to tell the stories. And the Digital Person 1.0 (please bear with the ridiculous numbering system; you can already see where this is going) is one that the person has no control over disclosing.

Rather than see ourselves as 1s and 0s personified trying to deal with physical things, imagine beneficiaries as branded and only allowed to speak when spoken for. That's the Digital Person 1.0: 'metadata' speaking for the individual. That is, rather than asking any questions, why not just search the metadata—the data about the transactions they undertake? Why ask about family relations when you can check their most recent and most frequent calls? Why ask about their financial status and needs when you can ask your partner banks and check the balances and expenditures under the Cash Transfer Programmes? Why ask about the route they took to reach you when you can ask the domestic telephone company for all the data about the movement of the phone they are carrying?

The beauty of that data is that it hardly ever lies: the individual cannot control or prevent the creation of metadata. It is beyond an individual's ability to use a phone and simultaneously prevent it from communicating their location. It is beyond an individual's ability to stop the banks from generating detailed transaction histories. And, unless people give up communicating altogether, it is impossible to stop the phone and the phone company from logging which calls were made, to whom, for how long, and how frequently.
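To make that concrete, here is a minimal sketch, in Python with invented records, of how even the barest call log (who, how long, how often) yields a ranked list of a person's closest contacts. Real call-detail records carry far more than this toy data: timestamps, cell towers, device identifiers.

    from collections import defaultdict

    # Hypothetical call-detail records: just the contact and call length.
    call_log = [
        {"contact": "+000-111", "seconds": 540},
        {"contact": "+000-222", "seconds": 35},
        {"contact": "+000-111", "seconds": 1200},
        {"contact": "+000-333", "seconds": 60},
        {"contact": "+000-111", "seconds": 300},
    ]

    # Aggregate call frequency and total talk time per contact.
    calls, airtime = defaultdict(int), defaultdict(int)
    for record in call_log:
        calls[record["contact"]] += 1
        airtime[record["contact"]] += record["seconds"]

    # The top of this ranking is a plausible proxy for family and close
    # friends, inferred entirely from metadata the person cannot suppress.
    for number in sorted(calls, key=lambda n: (calls[n], airtime[n]), reverse=True):
        print(f"{number}: {calls[number]} calls, {airtime[number]}s total")

Five rows of toy data already single out one number. A phone company holds years of them.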

In a sadly basic example of the abuse that may arise, on May 30 the Washington Post wrote* about how any mobile phone on the planet can be tracked using decades-old flaws in the basic infrastructure of mobile phone networks that nobody has bothered to fix. Now, if we keep that infrastructure—on which we rely every millisecond of every day—at that level of insecurity, can you imagine how we treat the data?

But all of this is already in place. The financial sector, for example, has long used transaction data to create credit profiles. It is now looking at metadata too. We have already seen how some financial products are starting to decide whether you are creditworthy based not on what you say, but on how often you call your mother (page 30).

Version 2.0: The person the humanitarian sector helps construct

The Digital Person 2.0 is the one that the humanitarian sector is excited about. This is the world where your phone data and your financial transaction data just aren't meaningful enough, because they don't capture your interests and your intents with sufficient clarity. Plus, that data tends to be regulated under data protection laws, at least for Europeans. But Web 2.0 (ugh, that terrible term) began to generate more metadata, linked with vast amounts of other data generated online such as web surfing, friends and other relationships (both known and unknown), and purchasing habits—and to tell a whole different, or at least enhanced, story: one that makes social media companies worth billions of dollars. For instance, in 2017 Facebook in Australia was presenting how it could identify teens who feel 'worthless'.

Now think about what Facebook can do, on its many services, with the data you generate. And ask yourself what insights Facebook gets as a result of the interactions you have with other people.

But it is Data for Good!

You already know about the Digital Person 2.0 because you've heard the virtues. Often captured under the concept of Data for Good, it is an idea that data companies—including social media companies—are selling to the humanitarian sector. Think of every time Facebook has approached you with data that could help you do your job. Think of the people using social media services to contact you, or whom you are contacting as part of your job. Think about how much, or how little, control you have over that data to begin with. Then think about how Facebook was caught providing millions of data points to companies like Cambridge Analytica with no safeguards. And think about what Cambridge Analytica said it was doing with that data, and about what dozens, if not hundreds, of other firms in this game were doing.

Of course, Facebook and others want to sell the idea of Data for Good. It helps launder their business model. And it needs cleaning. Facebook was recently caught collecting vast numbers of telephone records from its users without meaningful controls against the practice (particularly on older Android phones, which, I would bet, most of the people seeking protection are using). Is that among the data Facebook is offering to 'share' with the humanitarian sector under the aegis of Good?

As recently as May, a tone-deaf Facebook held its annual conference, F8. At the conference it announced that it wanted to expand its reach into India, Bangladesh, and Pakistan to make it easier for people to give blood. It also wanted to help people share first-hand accounts of crises. And Facebook announced its dating app.

Facebook promises that it is trying to learn from its mistakes. Its head of Social Good was quoted as stating:

People come together to help each other in times of need on Facebook, and we build products to make that easier. We’re always listening and learning from our community to make sure we’re building tools they can use to do good in the world.

If Facebook is listening and learning, then this sector had better do the speaking and teaching.

This sector can say, and determine, what protection means

This is the moment for the humanitarian sector to be heard, and to shape the direction of this world when it comes to protecting the digital dignity of people. Make it known that you are concerned about the dangers surrounding the digital person, and that you are interested in making the digital and physical worlds safe for people.

Or, as is already occurring within your sector, you will just continue down the path of 'data-driven' protection that looks less and less at the person with interest, and instead consumes and generates data about them without regard for dignity or humanity, and without ensuring that you are doing no harm.

 

* By clicking on any of the above links from your office computer, you will be notifying the web server of your device, basic device information, and connection information, without any control over what is generated. If you clicked on the Washington Post article, and I hope you did, you will be notifying the Washington Post that you clicked on an article that I found using my paid account. The link to the Facebook phone-records story tells Twitter that you read the article, and tells MIT Technology Review that you read an article that came from a Twitter posting by someone else; I cannot remember who posted it, but now we are all linked. So, without any simple ability to control that, our digital persons are all linked together. Somewhere, someone is making a decision about us all as a result. It's not paranoia. It's a business model based on the exploitation of data. And it's worth billions.
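If you want to see this for yourself, here is a minimal sketch using only Python's standard library: run it, visit http://localhost:8000 in a browser, and the headers print on the server's side whether or not you ever agreed to send them. The port and the messages are my invention; the headers shown are what every web server receives on every click.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ClickLogger(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every request discloses the visitor's address, their
            # device/browser string, and, if they followed a link,
            # the page they came from.
            print("Visitor address:", self.client_address[0])
            print("User-Agent     :", self.headers.get("User-Agent"))
            print("Referer        :", self.headers.get("Referer"))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Logged.")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ClickLogger).serve_forever()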