Shedding light on the DWP Part 2 - A Long Day's Journey Towards Transparency

Following the publication of their investigation guide, we asked the DWP how the people they investigate get flagged. The answer? By an algorithm.

Key findings
  • We filed a series of FOI requests to understand how the DWP flags individuals who get investigated
  • The DWP uses an algorithm. Yet when we asked for more details (categories of data, code, etc.) they refused to provide them.
  • The DWP argues that releasing this vital information about the algorithm they use would "prejudice the prevention and detection of fraud and crime."
Long Read

Back in 2019, we read through a 1,000-page manual released by the UK Department for Work and Pensions (DWP) describing how they conduct investigations into alleged benefits fraud. While out in the open and accessible to anyone, the guide turned out to be a dizzying dive into a world where civil servants are asked to stand outside someone’s door to decide if they are indeed single or disabled, and have to be reminded that living together as a married couple is not an offence. The guide – which describes all of the DWP’s surveillance capabilities – also reveals the relationship the DWP maintains with the media, and how they rely on tabloids to build a narrative that the country is plagued by so-called “benefits cheats” but that the DWP does catch them in the end. You can read our analysis of the fraud investigation guide and find out what you need to know about the DWP’s surveillance practices here.

But there is one thing the guide did not answer: how do alleged cases of fraud get flagged in the first place? In other words, what is the trigger that will lead to someone finding themselves with a civil servant waiting by their door? Using the publication of the fraud investigation guide as a starting point – a welcome step towards transparency from the DWP – we decided to find out more.

What we asked and what they answered

In order to find out more, we decided to file a Freedom of Information (FOI) request with the DWP. You can find all the FOI requests we filed with the DWP and the DWP’s responses at the bottom of this page.

In July 2019, we asked them – among other questions – how cases for potential investigation are brought to the attention of the DWP. But we also wanted to ask them about a programme that was alluded to in the DWP Annual Report and Accounts 2017-2018. The report mentioned on page 63 that the DWP was developing “cutting-edge artificial intelligence to crack down on organised criminal gangs committing large-scale benefit fraud.” We first asked what the name of this “artificial intelligence” was, so we could move on to more specific questions.

For the next five months, we exchanged letters with the DWP in order to obtain answers to our questions. Eventually, in January 2020 they gave us an initial answer.

Replying to the question regarding the “cutting-edge artificial intelligence to crack down on organised criminal gangs committing large-scale benefit fraud”, they had very little to say.

Digging further

The first part of the response from the DWP suggested that investigated cases are first flagged by members of the public who inform on their neighbours, relatives, etc. While we do not doubt that some people are willing to call the authorities to snitch on a neighbour who goes on holiday while receiving benefits, we suspected this was not the primary or most regular way cases are flagged, as people who call to inform on others may not always be the most reliable source of information. This is why the data matching and data analysis system they mentioned caught our attention. Data matching and data analysis do not necessarily imply the use of advanced artificial intelligence, but they do make clear that an algorithm of some sort is being used (a generic sketch of what data matching can involve appears after the list below). We therefore asked the DWP:

  1. What the name of the system is.  
  2. Which department is in charge of conducting the data matching and data analysis.
  3. If the data matching and data analysis programme was developed in-house or if an external company was contracted to design it.
  4. What categories of data are being used for matching and what categories of data are being used for analysis.
  5. What criteria/indicators are used by this system to flag someone as likely to be committing fraud.
  6. If they could send us the code of the algorithm that is being used.
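
We do not know how the DWP’s data matching actually works; that is precisely what these questions were meant to establish. For readers unfamiliar with the term, the sketch below is a purely hypothetical illustration of what “data matching” can mean in general: joining a benefits dataset with another data source on a shared identifier and flagging discrepancies for human review. Every dataset, field name and threshold in it is invented for illustration and does not come from any DWP material.

```python
# Purely illustrative sketch of generic "data matching", NOT the DWP's system:
# join a benefits dataset with another data source on a shared identifier and
# flag discrepancies for human review. All datasets, field names and the
# tolerance threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class ClaimRecord:
    national_insurance_no: str  # hypothetical matching key
    declared_income: float      # figure reported by the claimant


@dataclass
class ExternalRecord:
    national_insurance_no: str
    recorded_income: float      # figure held by another (hypothetical) source


def match_and_flag(claims, external, tolerance=500.0):
    """Match the two datasets on the shared identifier and flag claims whose
    figures diverge by more than `tolerance`, for manual review."""
    external_by_id = {rec.national_insurance_no: rec for rec in external}
    flagged = []
    for claim in claims:
        other = external_by_id.get(claim.national_insurance_no)
        if other is None:
            continue  # no matching record, nothing to compare
        if abs(claim.declared_income - other.recorded_income) > tolerance:
            flagged.append(claim.national_insurance_no)
    return flagged
```

Even in this toy form, the choice of datasets, matching keys and tolerance thresholds – precisely the kind of information we asked about in questions 4 and 5 – determines who ends up under investigation.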

We also wanted to find out more about the “cutting-edge artificial intelligence” being used. In their previous response, the DWP had not told us the name of the programme, despite our asking. We wanted the name so that we could move on to more precise questions in the future. We therefore pushed them further by:

  7. Stressing that they had not answered our question about the name of the programme and asking them again.
  8. Asking how the artificial intelligence programme operates when detecting and preventing fraud.
  9. Asking how the artificial intelligence programme is capable of predicting fraudulent behaviour by specific individuals.

Hitting the wall

The questions we asked had one clear goal: shedding light on the technical nature of the algorithm the DWP uses, and understanding what data is used, and how, to identify who is investigated for fraud. The DWP’s answer was equally clear: they have no intention of revealing that information, thereby preserving the opacity of the algorithms they use to investigate fraud.

Their answers to our questions were as follows:

  1. Could you please tell us if the data matching and data analysis system has a name and if so, what the name is?
  2. Which department is in charge of conducting the data matching and data analysis?
  3. Was the data matching and data analysis programme developed in-house or was an external company contracted to design it? If the latter, what is the name of the external company contracted to design it?
  4. What are the categories of data that are being used for matching and what categories of data are being used for analysis? Could you give examples?
  5. What are the criteria/indicators used by this system to flag someone as likely to be committing fraud?
  6. Could you please send us the code of the algorithm that is being used?

Regarding the “cutting-edge artificial intelligence to crack down on organised criminal gangs committing large-scale benefit fraud” that the DWP is developing, we were informed in March 2020 that the programme did not have a specific name. The DWP added that they are “exploring the potential use of techniques such as machine learning and network analysis. However, these are still in development.”
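
The DWP only told us that it was “exploring the potential use of techniques such as machine learning and network analysis”. As a rough, hypothetical illustration of what “network analysis” usually means in a fraud-detection context, and of why its inner workings deserve scrutiny, the sketch below links claims that share an attribute (here, an invented bank account field) into a graph and surfaces unusually large clusters for human review. It is not the DWP’s system: the data, the linking attribute and the threshold are all assumptions made for the example.

```python
# Generic illustration of "network analysis" in a fraud-detection context,
# NOT the DWP's system: link claims that share an attribute (here an invented
# bank account field) into a graph and surface unusually large clusters for
# human review. All data and the cluster-size threshold are hypothetical.
from collections import defaultdict

import networkx as nx  # widely used graph-analysis library

# Hypothetical claims as (claim_id, bank_account) pairs.
claims = [
    ("claim-1", "acct-A"),
    ("claim-2", "acct-A"),
    ("claim-3", "acct-A"),
    ("claim-4", "acct-B"),
]

graph = nx.Graph()
claims_by_account = defaultdict(list)
for claim_id, account in claims:
    graph.add_node(claim_id)
    claims_by_account[account].append(claim_id)

# Connect every pair of claims that share a bank account.
for linked in claims_by_account.values():
    for i, first in enumerate(linked):
        for second in linked[i + 1:]:
            graph.add_edge(first, second)

# Flag clusters above an arbitrary size threshold for manual review.
SUSPICIOUS_CLUSTER_SIZE = 3
for cluster in nx.connected_components(graph):
    if len(cluster) >= SUSPICIOUS_CLUSTER_SIZE:
        print("Cluster flagged for review:", sorted(cluster))
```

Even a toy version like this makes clear why the linking attributes and thresholds matter: they determine which groups of people end up flagged, and that is exactly the information the DWP declined to disclose.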

Automating welfare: time for transparency

The response we got from the DWP is particularly revealing in that it displays two sides of the same problem: the complete lack of transparency when it comes to how governments are using algorithms. On the one hand, we have a government body refusing to tell us how the algorithms they use to flag individuals for investigation work, because they fear that revealing this will facilitate fraud. On the other hand, we have that same government body over-promoting their cutting-edge artificial intelligence in their annual report, only to admit, when pressed, that the system is at most in its early stages of development.

This is particularly alarming because the distribution of welfare is an extremely sensitive topic, and failure to access such assistance can have severe consequences for those seeking it. In the past three years, at least three people have died after the DWP failed to provide them with the benefits they should have received. Those cases are bitter reminders that when the DWP makes mistakes, people’s survival is on the line.

In this extremely sensitive context, we cannot afford opacity. Mistakes will always happen, whether they result from human error or from a machine. But when they do happen, we need processes in place that allow for accountability, responsibility and rectification. The DWP argues that no one will see their benefits withdrawn as a result of an “automated decision” and that each decision is always made by a human being. Yet the very fact that decisions to investigate are the result of an algorithm lays the ground for inequality, especially when the agency responsible refuses and/or is unable to provide clarity as to how someone can be flagged as a fraud suspect in the first place. There have been multiple stories illustrating how algorithms reproduce human biases and prejudices, and racial bias in particular. In the context of benefits delivery, where the populations affected are often in vulnerable and precarious situations and exposed to societal prejudices, we need to be acutely aware of how those algorithms work and how decisions are made, starting with decisions about who should be investigated, and on the basis of what information.

This does not mean governments should not use algorithms to help them make informed decisions, but such a process requires human intervention to make the final decision and to assess the course of action proposed by the system. Furthermore, the public is entitled to understand how those algorithms work and why they decide for or against them. Individuals also have the right to know how their data is used by public entities to determine their eligibility for assistance in the first place, and the scrutiny they are subject to as recipients of such assistance.

The DWP knows all too well the need for accountability when technology is used and personal data processed as part of their activities. In June 2020, the Court of Appeal ruled that the DWP’s failure to fix their computer system, which resulted in cash losses for benefits recipients, was irrational and unlawful.

More transparency will also mean better investment of public money. As Privacy International has exposed in the past with the case of the London Counter Fraud Hub, the absence of clarity around the use of automation in government programmes means it may not be clear which programmes are in use, which ones have been dropped, and why. When programmes are dropped, it is essential for the public, and policy makers in particular, to understand how and why.

The public is entitled to know what technologies are being developed, which ones are currently being used, how they work and how they impact their rights. To date, despite our efforts, we still do not know:

  • The categories of data being used to flag someone as likely to be committing fraud.
  • The relevant criteria used by any fraud-detection systems operated by the DWP.
  • The code of the algorithm.

In this instance, transparency is key for the protection and enforcement of beneficiaries’ rights. Under the Data Protection Act 2018, individuals have a right not to be subjected to automated decision-making when the decision is significant – meaning it entails adverse legal effects or otherwise significantly affects the data subject – unless required or authorised by law. As the loss or suspension of benefits arguably entails serious consequences for the individuals affected, it is crucial that any potential instances of automated decision-making are carefully monitored, along with the specific workings of any technology likely used for this purpose.

The bigger picture: what welfare system do we want?

Beyond the need for transparency that we have highlighted, the discourse around the automation of the welfare state begs one bigger question: what is the welfare state we want?

Unfortunately, as highlighted by the former UN Special Rapporteur on Extreme Poverty in his October 2019 report on technology and welfare, the current trends towards welfare automation focus on punishment and on reducing access to benefits. We believe a good welfare system is one that delivers benefits to all those who need them, with delivery at the core of its mission rather than fighting alleged fraud cases.

The various social protection programmes being deployed in response to the Covid pandemic further illustrate the increased reliance on data processing and technology for the delivery of such programmes. Yet surveillance and data collection need not be the price people have to pay to be protected at a time when we are all more vulnerable.

People’s enjoyment of their economic, social, and cultural rights should not come at the cost of being subjected to surveillance, control, and punishment.

The DWP urgently needs to change its approach before it is too late and irreparable harm is done, by ensuring it builds systems that provide the necessary safeguards to protect people and their data, and by guaranteeing it complies with its legal obligations.

In the meantime, we will keep demanding transparency and accountability in our efforts to challenge their existing opaque, unaccountable systems.