Automating the hostile environment: uncovering a secretive Home Office algorithm at the heart of immigration decision-making

“IPIC” (“Identify and Prioritise Immigration Cases”) is an algorithm used by the UK Home Office that automatically identifies and recommends migrants for particular immigration decisions or enforcement action. After a year of submitting Freedom of Information Act requests, we finally received some information on this secretive AI tool used to decide the fate of migrants.

Artificial intelligence decision-making systems have in recent years become a fixture of immigration enforcement and border control. This is despite the clear and proven harmful impacts they often have on individuals going through the immigration system. More widely, the harms of automated decision-making have become increasingly plain for all to see: from systems that encode bias and discrimination, as happened in the case of an algorithm used to detect benefit fraud in the Netherlands, to inaccurate software that had horrific consequences for sub-postmasters caught in the Post Office Horizon scandal.

All the way back in 2020, the first warning signs were apparent in the immigration context when the Home Office agreed, under the threat of litigation, to withdraw a visa streaming algorithm that discriminated between certain nationalities. This year, we at Privacy International have been investigating AI decision-making systems used by the Home Office. What we have found is that a number of highly opaque and secretive algorithms permeate immigration enforcement and play a role in decisions that can have life-changing consequences for the migrants subject to them. This is all without any information being provided to migrants about the existence of these algorithms or how they use their personal data.

The most concerning tool we have uncovered information about so far, given how far its use appears to extend across the immigration system, is called “Identify and Prioritise Immigration Cases” (known as IPIC). It automatically identifies and recommends migrants for particular immigration decisions or enforcement action by the Home Office. It took a year of submitting freedom of information requests, and eventually a complaint to the ICO, for the Home Office to disclose information about how this AI tool functions. But even now, despite having disclosed some information, the Home Office still refuses to tell us explicitly which actions and decisions the tool provides recommendations about.

The basis for this refusal has consistently been that migrants would use this information to 'game' the system by submitting false information to obtain favourable decisions. It is illogical to suggest that a system could be 'gamed' on the basis of high-level information about the nature of the recommendations the algorithm generates, as this does not explain how the tool processes information to get there. But this assertion also does something more pernicious. It is an extension of a wider narrative pushed by successive governments that migrants are abusing and gaming the immigration system, most recently encapsulated by former Home Secretary James Cleverly's suggestion that suicidal migrants detained in the documented poor conditions at RAF Wethersfield were lying about their mental health.

Despite the obfuscatory approach taken by the Home Office when disclosing information to us, it is clear from the internal documentation we have seen so far that the algorithm is used across the immigration system. Training materials provided to Home Office officials refer to the algorithm making recommendations about EU Settlement Scheme cases, the conditions to which individuals on immigration bail are subject, and deportations, referred to as 'returns'.

These are all decisions that, if made incorrectly, can lead to individuals suffering catastrophic harm, and in these circumstances meaningful human review of the algorithm's recommendations is more important than ever. But from what we have seen in the disclosure, the algorithm is designed in ways that push Home Office officials towards accepting its recommendations. For example, officials have to provide an explanation if they reject a recommendation, whereas this is not the case if they accept it. Similarly, a rejected recommendation can be changed for longer than an accepted one. In view of punishing targets and casework backlogs, what is to stop officials rubber-stamping recommendations because it is so much easier, and so much less work, than looking critically at a recommendation and rejecting it?
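To make the asymmetry concrete, here is a purely illustrative sketch of a review workflow with the properties described in the disclosure. It is not the Home Office's actual implementation, which we have not seen; the function names and edit-window durations are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical edit windows: the disclosure says a rejected recommendation can be
# changed for longer than an accepted one; these specific durations are invented.
ACCEPT_EDIT_WINDOW = timedelta(hours=1)
REJECT_EDIT_WINDOW = timedelta(days=7)


@dataclass
class Review:
    decision: str              # "accept" or "reject"
    decided_at: datetime
    reason: str | None = None  # only required when rejecting


def review_recommendation(decision: str, reason: str | None = None) -> Review:
    """Record a caseworker's review of an algorithmic recommendation.

    Accepting needs no justification; rejecting does. This is the asymmetry
    that nudges a busy caseworker towards simply accepting the recommendation.
    """
    if decision == "reject" and not reason:
        raise ValueError("A written explanation is required to reject a recommendation.")
    return Review(decision=decision, decided_at=datetime.now(), reason=reason)


def can_still_change(review: Review, now: datetime | None = None) -> bool:
    """Rejected recommendations stay editable for longer than accepted ones."""
    now = now or datetime.now()
    window = REJECT_EDIT_WINDOW if review.decision == "reject" else ACCEPT_EDIT_WINDOW
    return now - review.decided_at <= window


# Acceptance takes one step; rejection demands extra written work.
accepted = review_recommendation("accept")
rejected = review_recommendation("reject", reason="Recommendation relies on outdated bail conditions.")
```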


Accountability over the design and function of Home Office algorithms, for those subject to them, is therefore needed more than ever to prevent important decisions being left to robo-caseworkers. The disclosures we have seen show that IPIC processes the personal data of migrants across the spectrum of information collected through immigration enforcement activities. This includes everything from data relating to a person's detention, to their health and vulnerabilities, to information obtained from electronic monitoring where individuals are subject to GPS tracking bail conditions, and much more. If that underlying data is wrong, as has already happened in thousands of cases where individuals were listed with incorrect names, photographs or immigration status, there is nothing to stop migrants being wrongly recommended for interventions by immigration enforcement.

There is a way out of this Kafkaesque reality, in which anyone going through the immigration system may not know that their information has been processed by an algorithm, let alone that it led to incorrect action being taken in their case. The Home Office must urgently come clean and publish meaningful information about the existence and functions of its algorithmic decision-making systems, and their impact on those affected by these decisions.

An oven-ready infrastructure is already in place to enable this: government initiatives, such as the Algorithmic Transparency Recording Standard Hub, provide a template for public authorities to give clear information about the algorithmic tools they use and why they are using them.

The alternative is to continue to push a hostile environment and to treat people as though they are 'gaming' a system, rather than living within and trying to progress through highly complex immigration processes where the cards are often stacked against them. Not getting this right will continue to exclude migrants from long-standing rule of law principles that entitle all of us to understand why and how a public decision affecting us has been made, and ultimately to live in dignity.

We plan to follow up this initial analysis with a detailed review of IPIC and its various functionalities once we have completed a full review of the disclosure we have received.