101: Integrated Policing

What is integrated policing?

Integrated policing is the collection and centralisation of data used for policing purposes. In the era of ‘big data’, companies – often the same companies offering infrastructures for smart cities – are offering interfaces that allow police easier access to datasets. Smart cities are cities that deploy projects using the collection and analysis of data in an attempt to provide better-targeted services to inhabitants.

With the proliferation of surveillance cameras, facial recognition, open source and social media intelligence, biometrics, and data emerging from smart cities, the police now have unprecedented access to massive amounts of data. In some cases, data is used for routine police work, including investigations or responses to crime scenes. However, data is also being used to help police ‘predict’ crimes.

Large corporations such as IBM, Microsoft, Cisco, Oracle, and Palantir offer platforms which allow police to navigate through large datasets to facilitate their investigations and responses. The platforms also aim to facilitate the flow of data collected among various police jurisdictions.

In March 2016, the UK Met Police published a tender looking for a company qualified to provide an “integrated policing solution” to bring together seven databases. Within the tender, they said “the data within the seven systems contains a lot of useful intelligence but because this information is not linked, it's difficult to search and get a comprehensive picture of a person, vehicle, location or anything else that could be useful in the prevention or detection of a crime.” They hoped integrated policing would make their work more efficient, saying: “It will mean we can create just one record for every victim, witness, suspect or offender we encounter. Each record will contain all Metropolitan Police Service intelligence and known incidents involving that person and can be used again and again throughout the criminal justice process. […] Our intelligence will be better and more up-to-date, enabling our investigations to be more thorough and improving officer safety.”

Companies are competing to offer services to enhance the efficiency of the police. In New York, for instance, the Domain Awareness System, a programme built by Microsoft, allows police officers who are dispatched to “receive information about other incidents in the area, the crime history in the area, and the criminal record of any suspects.”

IBM provides data visualisation technologies to police, allowing various sets of data to be displayed on a map. For instance, they can visualise emergency calls, and if one is made from a location where cases of domestic violence have been reported, this constitutes information the police can act on. 
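The kind of cross-referencing described above can be illustrated with a minimal sketch: given the location of an incoming emergency call, find prior incident reports within a given radius. All names and data below are hypothetical, and this is a generic illustration of proximity matching, not IBM's implementation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

def prior_reports_near(call_location, reports, radius_m=100):
    """Return prior incident reports within radius_m of an emergency call."""
    lat, lon = call_location
    return [r for r in reports
            if haversine_m(lat, lon, r["lat"], r["lon"]) <= radius_m]

# Hypothetical data: two past incident reports on file.
reports = [
    {"id": 1, "type": "domestic violence", "lat": 51.5074, "lon": -0.1278},
    {"id": 2, "type": "burglary", "lat": 51.5200, "lon": -0.1000},
]
# An emergency call comes in a few metres from report 1.
nearby = prior_reports_near((51.5075, -0.1279), reports, radius_m=100)
```

Even a toy like this shows how easily past reports become automatic context for a live call – which is exactly why the accuracy and provenance of the underlying records matter.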


What are the risks of integrated policing?

The desire for integrated policing, and the increasing number of companies profiting in this arena, is symptomatic of the data-rich environment in which we now live. There is an increased adoption of technologies by governments, greater access to private sector datasets, the scraping of social media to generate so-called ‘open source intelligence’, the rise of open data, and major investments in smart cities. This data-rich environment redefines how police conduct their work, as they now have access to an ever-growing amount of data, including on citizens who have never been arrested.

Integrated policing is, at a basic level, a large database containing vast amounts of data, including personal and sensitive information. As with any database, there are risks associated with the data itself, which is always partial in scope and carries inherent biases. Retaining data without proper data retention procedures also raises questions about the continued relevance of data kept over long periods of time.

A key risk factor in relation to police databases is the element of bias and discrimination. If police officers are disproportionately inclined to investigate crime in certain areas, where the population is poorer or more ethnically diverse for example, the data will reflect – likely inaccurately – a higher crime rate in that area. Further, in the case of the UK’s Metropolitan Police, they have twice admitted institutional racism, which is highly relevant information to consider when examining the reliability of the data in their integrated policing database.

Moreover, the companies providing integrated policing technologies and services tend to focus on location data, surveillance camera footage, and data collected from smart city initiatives, all of which disproportionately represent street crime over white-collar crime.

Further, the premise that the results of data analysis can be assessed to judge whether, for example, there is evidence of discrimination, or to address bias, is flawed. The advanced forms of data analysis through which these judgements are made – often conducted using proprietary software – tend to be highly complex and classified as trade secrets, leading to poor accountability.

We are not going to stop generating data. Social media platforms are developing in such a way that publicly available information and intelligence is only increasing. Smart cities are only beginning to develop, and are already using technologies that generate a wide variety of data. If significant measures are not taken, law enforcement agencies will have seemingly blanket access to our personal data, and will be able to draw assumptions and conclusions about who we are and our purported likelihood of engaging in criminal activity. As budgets are cut and purse strings tightened, we see rhetoric from the police that reliance on technology will help streamline services, and increased spending in this area.

As the data we generate grows, our knowledge of what data we generate and what data is being collected sharply decreases. Smart cities enable access to our data without our awareness - the mere fact of walking in the street or using public transport means the individual enters into an environment where their data is seen as fair game. As the current model of smart cities redefines how we perceive our right to privacy, it is important that individuals are informed and empowered about how the data that is collected is being used.

There are many examples of integrated policing having negative consequences on members of the public. In 2014 in the US, a woman won a civil rights lawsuit against the San Francisco Police Department after they held her at gunpoint, forced her on her knees, and detained her for 20 minutes. Her car had been wrongly identified as a stolen one by the automatic number-plate recognition readers. This is one of the many examples of mistakes that can occur when police uncritically act on data-driven decisions.

In the UK, the Daily Telegraph revealed in 2012 that at least 20,000 people had been wrongly labelled as criminals by the Criminal Records Bureau (CRB – now known as the Disclosure and Barring Service) since its creation in 2002. The CRB’s role is to provide background checks for sensitive jobs and voluntary work (including working with children or in healthcare), as well as for adoption and foster care. Errors came not only from the CRB but also other agencies working on the background checks, including the police and education officials.

A US Federal Bureau of Investigation (FBI) programme is another key example of this downward spiral. The FBI’s biometrics programme ‘Next Generation Identification’ could contain up to 52 million pictures of people’s faces, as well as iris scans and fingerprints. Little is known about the programme, but it appears data on non-criminals will be stored in the same database as data on criminals. Thus, any time an employer requests a picture and a fingerprint for a background check, this information will be stored by the FBI. The FBI is now planning to include pictures taken by officers while on duty in the database.


What are some of the companies selling integrated policing infrastructure?


IBM is a smart city infrastructure manufacturer as well as a seller of integrated policing infrastructure. IBM can integrate their smart policing equipment into the broader smart city infrastructure, which includes public transport, public works, and utilities. In 2011, IBM set up an infrastructure for Rio de Janeiro, which was preparing for the 2014 Football World Cup and the 2016 Olympics. The programme integrated more than 30 agencies’ data into one centralised command centre and continues to gather data from sectors across city operations, allegedly to prevent crime but also to predict natural disasters, by gathering data on weather forecasts and seismic activity.

The interconnection of smart cities and so-called ‘smart’ policing means more data is generated for the police to collect and analyse.



Palantir is a company that offers surveillance technologies and has been exposed by Privacy International for being used as part of an illegal mass surveillance programme in Colombia.

But the American company also sells integrated policing equipment and the city of Los Angeles is one of their customers. In a promotional video, police officers describe how they use Palantir systems to track down vehicles and individuals with very limited information - just a first name and physical description or a partial number plate.

According to Wired, Palantir is also used by a variety of US federal agencies. The US Drug Enforcement Administration, the CIA, the FBI, and the Department of Homeland Security are all using Palantir. The company is expanding outside the US and is primarily targeting the Five Eyes countries (the US, Canada, the UK, Australia and New Zealand).



Oracle provides an integrated policing platform as part of a partnership with WCC Group, a company specialising in “smart search and matching”. The platform features two search engines:

1. The first search engine is called ELISE and specialises in “People Centric Search”. WCC Group, which designed the platform, gives multicultural name matching as an example of their area of expertise. ELISE is designed to store biographic data (name, age, height, weight, etc.) and biometric data (fingerprints, facial scans, etc.). WCC Group selects search algorithms based on their customers’ requirements.

2. The second search engine is called Endeca Information Discovery, and is a search platform that allows users to easily visualise data on a map or chart.
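WCC Group does not publish ELISE’s matching algorithms, but the general idea behind tolerant, multicultural name search can be sketched with a simple string-similarity measure. The sketch below uses Python’s standard-library difflib; the names, threshold, and scoring are illustrative assumptions, not the product’s actual method.

```python
import difflib
import unicodedata

def normalise(name):
    """Lower-case and strip accents so e.g. 'José' and 'Jose' compare equal."""
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.lower().strip()

def name_candidates(query, records, threshold=0.7):
    """Return records whose name resembles the query, best match first."""
    q = normalise(query)
    scored = [(difflib.SequenceMatcher(None, q, normalise(r)).ratio(), r)
              for r in records]
    return [r for score, r in sorted(scored, reverse=True) if score >= threshold]

# Hypothetical records: spelling variants of the same name should surface.
records = ["Mohammed Ali", "Muhammad Ali", "John Smith"]
matches = name_candidates("Mohamed Ali", records)
```

A search tolerant of spelling variants is precisely what makes such systems powerful – and what widens the net of people a single query can sweep up.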

The Oracle brochure provides a use case to show how the two search engines could be combined:

Consider a scenario where a robbery has been committed and only a partial fingerprint could be taken from the scene of crime. In this case ELISE’s matching algorithms would still be able to create a short list of people whose fingerprints share characteristics of the partial print. This search could be performed by a regular officer at the crime scene using a mobile device. The results could be passed back to an investigator and would form a short list of suspects. The investigator is then able to focus the rest of the investigation around a much smaller number of suspects. The investigator would use Endeca to analyze previous crime reports, witness statements and social media to discover if there is information linking any of these suspects to that crime.
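The workflow in Oracle’s scenario – score every enrolled print against a partial one and return a ranked shortlist – can be sketched in a few lines. Representing prints as sets of extracted features is a deliberate simplification (real fingerprint matching works on minutiae geometry), and every name and score below is hypothetical.

```python
def match_score(partial, candidate):
    """Fraction of the partial print's features found in a candidate's print."""
    return len(partial & candidate) / len(partial)

def shortlist(partial_print, database, top_k=3, min_score=0.5):
    """Rank enrolled prints by overlap with a partial print from a scene."""
    scored = sorted(
        ((match_score(partial_print, features), person)
         for person, features in database.items()),
        reverse=True,
    )
    return [(person, score) for score, person in scored if score >= min_score][:top_k]

# Hypothetical database mapping enrolled people to extracted print features.
database = {
    "suspect_a": {"ridge_3", "whorl_1", "delta_2", "loop_7"},
    "suspect_b": {"ridge_3", "arch_4", "arch_5"},
    "suspect_c": {"arch_9", "loop_2"},
}
# Only two features could be recovered from the partial print at the scene.
partial = {"ridge_3", "whorl_1"}
candidates = shortlist(partial, database)
```

Note that a partial print produces a list of *plausible* matches, not an identification – which is why uncritical reliance on such shortlists carries the risks discussed above.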



ShotSpotter is an example of the way police forces are diversifying the type of data they gather. The company, which is based in Newark, California, focuses on gun crime and places sensors across cities to analyse the sound of gun shots. The company describe themselves as combining “wide-area acoustic surveillance with centralized cloud-based analysis”.

The sounds that are captured are analysed (for example: Was it an actual gun shot or fireworks? What type of gun was fired? Where was the person shooting from? How many people were shooting?) and relayed to the police. ShotSpotter illustrates a classic problem of smart cities: in order to provide a solution – in this case reducing gun crime – infrastructure is built and deployed that is potentially extremely intrusive for people living in the city. Indeed, how can we ensure that sensors will only capture the sound of gun shots?
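Locating a shooter from sound alone typically relies on the same bang arriving at several sensors at slightly different times (time difference of arrival). The sketch below illustrates that principle with a brute-force grid search over candidate positions; it is a toy under assumed sensor positions, not whatever proprietary method ShotSpotter actually uses.

```python
from math import hypot

SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees C

def locate_shot(sensors, arrival_times, grid_size=200, step=5.0):
    """Estimate a shot's origin from arrival-time differences at fixed sensors.

    Brute-force grid search: pick the grid point whose predicted
    time differences (relative to sensor 0) best match the observed ones.
    """
    ref_t = arrival_times[0]
    observed = [t - ref_t for t in arrival_times]
    best, best_err = None, float("inf")
    for i in range(grid_size):
        for j in range(grid_size):
            x, y = i * step, j * step
            dists = [hypot(x - sx, y - sy) for sx, sy in sensors]
            predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
            err = sum((o - p) ** 2 for o, p in zip(observed, predicted))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Hypothetical layout: three sensors and a shot fired at (300, 400) metres.
sensors = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true_shot = (300.0, 400.0)
times = [hypot(true_shot[0] - sx, true_shot[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
estimate = locate_shot(sensors, times)
```

The point of the sketch is that localisation requires sensors that are always listening across the city – the intrusiveness is built into the technique itself.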



From CCTV cameras, to sensors in cities, to our social media accounts, the police now have access to unprecedented amounts of intelligence on every citizen, not only those who have been arrested. The business of companies that sell integrated policing platforms is to facilitate the processing of this intelligence, and therefore to encourage the race to collect more and more data. These new surveillance powers have not come with more regulation. Protecting citizens should be done with adequate safeguards for the right to privacy and with the presumption of innocence.

In a data-driven world, the police have access to ever more complex and diverse sources of data, many of which are either about individuals or can be used to identify individuals. 

The risks inherent to mass data collection and data exploitation highlighted above demonstrate why police should create and adhere to strict policies which limit the extent to which they can collect, analyse, and use our data. Adopting technology before sufficient regulation is in place undermines the public’s expectation of policing by consent.

Data is part of our identity, and even non-personal data can reveal intimate details about us and our lives. Therefore, to ensure that we live in a society where people are all treated as citizens and not as suspects, the data police collect should be necessary and proportionate and only be stored when strictly necessary. Data processing technology should not be considered a one-size-fits-all solution and should be approached with scepticism before adoption. We should be able to easily demand access to data the police hold about us, including inferences made from our data. We should also be able to demand that data held about us is corrected when it is inaccurate. The unique nature of publicly available social media content should be taken into consideration, and generally the shifting nature of information in the public domain should not undermine the right to privacy in the public domain.