Mapping algorithms in the justice system

Complex algorithms are used to help public bodies manage and understand the vast quantities of information they hold about people, places and events.

The need for transparency

Technology and algorithms can save time and resources. However, their use has grown quickly, without regulation or a full understanding of the consequences.

We know little about how and when they’re used in the justice system, and it’s possible they could infringe human and civil rights.

We’re calling on government and public bodies to be transparent and take responsibility for how algorithms are being used by police, prison authorities and border forces. Human rights and equality must be central to the design, development and deployment of any such system.

For more information, read our report on the use of algorithms in the criminal justice system.

Facial recognition technology scans faces in public spaces and at events such as concerts or sports matches. It’s used to identify people who are on a watch list, who may be suspected criminals or vulnerable people.
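
The matching methods used by particular forces are not published in detail, but watch-list identification generally works by comparing a numerical template of a detected face against templates of the people on the list. The sketch below is a hypothetical illustration of that comparison step only: it assumes templates have already been produced by a separate face-encoding model, and the names, numbers and threshold are invented.

    import math

    def cosine_similarity(a, b):
        """Similarity between two face templates (feature vectors)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def match_against_watchlist(face_template, watchlist, threshold=0.8):
        """Return the best-matching watch-list identity if its similarity
        clears the threshold, otherwise None. The threshold is illustrative:
        real systems tune it, trading false matches against missed matches."""
        best_name, best_score = None, 0.0
        for name, template in watchlist.items():
            score = cosine_similarity(face_template, template)
            if score > best_score:
                best_name, best_score = name, score
        return (best_name, best_score) if best_score >= threshold else (None, best_score)

    # Invented data: short vectors stand in for the high-dimensional
    # templates a real face-encoding model would produce.
    watchlist = {
        "person A": [0.9, 0.1, 0.3, 0.4],
        "person B": [0.2, 0.8, 0.5, 0.1],
    }
    print(match_against_watchlist([0.88, 0.12, 0.28, 0.41], watchlist))  # matches "person A"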

The technology raises issues of privacy and consent. Its use must operate within the rule of law, with its lawful basis explicitly defined and publicly available.

Listen to the podcast: Facial recognition technology - Who is watching us?

Predictive policing algorithms use historical crime data to predict where future crimes might occur; a simplified sketch of this approach follows the list below. They help police forces make decisions on a wide variety of issues, such as:

  • where to send an officer on patrol
  • who’s at risk of being a victim of domestic violence
  • possible perpetrators of domestic violence
  • who to pick out of a crowd to prevent offending
  • who to let out on parole
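
None of the forces’ actual models are public, so purely as an illustration of the general idea, the sketch below reduces ‘hotspot’ prediction to counting previously recorded crimes per area and ranking areas by that count. The areas and records are invented.

    from collections import Counter

    # Invented historical records: (area, offence) pairs from past police data.
    recorded_crimes = [
        ("area 1", "burglary"), ("area 1", "vehicle theft"),
        ("area 2", "burglary"), ("area 1", "criminal damage"),
        ("area 3", "burglary"),
    ]

    def rank_hotspots(records, top_n=2):
        """Rank areas by the number of previously recorded crimes.
        The ranking reflects where crime was recorded, not necessarily
        where it actually happened - a limitation discussed below."""
        counts = Counter(area for area, _offence in records)
        return counts.most_common(top_n)

    print(rank_hotspots(recorded_crimes))  # [('area 1', 3), ('area 2', 1)]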

Concerns over predictive policing include:

  • historic police data not always providing an accurate picture of crime in an area – for example, there may be crimes which go unreported, or areas which are historically over-policed
  • algorithms easily becoming biased – for example, people from a mixed ethnic background are twice as likely to be arrested as white people, so algorithms built on arrest data risk inheriting this bias

Individual risk assessment algorithms use information about people’s behaviour and circumstances to determine who’s likely to commit a crime or become a victim of one; a simplified sketch of this kind of scoring follows the list below. These tools are used to predict the likelihood of:

  • reoffending
  • being reported missing
  • being a gang member or a victim of gang violence
  • committing domestic violence or other serious crime
  • becoming a victim of serious crime or domestic violence
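
The tools in use differ and their workings are rarely published. Many, though, take the broad form of a points-based score: factors recorded about a person each carry a weight, and the total is banded into low, medium or high risk. The factors, weights and bands below are entirely invented to show the shape of such a calculation, not any real tool.

    # Invented factor weights - illustrative only, not taken from any real tool.
    FACTOR_WEIGHTS = {
        "previous_convictions": 3,
        "age_under_25": 2,
        "known_associates_flagged": 2,
        "stable_accommodation": -2,
        "in_employment": -1,
    }

    def risk_score(person):
        """Sum the weights of the factors recorded against a person."""
        return sum(w for factor, w in FACTOR_WEIGHTS.items() if person.get(factor))

    def risk_band(score):
        """Band a score into the low/medium/high labels that typically
        drive decisions about supervision or intervention."""
        if score >= 5:
            return "high"
        if score >= 2:
            return "medium"
        return "low"

    person = {"previous_convictions": True, "age_under_25": True, "in_employment": True}
    score = risk_score(person)
    print(score, risk_band(score))  # prints: 4 medium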

Using algorithms to assess victimhood has been controversial and some people are concerned that, much like predictive policing programmes, these algorithms contain inherent racial and social biases.

The gangs matrix was developed by the Metropolitan Police following the 2011 London riots. The matrix identifies:

  • gang members
  • those considered to be susceptible to joining gangs
  • potential victims of gang violence

The gangs matrix has received considerable criticism. The London Mayor's Office for Policing and Crime ordered reform in December 2018 after the matrix was found to be potentially discriminatory. The Metropolitan Police state that they have taken steps to address the criticisms, but the matrix continues to attract concerns over its compliance with equality and human rights standards.

A Ministry of Justice digital reporting tool sorts and analyses live data on prison inmates' conduct. It informs decisions about offender management, such as which prison or wing someone is placed in and what activities they can do.

The data includes information such as:

  • involvement in assaults or disorder
  • demographic information and location history
  • seizures of contraband such as drugs and mobile phones

New incidents are logged on the database shortly after they take place, which can result in new scores being generated regularly. Inmates' scores can change to take account of improvements or deterioration in their behaviour.
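
The tool's actual scoring rules have not been published. Purely to illustrate the behaviour described above – a score recalculated as new incidents are logged, so that recent conduct moves it up or down – the sketch below uses invented incident weights and an assumed 90-day recency rule.

    from datetime import date, timedelta

    # Invented incident weights - illustrative only.
    INCIDENT_WEIGHTS = {"assault": 5, "disorder": 3, "contraband_seizure": 4}

    def conduct_score(incidents, today, recent_days=90):
        """Recalculate a conduct score from logged incidents. Incidents in
        the last `recent_days` count in full; older ones count half, so
        sustained good behaviour gradually lowers the score."""
        score = 0.0
        for incident_type, incident_date in incidents:
            weight = INCIDENT_WEIGHTS.get(incident_type, 1)
            if today - incident_date <= timedelta(days=recent_days):
                score += weight          # recent incident: full weight
            else:
                score += weight * 0.5    # older incident: reduced weight
        return score

    incidents = [
        ("assault", date(2018, 11, 2)),
        ("contraband_seizure", date(2019, 3, 20)),
    ]
    print(conduct_score(incidents, today=date(2019, 4, 1)))  # 2.5 + 4 = 6.5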

In 2017, the Independent Chief Inspector of Borders and Immigration reported that, since 2015, the UK Visas and Immigration Directorate had been developing and rolling out a ‘streaming tool’ to assess the risks attached to each visa application.

The streaming tool is fed with data on immigration abuses – for example, breaches of visa conditions after entry to the UK. Using a decision tree approach, the tool rates applications as one of three risk levels (a simplified sketch of such a tree follows the list below):

  • green – low risk: more likely to have positive attributes and evidence of compliance
  • amber – medium risk: limited or equally balanced evidence of negative and positive attributes, so some potential for refusal
  • red – high risk: applications appearing to have a greater likelihood of refusal because of the individual’s circumstances
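
The checks inside the streaming tool are not public. Purely to show what a decision tree rating means in practice, the sketch below walks an application through a short sequence of yes/no questions and returns one of the three ratings; every question here is an invented placeholder, not the Home Office's actual criteria.

    def stream_application(application):
        """Rate an application green/amber/red via a hypothetical decision
        tree. The branching questions are placeholders, not the real
        (unpublished) criteria."""
        if application.get("previous_breach_of_conditions"):
            return "red"    # past non-compliance: greater likelihood of refusal
        if application.get("evidence_of_compliance") and application.get("sponsor_verified"):
            return "green"  # positive attributes and evidence of compliance
        return "amber"      # limited or mixed evidence: potential for refusal

    print(stream_application({"evidence_of_compliance": True, "sponsor_verified": True}))  # green
    print(stream_application({"previous_breach_of_conditions": True}))                     # red
    print(stream_application({"evidence_of_compliance": True}))                            # amber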

There’s little information about this tool in the public domain, raising concerns about transparency and in-built bias. It’s possible that, without proper frameworks and oversight, applicants from certain communities may be discriminated against.

Where algorithms are currently used in England and Wales

Our map below shows where complex algorithms are being used in the justice system.

The information we used came from a variety of sources.

There’s no centralised, publicly available information on this topic, so it’s unlikely our picture is complete. We’ll add more details as they come into the public domain.