This month, WIRED UK drew our attention to a new crop of data showing that three quarters of police forces in England and Wales now have access to identity-verification biometric fingerprint scanners.

Known as ‘Strategic Mobile devices’, these small electronic scanners clip onto smartphones, scan fingerprints and compare them against 12 million biometric records held in two databases: IDENT1, which holds the fingerprints of people taken into custody, and IABS, which holds the fingerprints of foreign nationals recorded when they enter the UK.

The increase in usage is part of a push by forces to improve efficiency and refine identity-checking processes, but researchers and civil liberties organisations have long drawn attention to these tools’ propensity to perpetuate racial bias.

With only nascent regulatory systems in place, and a lack of checks and balances, the ‘stop and scan’ system is vulnerable to abuse by police. Tech is currently evolving faster than regulation can keep pace, and as the gap widens, the risk of improper handling increases.

We know that stop and search orders overwhelmingly target people of colour, particularly black men. We now also know that the volume of stop and scans is significantly higher in communities of colour. As people of colour continue to be let down by our institutions, questions arise over the purpose of public technology.

Forces argue that fingerprint scanning contributes to the process of keeping us safe, but if it has the pernicious effect of sustaining racial bias, can we really consider it a universal public good? When does technology aiming to derive insights or inform decision-making usher in unlawful discrimination?

Tech is now ubiquitous in our way of life. Sectors like AI, once considered purely technical and the preserve of experts, have been propelled into mainstream understanding. We use this up-to-the-minute technology every day in ways we sometimes don’t consider, like unlocking our iPhones or using a payment app to buy a coffee. It was always inevitable that this tech would make its way into policing.

Police have taken pains to de-escalate public reaction to biometrics by arguing that they merely accelerate and simplify the process of suspect identification, and produce a more complete picture of crime rates. It has also been suggested that austerity measures have necessitated doing ‘more with less’.

Where Live Facial Recognition (LFR) imagery can capture anybody walking past one of the stationed vans, with stop and scans police have to suspect you have committed a crime, or suspect you are lying about your identity, hence the alignment with stop and search. But as with stop and searches, the context matters: stop and scans can easily be used in situations of fear, mistrust and suspicion.

Born out of the 1981 Brixton riots, stop and searches are totemic of the hostility black men are often shown by those in positions of power. Tensions are arguably getting worse, not better: Section 60 orders, which dispense with the ‘reasonable suspicion’ requirement, have led to a five-fold increase in the number of people stopped and searched in London.

Where stop and searches are invasive and oppressive for civilians, stop and scans carry the added risks of improper handling of the tech and an intangible audit trail. Unbelievably, most police forces do not record why a person is having their identity checked, and there is no race-based data for fingerprint scans available for scrutiny.

During the first national lockdown of this year, scans across all police forces with access to the tech increased by 44%, and by 88% in London, even as crime dropped in this period.

Given that black Londoners are over 12 times more likely to be stopped and searched than white people, a rise in proactive stop and scan use is both concerning and revealing. The supposed neutrality of biometric scanners is undermined if police can use the system whenever and on whomever they like without consequence, pointing to a worrying lean towards ‘algocratic’ governance, a system in which algorithms govern.

The lack of transparency within the process does nothing to build trust in policing in this country, especially among communities that already lack trust in the police.

To date, efforts to mitigate bias and inequality within biometric policing tend to focus on correcting the technology alone. IBM, which supplies the biometric tech used by forces in many countries, recently announced a new dataset specifically intended to help develop ‘fair and accurate algorithms’. While this should be lauded to some extent, algorithmic developments must be accompanied by broader cognisance and behavioural shifts to have any meaning.

Biometrics can – and should – be made more representative and considerate of ethnic diversity, but if the tech continues to be operationalised in a way that racially profiles, we’re stuck at an impasse. If no accountable and transparent framework around stop and scans can be mutually agreed, the scanners should have no place within policing.

There is clear impetus for widespread use of tech in our society. Tech can make processes and infrastructure greener, and our public services, like transport, more efficient and more economical. As we take greater strides towards a fully digital global society, we need to be prepared to confront the racism and inequality embedded in our infrastructure.

To be a person of colour today is to live with the devastating reality that many of the institutions that are supposed to protect you are in some way biased against you. Racism is insidious; its violence pervades all structures: no one organisation, or police force, is exempt. If we are to bring about a socially just post-pandemic future, then we need urgent collective action to create fair and just biometric technology.
