Who do you think you are? Biometrics and the police
The lack of oversight of the use of facial recognition technology by the police has drawn the attention of multiple human rights and political commentators. What is the problem, and what should be the way forward? Guinevere Poncia, Political Consultant at Dods Monitoring, takes a closer look.
Biometrics is an established aspect of policing in the UK. DNA profiling had its first breakthrough in Dawn Ashworth’s murder case thirty-three years ago. Even during Jack the Ripper’s murderous spree in 1888, policemen employed rudimentary techniques, such as photographing the eyes of one of his victims, Mary Kelly, in the hope that his face (as the final image she would have seen) was preserved on her retina. Since then, biometric technology has come on in leaps and bounds, finding its way into every part of our day-to-day lives, from unlocking phones with fingerprints to automated self-service at e-passport gates. However, biometrics also bring a minefield of risks relating to human rights, data protection, and technological bias.
Facial recognition technology is used to identify criminals on watch lists and is deployed to combat potential terrorist threats at large public gatherings. However, the images on those watch lists can come from anywhere. A recent report slammed police forces in the US for relying on "garbage" data, such as poor forensic sketches and filtered social media images, and the police in the UK have not ruled out sourcing images from social media accounts. Inevitably, poor data increases the likelihood of an innocent person being subjected to questioning or custody.
There is currently no regulation governing how facial recognition technology is used or how the data it gathers is subsequently managed – a glaring gap in the era of GDPR. On Tuesday this week, a case was heard against the use of automatic facial recognition (AFR) technology by South Wales Police. Ed Bridges, with the support of the human rights organisation Liberty, brought the case after his biometric data was captured whilst he was walking around Cardiff. He claims that the use of AFR breaches human rights and that the lack of regulation fails to protect the public. South Wales Police is the biggest user of AFR in the UK; however, the practice also caused a furore in London when a man was fined for disorderly conduct after refusing to show his face to software being trialled by the Metropolitan Police.
What makes these cases more concerning, however, is that human bias can be hardwired into technological design. Facial recognition technology has been shown to discriminate against women and ethnic minorities because it is more likely to misidentify them, compounding concerns around unconscious racial and gender bias. An investigation by Big Brother Watch found the technology to be staggeringly inaccurate: 98% of the matches flagged by the Metropolitan Police’s system were wrong. These ‘false positive’ results meant that at Notting Hill Carnival in 2017, 95 people were wrongly identified as criminals.
On 1st May, Darren Jones stood up in Westminster Hall and declared that “the rules in place for the use of facial recognition technology are non-existent”. He is one of many MPs speaking out against the non-consensual gathering of the public’s biometric data by the police and the storage of innocent people’s biometric data on police systems.
This practice prevails despite a 2012 High Court ruling that the retention of these images increased the “risk of stigmatisation of those entitled to the presumption of innocence”. The Commons Science and Technology Committee has also raised concerns about the police’s inability to delete images from their systems effectively, with Chair Norman Lamb accusing the police of making do with current systems even when doing so results in the retention of innocent people’s data.
Information Commissioner Elizabeth Denham recently made the issue a “priority” for her office, commenting in a blog post that “there is a lack of transparency about its use and [there] is a real risk that the public safety benefits derived from the use of FRT will not be gained if public trust is not addressed”.
Despite numerous calls to build public trust, the Government’s movement in this area has been slow. The Biometrics Strategy was published five years late and given a cold reception because it failed to propose any legislation for regulating the use of facial recognition technology. The Biometrics Commissioner heralded it as “the basis for a more informed public debate” on the future use of biometrics, but pointed out that it lacked forward planning, “as one would expect from a strategy”. Likewise, Lamb criticised it for failing to do justice to the serious ethical implications of retaining facial images and said the consultation “smacks of continuing to kick the can down the road”.
However, the Home Office and the Department for Digital, Culture, Media and Sport have now announced that the Centre for Data Ethics and Innovation will partner with the Cabinet Office’s Race Disparity Unit to investigate the potential for bias in algorithmic decision-making. The independent Biometrics and Forensics Ethics Group has also recently published a report setting out ethical principles to guide police facial recognition trials.
Despite these moves, the Government will continue to face demands to tackle the unethical use of biometric data and must ensure that regulation effectively safeguards human rights and data privacy, whilst preventing bias and preserving public safety.