Police face calls to end use of facial recognition software

Police are facing calls to halt the use of facial recognition software to search for suspected criminals in public after independent analysis found matches were only correct in a fifth of cases and the system was likely to break human rights laws.

Academics from the University of Essex were granted access to six live trials by the Metropolitan police in Soho, Romford and at the Westfield shopping centre in Stratford, east London.

They found the system regularly misidentified people who were then wrongly stopped. They also warned of “surveillance creep”, with the technology being used to find people who were not wanted by the courts. And they warned it was unlikely to be justifiable under human rights law, which protects privacy, freedom of expression and the right to protest.

Similar facial scanning software is being used in shopping centres, where it is embedded in advertising hoardings to track the shoppers’ age, gender and even mood, and has been deployed by other police forces in Manchester, Leicester and South Wales – where it will be used this weekend at the Swansea airshow. Officers will be scanning for “persons of interest” and “other persons where intelligence is required” as well as wanted criminals, the force said.

What is facial recognition software?

Automated facial recognition (AFR) is technology that can identify people by analysing and comparing facial features to those held in a database.
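In broad terms, such systems reduce each face to a numeric feature vector and compare it against stored templates, flagging a match when the similarity clears a threshold. The sketch below is a minimal illustration of that idea, using toy hand-written vectors and cosine similarity; the names, numbers and threshold are illustrative assumptions, not details of any deployed system, which would use deep-learned embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match(probe, watchlist, threshold=0.9):
    """Return (name, score) of the best watchlist entry above threshold, else None."""
    name, template = max(watchlist.items(),
                         key=lambda item: cosine_similarity(probe, item[1]))
    score = cosine_similarity(probe, template)
    return (name, score) if score >= threshold else None

# Toy templates standing in for learned face embeddings (hypothetical data).
watchlist = {
    "subject_a": [0.9, 0.1, 0.3],
    "subject_b": [0.2, 0.8, 0.5],
}
print(match([0.88, 0.12, 0.31], watchlist))  # very close to subject_a: flagged
print(match([0.5, 0.5, 0.5], watchlist))     # ambiguous probe: no flag
```

The threshold is the crux: set it low and more innocent passersby are wrongly flagged; set it high and wanted people slip through, which is why independent accuracy audits matter.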

Where is it used?

You might recognise it from auto-tagging of pictures on Facebook or on your phone, but it is increasingly being used out in the real world.

Shoppers at retail parks like Westfield, for example, are routinely scanned and recorded by dozens of hidden cameras built into the centres’ digital advertising billboards. The cameras can determine not only your age and gender, but your mood, cueing up tailored advertisements within seconds, thanks to facial detection technology.

Police have also used the technology to scan crowds at events and demonstrations to identify ‘people of interest’.

What are the concerns about it?

In the UK, a court action claims that South Wales police violated privacy and data protection rights by using facial recognition technology on individuals. The police force defended its actions, saying that AFR is similar to the use of DNA to solve crimes and would have little impact on those who were not suspects.

The UK’s biometrics commissioner has warned that police forces are pushing ahead with the use of AFR systems in the absence of clear laws on whether, when or how the technology should be employed.

The pressure group Liberty has denounced AFR as ‘arsenic in the water supply of democracy’, and the city of San Francisco has already barred the use of automatic facial recognition by law enforcement.

A crucial argument against police deployment of the technology is that it doesn’t yet work very well. It is especially inaccurate and prone to bias when used against people of colour: a test of Amazon’s facial recognition software found that it falsely identified 28 members of US Congress as known criminals, with members of the Congressional Black Caucus disproportionately represented.

David Davis MP, a former shadow home secretary, said the research by Prof Peter Fussey and Dr Daragh Murray at the University of Essex’s Human Rights Centre showed the technology “could lead to miscarriages of justice and wrongful arrests” and poses “massive issues for democracy”.

“All experiments like this should now be suspended until we have a proper chance to debate this and establish some laws and regulations,” he said. “Remember what these rights are: freedom of association and freedom to protest; rights which we have assumed for centuries which shouldn’t be intruded upon without a good reason.”

The Neoface system used by the Met and South Wales police is supplied by Japanese company NEC, which markets the same technology to retailers and casinos to spot regular customers, and to stadium and concert operators to scan crowds for “potential troublemakers”.

Scotland Yard insisted its deployments were legal and successful in identifying wanted offenders, and that the public would expect it to trial emerging technology.

Deputy assistant commissioner Duncan Ball said the force was “extremely disappointed with the negative and unbalanced tone of this report”.

The study will increase pressure on ministers to legislate to define how facial recognition can be used in policing and in the private sector. Its use by South Wales police is currently under judicial review, while the information commissioner, Elizabeth Denham, has criticised “a lack of transparency about its use” and Tony Porter, the surveillance camera commissioner, last year intervened to stop Greater Manchester police using facial recognition at the Trafford shopping centre.

The Home Office said it believed there was an adequate legal framework for its use and it supported police trials, but added it was reviewing ways to simplify and extend governance and oversight of biometrics.

Scotland Yard granted the University of Essex academics access to six deployments of the system between June 2018 and February 2019. It uses cameras fixed to posts or on a van, and software cross-checks face-scans of passersby against a “watchlist”.

The research found that police were too hasty to stop people before matches could be properly checked, which led to mistakes; watchlists were sometimes out of date and included people wanted by the courts as well as those considered “at risk or vulnerable”; and officers viewed the technology as a way of detecting and deterring crime, which the report argued could have been achieved without biometric technology. They said it was “highly possible” Scotland Yard’s use would be ruled unlawful if challenged in court.

“While we focused on the police, by far the greater use is in the private sphere,” said Professor Fussey. “There’s a lack of any national leadership on this issue of facial recognition. Human rights standards should be embedded from the start in the use of technology.”

Of the 42 people flagged during the Met’s trials, 22 were stopped, but only eight of those were being sought – some of them wanted for serious violent crime. Some were stopped for a crime the courts had already dealt with, but were then arrested for a more minor offence that would not normally be considered serious enough to be tackled using facial recognition.
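The headline finding that matches were correct in only about a fifth of cases follows directly from these trial figures, depending on whether accuracy is measured against all flags or only against stops:

```python
# Figures reported from the Met's six trials.
flagged = 42   # people the system flagged as watchlist matches
stopped = 22   # people officers actually stopped
sought = 8     # stops of people genuinely being sought

print(f"correct among all flags: {sought / flagged:.0%}")  # roughly a fifth
print(f"correct among stops:     {sought / stopped:.0%}")
```

Eight correct matches out of 42 flags is about 19%, the "fifth of cases" cited at the top of the article; measured only against the 22 stops, the rate rises to about 36%.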

The Essex researchers also raised concern about potential bias, citing US research in 2018 into facial recognition software provided by IBM, Microsoft and Face++, a China-based company, which found the programmes were most likely to wrongly identify dark-skinned women and most likely to correctly identify light-skinned men.

In future, facial recognition software could screen images from body-worn cameras and existing CCTV cameras, they said. This could allow a record of an individual’s movements, which could be analysed automatically to identify “unusual patterns of movement, participation at specific events, or meetings with particular people”.

Liberty, the civil rights campaign group, has previously called facial recognition “a dangerously intrusive and discriminatory technology that destroys our privacy rights and forces people to change their behaviour”.
