Why we should be very scared by the intrusive menace of facial recognition
John Naughton


On 18 July, the House of Commons select committee on science and technology published an assessment of the work of the biometrics commissioner and the forensic science regulator. My guess is that most citizens have never heard of these two public servants, which is a pity because what they do is important for the maintenance of justice and the protection of liberty and human rights.

The current biometrics commissioner is Prof Paul Wiles. His role is to keep under review the retention and use by the police of biometric material. This used to be just about DNA samples and custody images, but digital technology promises to increase his workload significantly. “It is now seven years,” observes the Commons committee, “since the high court ruled in 2012 that the indefinite retention of innocent people’s custody images was unlawful, and yet the practice is continuing. A system was meant to have been put in place whereby any custody images were kept for six years and then reviewed. Custody images of unconvicted individuals should at that point be weeded and deleted.”

But they haven’t been: photographs of innocent people remain on the police national database. Why does this matter? Because these images can form the basis of “watchlists” for automatic facial-recognition technology when it is used by police forces in public spaces. Ten years ago, this might not have been much of a concern. But the explosive growth of real-time facial-recognition technology – and the current fascination of UK police authorities with it – means that it has already become a scandal and could soon become a crisis. Several forces have been conducting live trials of the technology in public places. Commenting on these, the Commons report says that “there is growing evidence from respected, independent bodies that the ‘regulatory lacuna’ [ie legislative vacuum] surrounding the use of automatic facial recognition has called the legal basis of the trials into question. The government, however, seems not to realise, or to concede, that there is a problem.”

Facial recognition has become a runaway technology, partly because images provided a perfect test bed for machine-learning software and partly because the internet proved an inexhaustible source of images for training purposes. As a result, the technology has been effectively commoditised. It’s everywhere and it’s relatively cheap. Social media companies obviously love it (spot your friends in those stag- and hen-night party pics), but so too do more mundane organisations, which use it for access control, spotting potential shoplifters, recognising repeat customers and so on. Spooks love it. Authoritarian regimes adore it. And, of course, police forces are fascinated by it, not least because it provides them with “objective” grounds for stop and search.

There are, however, some problems with this corporate and authoritarian tool. One is that the technology itself is flaky, prone to errors, false positives and bias. More importantly, it is a pathologically intrusive, privacy-eroding technology that can be used for general surveillance in combination with public video cameras. And in those applications it doesn’t require the knowledge, consent or participation of the subject. It can – and will – be used to create general, suspicionless surveillance systems: Jeremy Bentham’s panopticon on steroids.

Imagine a public space – Trafalgar Square or Oxford Street, say – thronged with people and monitored by CCTV cameras linked to a facial-recognition machine. Every time a camera focuses on a face, superimposed on the person’s visage is his or her name, plus other information about them – nationality, age, visa status, educational qualifications, criminal convictions (if any), employment history, political party.

This is not science fiction. It’s possible now and already in operation in some parts of the world, notably China. And it’s what has led some people to liken the technology to plutonium and others to call for an outright ban on it.

We now have two options for controlling this runaway technology. One is to treat it like plutonium and ban its use for civilian purposes. The other is to treat it like a radioactive isotope – which has important uses in medicine – and regulate it accordingly. Oddly enough, this is what Microsoft suggests, arguing for “a government initiative to regulate the proper use of facial-recognition technology, informed first by a bipartisan and expert commission”.

When one of the tech giants starts to argue for government regulation, you know we really have a problem.

What I’m reading

FaceApp to the facts
There’s a nice story in Wired about the FaceApp hoo-ha and its wider contradictions. Why are we more worried about a single Russian-authored app than we are about Facebook?

Data with destiny
Stanford University sociologist Michael Rosenfeld and colleagues have produced a fascinating study on how dating apps have “disintermediated” (ie replaced) friends and family as the links to future life partners.

Visionary stuff
“An image sensor isn’t a ‘camera’ that takes ‘photos’ – it’s a way to let computers see.” Read the thoughtful blog post by Benedict Evans on why computer vision is changing what machines can do.
