Can facial recognition technology really reveal political orientation?
John Naughton


Three things about the research paper stopped me in my tracks. The first was the title: Facial recognition technology can expose political orientation from naturalistic facial images. The second was that the author was Michal Kosinski, someone who used to be at Cambridge University, is now at Stanford and whose work I’ve followed for years. And the third was that it was published in Scientific Reports, one of the journals published by the Nature group and definitely not an outlet for nonsense.

The paper reports a research project suggesting that facial recognition technology can accurately infer whether individuals hold liberal or conservative views. A common (and open-source) facial recognition algorithm was applied to images of more than a million individuals, drawn from their Facebook or dating-website profiles, to predict their political orientation by comparing each face’s similarity to the faces of known liberals and conservatives. Political orientation was correctly classified in 72% of liberal–conservative face pairs, significantly better than chance (50%), human accuracy (55%) or even the accuracy afforded by a 100-item personality questionnaire (66%).
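For a concrete sense of what “correctly classified in 72% of liberal–conservative face pairs” means, here is a minimal sketch of that style of pairwise evaluation. It is an illustration under assumptions, not the authors’ published code: the synthetic embeddings, the labels and the logistic-regression scorer below are all stand-ins for the facial descriptors and classifier the paper actually used.

```python
# A minimal, hypothetical sketch of pairwise (liberal vs conservative)
# classification accuracy. The data here is synthetic; real inputs
# would be face descriptors from an open-source recognition model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# 1,000 fake 128-dimensional "face embeddings", labelled
# 0 = liberal, 1 = conservative.
X = rng.normal(size=(1000, 128))
y = rng.integers(0, 2, size=1000)
X[y == 1] += 0.05  # inject a weak signal so the demo beats chance

# Out-of-fold probability that each face is "conservative".
clf = LogisticRegression(max_iter=1000)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

# Pairwise accuracy: across all liberal-conservative pairs, how often
# does the conservative face get the higher score? (Equivalent to AUC.)
lib, con = scores[y == 0], scores[y == 1]
pairwise_accuracy = (con[None, :] > lib[:, None]).mean()
print(f"Pairwise accuracy: {pairwise_accuracy:.2f}")  # 0.50 = chance
```

The pairwise framing matters: 72% does not mean the model labels 72% of individual faces correctly; it means that, shown one liberal face and one conservative face, it ranks them correctly 72% of the time.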

For those of us who regard facial recognition technology as only marginally less toxic than plutonium, this looks like a significant finding. But I also expected it to be controversial, if only because almost everything Professor Kosinski does causes storms. In 2013, for example, a paper by him, David Stillwell and Thore Graepel revealed the astonishing granularity of personal information that could be inferred simply from a study of people’s Facebook “likes”. That information included sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender. All just from Facebook “likes”.

Then Kosinski moved on from “likes” to studying the inferential power of machine-learning algorithms. In 2018, he and Yilun Wang reported research showing that deep neural networks are more accurate than humans at detecting sexual orientation from facial images. And now here we are, with an algorithm that appears to be good at inferring political views from people’s faces.

Public and professional responses to Kosinski’s work span a spectrum, from incredulity and outrage at one extreme to methodological criticism, concerns about discrimination and scepticism at the other. Already, reactions to the Scientific Reports paper are running true to form. The conclusions are “outlandish”, fumed the Business Telegraph. “Taken as a whole,” it said, Kosinski’s work “embraces the pseudoscientific concept of physiognomy, or the idea that a person’s character or personality can be assessed from their appearance.” It also seemed to remind the Business Telegraph of “phrenology, a related field, involving the measurement of bumps on the skull to predict mental traits”.

On the methodological front, Kosinski’s research has attracted formidable critics such as Alexander Todorov, a professor at the University of Chicago’s Booth School of Business and an expert on how people perceive, evaluate and make sense of the social world. In a celebrated critique, for example, he argued that his own research had “shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation and lifestyle – that is, differences in culture, not in facial structure”. It’s nothing to do with facial structure, in other words, but with the way people present themselves on social media.

Many of the most impassioned objections to Kosinski’s work come from members of social groups who (rightly) fear that facial recognition technology merely reinforces and legitimises the gender, sexual and ethnic biases that are endemic in our societies. There is also the fear that, as we see from China’s ruthless exploitation of the technology, it can lead to discrimination, exclusion and perhaps even genocide. The argument is that, by publishing research that appears to lend credence to the tech industry’s claims for facial recognition, Kosinski is effectively making it respectable in corporate and authoritarian circles.

Kosinski is acutely aware of these critiques but pushes back. “We were really disturbed by these results,” he and his co-author wrote about their 2018 paper, “and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s wellbeing, but also for one’s safety. We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but, rather, showed that basic and widely used methods pose serious privacy threats.”

What it comes down to, in a way, is a new manifestation of an ancient dilemma: do we shoot the messenger or attend to his disturbing message? Given the toxicity of facial recognition technology, I don’t think we have a choice. The message is what matters.

What I’m reading

Nasty is not Nazi
Why Trump Isn’t a Fascist is a thoughtful New Statesman essay by Richard J Evans that challenges lazy stereotypes about the ex-president.

Broadening the subject
Mary Catherine Bateson, cultural anthropologist and one of the 20th century’s great polymaths, died peacefully on 2 January. John Brockman has assembled a lovely tribute to her on the edge.org site.

An A to Z of AI issues
I was fascinated by Jeff Dean’s account on the Google AI Blog of the research his team is currently doing, on everything from the spread of Covid to the climate crisis.
