Three Democratic House lawmakers on Thursday introduced legislation that would ban the use of facial recognition technology in federally funded public housing.
Though narrowly targeted, the bill comes amid a wave of broader public and legislative pushback against automated surveillance and data gathering. San Francisco; Oakland, Calif.; and Somerville, Mass., have all banned city agencies, including law enforcement, from using facial recognition technology, while lawmakers in California and Michigan have proposed similar moratoriums.
The technology, supported by police, uses artificial intelligence to compare photos or videos to those in existing databases. It’s starting to be used in housing projects to provide keyless entry to residents and at airports to screen passengers.
But critics say the technology is intrusive and prone to mistakes. Rep. Yvette D. Clarke (NY), one of the House bill’s sponsors, emphasized the racial dimension, saying that facial recognition technology may be unreliable when identifying people of color. The bill was motivated in part by the recent Atlantic Plaza Towers case, in which tenants of a rent-stabilized development in Clarke’s district filed suit to block the installation of facial recognition technology there.
Cosponsors Rep. Ayanna Pressley (MA) and Rep. Rashida Tlaib (MI) make up half of the so-called “Squad,” an informal coalition of progressive female lawmakers who have become targets of ire from both conservatives and moderate Democrats. That would seem to make it unlikely that the bill will become law, and its emphasis on racial equity could make it particularly anathema to the Trump administration.
But there has been significant bipartisan skepticism about facial recognition technology. During a House Oversight and Reform Committee hearing in May, for instance, Republican Rep. Jim Jordan (OH) cited George Orwell’s 1984 while suggesting facial recognition tech could be a threat to Americans’ rights to free speech and privacy.
This rising wave of opposition is causing concern among the nation’s law-enforcement establishment. At a July 24 panel hosted by the Information Technology and Innovation Foundation, policing experts argued that the technology is an important public safety tool partly because it eliminates the labor-intensive process of screening suspect photos by hand, freeing officers to focus on other investigative work.
Advocates also argued that, if used carefully, facial recognition would not infringe on citizens’ rights. “The outcry seems to be for day-to-day, routine collection where the person is not expecting it,” said Eddie Reyes, director of the National Police Foundation. Reyes emphasized that facial recognition is primarily useful for generating leads for further hands-on investigation, and that it is not “permissible or appropriate to make an arrest just on the basis of facial recognition alerting.”
But critics say there are many reasons to push back on facial recognition technology, at least for the time being—and especially in settings such as public housing. In the absence of clear laws on the collection, storage, use, and sharing of facial recognition data, it “could be used to call into question someone’s sexuality, disability status, or political affiliations,” said Dr. Chris Gilliard, who teaches at Macomb Community College near Detroit and who filed an amicus brief in the Atlantic Plaza Towers suit. “These are not fanciful speculations. Police departments are well-known for surveilling communities based on those things.”
Broader distrust of law enforcement seems to be a major factor in opposition to facial recognition technology. “The policing system itself has some biases that need to be worked out, and it will be exacerbated if you add these technologies to that,” said Tawana Petty, director of data justice programming for the Detroit Community Technology Project.
Detroit, much of which lies in Rep. Tlaib’s district, has had a particularly fraught relationship with surveillance technology. The city’s Project Greenlight network includes hundreds of cameras that send real-time video to the police department, and the program was recently found to have been secretly using facial recognition technology for two years.
Possible racial bias built into facial recognition algorithms is another serious concern. The National Institute of Standards and Technology has measured error rates for facial recognition technology at 1% or lower, far below those for human eyewitnesses. But the NIST tests use portrait-style photos, and accuracy can vary significantly with factors such as lighting and distance. Several trials have also found higher overall error rates for people of color and women, raising the risk of false positives. One likely culprit is the limited or unrepresentative data used to train the systems, a problem that affects many forms of automated data processing.
Even some companies that would benefit from selling the technology share those concerns. Late last month, Axon, a major manufacturer of police body cameras, said it would bar the use of facial recognition technology with its devices after the company’s ethics board found the technology is not yet accurate enough.
Critics are also skeptical of advocates’ key claim: that facial recognition and other surveillance technologies increase public safety. Such claims have been strongly advanced by some commercial surveillance vendors, including the video doorbell-maker Ring, now owned by Amazon. Competing studies have cast serious doubt on Ring’s past claims that its devices reduce crime in communities where they are widely installed.
Other approaches are far more proven, said Petty. “We know the things that create safety,” she said. “If you have a grocery store in a neighborhood, and citizens have jobs, and the school is not closed, and the neighborhood doesn’t have lead-poisoned water, those are the things that create safety.”
Catch up with Data Sheet, Fortune’s daily digest on the business of tech.