Because very few laws or public policies guide the use of facial recognition software, mass surveillance using this technology endangers the right to privacy in general, and racial justice in particular. Pittsburgh should ban the use of facial recognition by law enforcement, at least until federal law eliminates the privacy risks and the accuracy of these systems across racial lines can be assured.
The most famous facial recognition company, Clearview AI, has a database of several billion images of people scraped from public records, social media accounts and so on. Over 3,000 American law enforcement agencies at all levels use Clearview’s data and software.
We can’t avoid being on camera and having our images loaded into these databases. The proliferation of private cameras in commercial districts and residential neighborhoods supplies them with even more images. And according to a University of Nevada study, over 1,000 police departments deploy drones to monitor their communities. If you leave your home, you will be on camera, and you can be identified.
This seems clearly to violate our right to privacy, as Americans understand it. People should be able to go to the grocery store, park, or library, or to meet with friends, without it being recorded.
But there are few rules about how these emerging technologies can and can’t be used. Case law interpreting the limitations placed on facial recognition by the Fourth Amendment is minimal. No federal privacy law defines what private companies can collect, and what they can do with it. That’s why Clearview considers the applications of its technology across the public and private sectors to be “limitless.”
If you think it will only be deployed against “bad guys” who don’t look like you, you are mistaken. For example, nothing now stops private firms and public agencies from identifying participants in political events. Imagine the Justice Department compiling a list of people seen at Doug Mastriano rallies as “extremist” security threats, or a Governor Mastriano directing the state police to do the same with people protesting his administration. The possibilities for abuse are, as Clearview so giddily indicates, “limitless.”
But even if this technology were deployed only for good and decent purposes, it still has a racial bias problem. According to a Brookings Institution report, a 2018 test of commercial algorithms showed a tiny error rate when identifying white men, but an error rate of over 20% when applied to women with darker skin. While the software has improved, Clearview and others are still plagued by lower accuracy when identifying people of color.
In other words, facial recognition technology makes it more likely people of color will be identified, arrested and convicted for crimes they did not commit. Being falsely identified, even if later cleared, will be a damaging and traumatic experience.
Again, no federal or state law deals with this. That’s why Pittsburgh should apply limits of its own. Until appropriate safeguards are in place, and racial biases corrected — and maybe beyond — facial recognition should not be allowed in Pittsburgh.
First Published: September 28, 2022, 4:00 a.m.