San Francisco on Tuesday became the first city in the United States to ban the use of facial recognition technology by police and other government agencies. Eight of the nine city council members voted in favor of the new regulation, which still faces a procedural vote next week, though that vote is unlikely to change the outcome.
The ban does not apply to airports or to sites regulated by federal authorities. "The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits," the decision states, adding that the technology "will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring."
“The greatest danger is that this technology is used for surveillance systems”
The ban is part of a broader regulation governing the use of surveillance systems and the auditing of surveillance policies, which imposes stricter conditions and requires municipal agencies to obtain prior approval from the board for such systems. A similar ban is under consideration in Oakland, across the bay from San Francisco. Facial recognition surveillance has raised concerns that innocent people could be mistakenly identified as offenders and that these systems could intrude on everyday privacy.
But others believe the technology can help police fight crime and make streets safer. It has been credited with helping police arrest criminals, but it has also been responsible for misidentifications. "Facial recognition can be used for generalized surveillance in combination with public video cameras, and can be used in a passive way that doesn't require the knowledge, consent, or participation of the subject," the American Civil Liberties Union (ACLU) notes on its website. "The biggest danger is that this technology will be used for general, suspicionless surveillance systems," the organization continues.
According to The New York Times, Chinese authorities are using this technology, integrated with the country's vast networks of surveillance cameras and programmed to search exclusively for the physical characteristics of members of the Uyghur Muslim minority, to track that group. It would be the first known example of a government using artificial intelligence for racial profiling.