The city of Minneapolis is poised to ban facial recognition software for police use, part of a growing movement to prohibit software known to have serious flaws identifying racial minorities and women.
The actions of Minneapolis police sparked a racial reckoning in the United States last summer, when an officer knelt on George Floyd’s neck for more than eight minutes, leading to his death.
“If we have cameras all over the city tracking in real time, and keeping a record in real time of where everybody goes, that feels dystopian to me and that feels like it’s open for abuse,” city council member Steve Fletcher, who supports the ordinance, told the Minneapolis Star Tribune.
A committee voted 12-0 in favor of the ban, advancing it to the city council. The matter will be considered by the council Friday.
A spokesperson for the Security Industry Association, a trade group representing companies that make such software, said the ordinance “strips away” a useful tool for law enforcement.
While the ban would stop Minneapolis police from using the software, it would not stop other local law enforcement who operate in the city, such as sheriffs, from using it. Nor would the ban apply to non-police uses.
Backlash against facial recognition software has grown steadily in the last year. A 2018 study by the American Civil Liberties Union fed images of members of Congress to Amazon’s Rekognition software, which falsely matched 28 of them to mugshots of people who had been arrested for a crime; the errors disproportionately affected people of color.
Both Amazon and Microsoft placed temporary moratoriums on police use of their facial recognition products in summer 2020, notably not long after Floyd’s death.
In January, the human rights group Amnesty International said it was pursuing a total ban on the software and called on New Yorkers to stand up against the city’s use of the technology.
“Facial recognition risks being weaponised by law enforcement against marginalised communities around the world,” said Matt Mahmoudi, artificial intelligence and human rights researcher at Amnesty. “From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights.”