Amazon will stop police from using its facial recognition technology for a year while it waits for federal regulation of the surveillance tool, the company said in a blog post Wednesday.

The company said it will suspend law enforcement use of its Rekognition tool to give Congress time to develop regulations.

The announcement follows years of pressure from police reform advocates and privacy activists, including the American Civil Liberties Union, to stop marketing its facial recognition tool to police over concerns that it is racially biased and can be used to build an oppressive system to automate the identification and tracking of anyone.

“Face recognition technology gives governments the unprecedented power to spy on us wherever we go. It fuels police abuse,” said Nicole Ozer, technology and civil liberties director of the ACLU of Northern California, responding to Amazon’s announcement. “This surveillance technology must be stopped.”

Amazon said it has advocated for governments to put in place “stronger regulations to govern the ethical use of facial recognition technology,” noting that Congress “appears to be ready to take on this challenge.”

“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” the company said.

Liz O’Sullivan, a privacy activist and founder of Arthur AI, described the announcement as a “victory for activists and academics” who have been pushing for stricter regulation of facial recognition for years. She also called it an “admission that the entire system of surveillance is flawed, biased and has racial implications.”

“We need to make sure this moratorium turns into a permanent ban,” she said, calling on activists and members of the public to apply pressure to their local policymakers to ensure any regulation “serves the people versus corporate interests.”

The one-year moratorium on police use of Amazon Rekognition does not include organizations that work closely with law enforcement to identify victims of child sexual exploitation and human trafficking, such as nonprofit Thorn, the National Center for Missing and Exploited Children and Marinus Analytics.

In July 2018, the ACLU conducted a test of Amazon Rekognition and found it incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime.

At the time, Amazon said the ACLU had set the “confidence rate” in the system lower than the recommended level, leading to a higher number of false positives.

“Machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it’s applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza,” said the company in a July 2018 statement.

The announcement follows a similar pledge from IBM on Monday, when the company’s CEO Arvind Krishna wrote a letter to Congress stating it would no longer develop or research facial recognition technology.

Krishna said the company “firmly opposes” the use of facial recognition technology for “mass surveillance, racial profiling, violations of basic human rights and freedoms.”

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” he said.
