Facial recognition challenged by global campaign calling for a ban on its use by law enforcement

Facial recognition will be subject to greater scrutiny with the launch of a global campaign seeking to ban the controversial technology.

By Rhiannon Williams

Human rights group Amnesty International has started the Ban the Scan campaign in New York, ahead of a wider rollout to other parts of the world later in 2021.

Campaigners have partnered with tech and civil rights groups including the Immigrant Defence Project, the New York Civil Liberties Union and the New York City Public Advocate’s office to push for a ban on the use of facial recognition by the city’s law enforcement, following in the footsteps of other US cities including Boston, Portland and San Francisco.

The group claimed that facial recognition reinforces systemic racism because many systems fail to correctly identify people from minority ethnic groups, particularly black people, who it said are already subject to discrimination and violations of their human rights by law enforcement officials.

Unlawful use of the technology in the UK

The UK’s Court of Appeal upheld an appeal brought by former Liberal Democrat councillor Ed Bridges against South Wales Police’s use of the software in August last year, ruling its deployment unlawful and finding that the force had not taken reasonable steps to determine whether the software had a gender or racial bias.

“Facial recognition risks being weaponised by law enforcement against marginalised communities around the world. From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights,” said Matt Mahmoudi, AI and Human Rights Researcher at Amnesty International.

London’s Metropolitan Police has been using signposted live facial recognition cameras on the streets of the capital since January 2020 in a bid to find wanted criminals more efficiently, despite repeated warnings from civil rights groups that the technology is not fit for purpose.

Microsoft and Amazon confirmed in June last year that they would no longer sell their facial recognition technologies to police in the US, in the wake of global Black Lives Matter protests highlighting the mistreatment of people from minority ethnic groups at the hands of police.

Both companies’ software has been criticised for its inability to recognise people with darker skin, particularly women, contributing to fears that the technology could fuel racial profiling and surveillance.

Similar concerns have been raised about the Met’s software, which compares the faces of passers-by against a database of wanted people, both over its potential to reinforce racial bias and over its overall efficacy.

Originally published at i News