IBM has announced it will no longer develop facial recognition technology amid growing concern over its implications.
In a letter to Congress, IBM CEO Arvind Krishna wrote:
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.
We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Facial recognition technologies have been found to exhibit racial biases. A study in 2010 by researchers from NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at recognising Caucasians.
Research from the American Civil Liberties Union found that when Amazon’s facial recognition tool was used to compare pictures of all members of the House and Senate against 25,000 arrest photos, the false matches disproportionately affected members of the Congressional Black Caucus.
Last month, the ACLU filed a lawsuit against controversial facial recognition firm Clearview AI – calling it a ‘nightmare scenario’ for privacy. The company, which has had extensive ties to the far-right, has repeatedly come under fire due to its practice of scraping billions of photos from across the internet.
In the UK, the Equality and Human Rights Commission (EHRC) called for the public use of facial recognition to be halted after trials proved nothing short of a complete failure. An initial trial by the Met Police, at the 2016 Notting Hill Carnival, failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.
An independent report into the Met Police’s facial recognition trials, conducted by Professor Peter Fussey and Dr Daragh Murray last year, concluded that the technology was verifiably accurate in just 19 percent of cases.
IBM has made clear it wants nothing to do with developing any technology which could be used for mass surveillance, especially while that technology continues to have serious accuracy problems that may lead to automated racial profiling. Krishna has called for wider policy reforms including police reform, the responsible use of technology, and the broadening of skills and educational opportunities.
11/06 update – Amazon has issued a statement saying it will no longer provide facial recognition technology to law enforcement for at least a year and calling on Congress to implement stronger regulations:
“We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.
We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”