A group of leading AI researchers from industry and academia have signed an open letter calling on Amazon to “stop selling Rekognition to law enforcement”.
The letter, published by around 45 ‘concerned researchers’ late last month, follows a study by Joy Buolamwini from MIT and Deborah Raji from the University of Toronto, which found Amazon Rekognition “exhibits gender and racial bias for gender classification”.
According to their work, Rekognition has a much higher error rate when classifying the gender of darker-skinned women than of lighter-skinned men (31 per cent compared with zero per cent).
Amazon Web Services hit back at the research in two blog posts, in which the company called the work “misleading” and said it contained “false conclusions” and “several misperceptions and inaccuracies”. The company also said that “a significant set of improvements” to Rekognition had been made in November last year, and that the study had used an “outdated version” of the product.
In their open letter, the concerned researchers hit back at the criticisms laid out in the blog posts – written by AWS head of global public policy Michael Punke, and general manager of AI Dr Matt Wood – with the retort that the posts “misrepresented the technical details for the work”.
The letter is signed by experts from academia and companies including Google, DeepMind, Facebook, Intel and Microsoft. Among the signatories is California Institute of Technology professor Anima Anandkumar, formerly principal scientist on deep learning at AWS.
“Overall, we find Dr. Wood and Mr. Punke’s response to the peer-reviewed research findings disappointing. We hope that the company will instead thoroughly examine all of its products and question whether they should currently be used by police,” the letter states.
The study built on Buolamwini’s Gender Shades project, which last year demonstrated bias in AI systems from Microsoft, IBM, and Face++. Since then, IBM and Microsoft have “greatly improved” their facial recognition offerings using similar datasets to Gender Shades, leading to what Buolamwini called “substantial industry change”.
Earlier this year, IBM released a huge and diverse dataset of face images in a bid, the company said, to “advance the study of fairness and accuracy in facial recognition technology”.
Microsoft also responded, committing last year to make its services available for “independent testing to conduct reasonable tests of our facial recognition technology for accuracy and unfair bias”.
In his January blog post, AWS’ Wood concluded: “The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media.”
Facing the law
Central to the researchers’ concerns is that Rekognition is being used by law enforcement agencies. In marketing materials, AWS has said Rekognition is used by the Washington County Sheriff’s Office and the City of Orlando, Florida.
Last year the company reportedly met with US Immigration and Customs Enforcement officials to promote the product.
“Amazon shouldn’t be arming an out-of-control agency with additional means for targeting immigrants. And if the government is planning to use this powerful surveillance tool, the public has a right to know how,” the American Civil Liberties Union said at the time.
AWS says that it recommends in Rekognition’s documentation that “facial recognition results should only be used in law enforcement when the results have confidence levels of at least 99 per cent, and even then, only as one artifact of many in a human-driven decision”. It claims that since Rekognition’s launch it has received “no reported law enforcement misuses”.
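In practice, that guidance amounts to a simple filtering rule: discard any candidate match scored below the 99 per cent floor, and treat whatever survives as one input among many for a human reviewer. The sketch below illustrates the idea with hypothetical (person, confidence) pairs; it is not the actual Rekognition API, and the names and data are invented for illustration.

```python
# Illustrative sketch only -- not the real Rekognition API.
# Applies the 99 per cent confidence floor AWS's documentation recommends
# for law-enforcement use of facial recognition results.
RECOMMENDED_THRESHOLD = 99.0  # per cent, per AWS's stated guidance


def filter_matches(matches, threshold=RECOMMENDED_THRESHOLD):
    """Keep only candidate matches at or above the confidence floor.

    `matches` is a hypothetical list of (person_id, confidence) pairs,
    standing in for the similarity scores a face-matching service returns.
    """
    return [(person, conf) for person, conf in matches if conf >= threshold]


# Example: only two of three hypothetical candidates clear the floor.
candidates = [("subject-a", 99.4), ("subject-b", 87.2), ("subject-c", 99.1)]
shortlist = filter_matches(candidates)
# Per the guidance, even this shortlist should be only one artifact
# in a human-driven decision, never an automatic identification.
```

The point of such a high floor is that borderline scores, where the study found the bias concentrated, never reach a human reviewer at all.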