News  |  March 26, 2019

On Recent Research Auditing Commercial Facial Analysis Technology

News article by Concerned Researchers.
Published on Medium.

Excerpt:

Over the past few months, there has been increased public concern over the accuracy and use of new face recognition systems. A recent study by Inioluwa Deborah Raji and Joy Buolamwini, published at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, found that the version of Amazon’s Rekognition tool available in August 2018 had much higher error rates when classifying the gender of darker-skinned women than that of lighter-skinned men (31% vs. 0%). In response, two Amazon officials, Matthew Wood and Michael Punke, wrote a series of blog posts attempting to refute the results of the study. In this piece, we highlight several important facts that reinforce the importance of the study, and we discuss the manner in which Wood and Punke’s blog posts misrepresented the technical details of the work and the state of the art in facial analysis and face recognition.

  1. Depending on the approach, there is a direct or indirect relationship between modern facial analysis and face recognition. So, contrary to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications.
  2. Raji and Buolamwini’s study was conducted within the context of Rekognition’s use. This means using an API that was publicly available at the time of the study, considering the societal context in which it was being used (law enforcement), and accounting for the documentation, standards, and regulation in place at the time of use.
  3. The data used in the study can be obtained for non-commercial use through a request at https://www.ajlunited.org/gender-shades, and the study has been replicated by many companies based on the details provided in the paper, available at http://gendershades.org/.
  4. There are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties.

We call on Amazon to stop selling Rekognition to law enforcement. [ . . . ]


Signed by Concerned Researchers

  1. Ali Alkhatib, Stanford University
  2. Noura Al Moubayed, Durham University
  3. Miguel Alonso Jr, Florida International University
  4. Anima Anandkumar, Caltech (formerly Principal Scientist at AWS)
  5. Akilesh Badrinaaraayanan, MILA/University of Montreal
  6. Esube Bekele, National Research Council fellow
  7. Yoshua Bengio, MILA/University of Montreal
  8. Alex Berg, UNC Chapel Hill
  9. Miles Brundage, OpenAI; Oxford; Axon AI Ethics Board
  10. Dan Calacci, Massachusetts Institute of Technology
  11. Pablo Samuel Castro, Google
  12. Stayce Cavanaugh, Google
  13. Abir Das, IIT Kharagpur
  14. Hal Daumé III, Microsoft Research and University of Maryland
  15. Maria De-Arteaga, Carnegie Mellon University
  16. Mostafa Dehghani, University of Amsterdam
  17. Emily Denton, Google
  18. Lucio Dery, Facebook AI Research
  19. Priya Donti, Carnegie Mellon University
  20. Hamid Eghbal-zadeh, Johannes Kepler University Linz
  21. El Mahdi El Mhamdi, Ecole Polytechnique Fédérale de Lausanne
  22. Paul Feigelfeld, IFK Vienna, Strelka Institute
  23. Jessica Finocchiaro, University of Colorado Boulder
  24. Andrea Frome, Google
  25. Field Garthwaite, IRIS.TV
  26. Timnit Gebru, Google
  27. Sebastian Gehrmann, Harvard University
  28. Oguzhan Gencoglu, Top Data Science
  29. Marzyeh Ghassemi, University of Toronto, Vector Institute
  30. Georgia Gkioxari, Facebook AI Research
  31. Alvin Grissom II, Ursinus College
  32. Sergio Guadarrama, Google
  33. Alex Hanna, Google
  34. Bernease Herman, University of Washington
  35. William Isaac, DeepMind
  36. Phillip Isola, Massachusetts Institute of Technology
  37. Alexia Jolicoeur-Martineau, MILA/University of Montreal
  38. Yannis Kalantidis, Facebook AI
  39. Khimya Khetarpal, MILA/McGill University
  40. Michael Kim, Stanford University
  41. Morgan Klaus Scheuerman, University of Colorado Boulder
  42. Hugo Larochelle, Google/MILA
  43. Erik Learned-Miller, UMass Amherst
  44. Xing Han Lu, McGill University
  45. Kristian Lum, Human Rights Data Analysis Group
  46. Michael Madaio, Carnegie Mellon University
  47. Tegan Maharaj, Mila/École Polytechnique
  48. João Martins, Carnegie Mellon University
  49. Vincent Michalski, MILA/University of Montreal
  50. Margaret Mitchell, Google
  51. Melanie Mitchell, Portland State University and Santa Fe Institute
  52. Ioannis Mitliagkas, MILA/University of Montreal
  53. Bhaskar Mitra, Microsoft and University College London
  54. Jamie Morgenstern, Georgia Institute of Technology
  55. Bikalpa Neupane, Pennsylvania State University, UP
  56. Ifeoma Nwogu, Rochester Institute of Technology
  57. Vicente Ordonez-Roman, University of Virginia
  58. Pedro O. Pinheiro
  59. Vinodkumar Prabhakaran, Google
  60. Parisa Rashidi, University of Florida
  61. Anna Rohrbach, UC Berkeley
  62. Daniel Roy, University of Toronto
  63. Negar Rostamzadeh
  64. Kate Saenko, Boston University
  65. Niloufar Salehi, UC Berkeley
  66. Anirban Santara, IIT Kharagpur (Google PhD Fellow)
  67. Brigit Schroeder, Intel AI Lab
  68. Laura Sevilla-Lara, University of Edinburgh
  69. Shagun Sodhani, MILA/University of Montreal
  70. Biplav Srivastava
  71. Luke Stark, Microsoft Research Montreal
  72. Rachel Thomas, fast.ai; University of San Francisco
  73. Briana Vecchione, Cornell University
  74. Toby Walsh, UNSW Sydney
  75. Serena Yeung, Harvard University
  76. Yassine Yousfi, Binghamton University
  77. Richard Zemel, Vector & University of Toronto

List retrieved February 29, 2020