Reports  |  November 15, 2019

2019 AI Now Report

Written and published by the AI Now Institute. 100 pages.

Excerpt:

Recommendations

1. Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities. Until then, AI companies should stop deploying it. Given the contested scientific foundations of affect recognition technology—a subclass of facial recognition that claims to detect things such as personality, emotions, mental health, and other interior states—it should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school. Building on last year’s recommendation for stringent regulation, governments should specifically prohibit use of affect recognition in high-stakes decision-making processes.

2. Government and business should halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place. In 2019, there has been a rapid expansion of facial recognition in many domains. Yet there is mounting evidence that this technology causes serious harm, most often to people of color and the poor. There should be a moratorium on all uses of facial recognition in sensitive social and political domains—including surveillance, policing, education, and employment—where facial recognition poses risks and consequences that cannot be remedied retroactively. Lawmakers must supplement a moratorium with (1) transparency requirements that allow researchers, policymakers, and communities to assess and understand the best possible approach to restricting and regulating facial recognition; and (2) protections that provide the communities on whom such technologies are used with the power to make their own evaluations and rejections of its deployment.

3. The AI industry needs to make significant structural changes to address systemic racism, misogyny, and lack of diversity. The AI industry is strikingly homogeneous, due in large part to its treatment of women, people of color, gender minorities, and other underrepresented groups. To begin addressing this problem, more information should be shared publicly about compensation levels, response rates to harassment and discrimination, and hiring practices. Addressing it also requires ending pay and opportunity inequality and providing real incentives for executives to create, promote, and protect inclusive workplaces. Finally, any measures taken should address the two-tiered workforce, in which many of the people of color at tech companies work as undercompensated and vulnerable temporary workers, vendors, or contractors.

4. AI bias research should move beyond technical fixes to address the broader politics and consequences of AI’s use. Research on AI bias and fairness has begun to expand beyond technical solutions that target statistical parity, but there needs to be a much more rigorous examination of AI’s politics and consequences, including close attention to AI’s classification practices and harms. This will require that the field center “non-technical” disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines keenly attuned to social context, including how difference is constructed, the work of classification, and its consequences.

5. Governments should mandate public disclosure of the AI industry’s climate impact. Given the significant environmental impacts of AI development, as well as the concentration of power in the AI industry, it is important for governments to ensure that large-scale AI providers disclose the climate costs of AI development to the public. As with similar requirements for the automotive and airline industries, such disclosure helps provide the foundation for more informed collective choices around climate and technology. Disclosure should include notifications that allow developers and researchers to understand the specific climate cost of their use of AI infrastructure. Climate-impact reporting should be separate from any accounting for offsets or other mitigation strategies. In addition, governments should use that data to ensure that AI policies take into account the climate impacts of any proposed AI deployment.

6. Workers should have the right to contest exploitative and invasive AI—and unions can help. The introduction of AI-enabled labor-management systems raises significant questions about worker rights and safety. The use of these systems—from Amazon warehouses to Uber and Instacart—pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color) by setting productivity targets linked to chronic injuries, psychological stress, and even death, and by imposing unpredictable algorithmic wage cuts that undermine economic stability. Workers deserve the right to contest such determinations, and to collectively agree on workplace standards that are safe, fair, and predictable. Unions have traditionally been an important part of this process, which underscores the need for companies to allow their workers to organize without fear of retaliation.

7. Tech workers should have the right to know what they are building and to contest unethical or harmful uses of their work. Over the last two years, organized tech workers and whistleblowers have emerged as a powerful force for AI accountability, exposing secretive contracts and plans for harmful products, from autonomous weapons to tracking-and-surveillance infrastructure. Given the general-purpose nature of most AI technology, the engineers designing and developing a system are often unaware of how it will ultimately be used. An object-recognition model trained to enable aerial surveillance could just as easily be applied to disaster relief as it could to weapons targeting. Too often, decisions about how AI is used are left to sales departments and executives, hidden behind highly confidential contractual agreements that are inaccessible to workers and the public. Companies should ensure that workers are able to track where their work is being applied, by whom, and to what end. Providing such information enables workers to make ethical choices and gives them power to collectively contest harmful applications.

8. States should craft expanded biometric privacy laws that regulate both public and private actors. Biometric data, from DNA to faceprints, is at the core of many harmful AI systems. Over a decade ago, Illinois adopted the Biometric Information Privacy Act (BIPA), which has now become one of the strongest and most effective privacy protections in the United States. BIPA allows individuals to sue for almost any unauthorized collection and use of their biometric data by a private actor, including for surveillance, tracking, and profiling via facial recognition. BIPA also shuts down the gray and black markets that sell data and make it vulnerable to breaches and exploitation. States that adopt BIPA should expand it to include government use, which will mitigate many of biometric AI’s harms, especially in parallel with other approaches, such as moratoriums and prohibitions.

9. Lawmakers need to regulate the integration of public and private surveillance infrastructures. This year, there was a surge in the integration of privately owned technological infrastructures with public systems, from “smart” cities to property tech to neighborhood surveillance systems such as Amazon’s Ring and Rekognition. Large tech companies like Amazon, Microsoft, and Google also pursued major military and surveillance contracts, further enmeshing those interests. Across Asia, Africa, and Latin America, multiple governments continue to roll out biometric ID projects that create the infrastructure for both state and commercial surveillance. Yet few regulatory regimes govern this intersection. We need strong transparency, accountability, and oversight in these areas, such as recent efforts to mandate public disclosure and debate of public-private tech partnerships, contracts, and acquisitions.

10. Algorithmic Impact Assessments must account for AI’s impact on climate, health, and geographical displacement. Algorithmic Impact Assessments (AIAs) help governments, companies, and communities assess the social implications of AI, and determine whether and how to use AI systems. Those using AIAs should expand them so that, in addition to issues of bias, discrimination, and due process, they also cover climate, health, and geographical displacement.

11. Machine learning researchers should account for potential risks and harms and better document the origins of their models and data. Advances in understanding of bias, fairness, and justice in machine learning research make it clear that assessments of risks and harms are imperative. In addition, using new mechanisms for documenting data provenance and the specificities of individual machine learning models should become standard research practice. Both Model Cards and Datasheets offer useful templates (a rough illustrative sketch follows this list). As a community, machine learning researchers need to embrace these analyses and tools to create an infrastructure that better considers the implications of AI.

12. Lawmakers should require informed consent for use of any personal data in health-related AI. The application of AI in healthcare requires greater protections around data. While the informed-consent process that biomedical researchers and healthcare professionals generally employ in clinical settings requires discussion of the risks and benefits involved, affirmative approval before proceeding, and reasonable opportunities to withdraw from the study or treatment, engineers and scientists commonly create training sets by scraping content from whatever public sources are available. In order to ensure a future that does not amplify and reinforce historic injustices and social harms, AI health systems need better informed-consent approaches and more research to understand their implications in light of systemic health inequities, the organizational practices of healthcare, and diverse cultural approaches to health.
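
As a rough illustration of the kind of documentation Recommendation 11 points to, the sketch below shows one way a research team might record model and dataset provenance in a structured, machine-readable form. The field names and example values are illustrative assumptions only; they are not a schema prescribed by this report or by the Model Cards and Datasheets proposals themselves.

```python
# Illustrative only: a minimal, hypothetical record of model and dataset
# provenance, loosely in the spirit of Model Cards and Datasheets.
# None of these field names come from a standard schema.
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class DatasetSheet:
    """Summary provenance for a training dataset."""
    name: str
    source: str                 # how and where the data was collected
    consent: str                # how (or whether) consent was obtained
    known_gaps: List[str] = field(default_factory=list)  # underrepresented groups or contexts


@dataclass
class ModelCard:
    """Summary of a model's intended use, data, and known limitations."""
    model_name: str
    intended_use: str
    out_of_scope_uses: List[str]
    training_data: DatasetSheet
    evaluation_notes: str       # e.g., performance disaggregated by subgroup
    known_limitations: List[str] = field(default_factory=list)


if __name__ == "__main__":
    # Hypothetical example values for illustration.
    card = ModelCard(
        model_name="example-classifier-v1",
        intended_use="internal research benchmarking only",
        out_of_scope_uses=["hiring decisions", "surveillance", "clinical use"],
        training_data=DatasetSheet(
            name="example-public-text-corpus",
            source="text scraped from publicly available web pages, 2019",
            consent="none obtained from the people whose data was collected",
            known_gaps=["non-English speakers", "users outside North America"],
        ),
        evaluation_notes="accuracy reported overall and disaggregated by self-reported gender",
        known_limitations=["not evaluated for downstream high-stakes decisions"],
    )
    print(json.dumps(asdict(card), indent=2))
```

Published alongside a model release, even a small record like this makes it easier for reviewers and affected communities to see where training data came from and which uses were never evaluated.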


EXECUTIVE SUMMARY

In last year’s report, we focused on AI’s accountability gap, and asked who is responsible when AI systems harm us, and how we might remedy those harms. Lack of accountability emerged as a real and substantial problem—one that governments, companies, and civil society were just beginning to grapple with, even as AI’s deployment into sensitive social domains accelerated.

This year we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI. AI Now’s 2019 report spotlights these growing movements, examining the coalitions involved and the research, arguments, and tactics used. We also examine the specific harms these coalitions are resisting, from AI-enabled management of workers, to algorithmic determinations of benefits and social services, to surveillance and tracking of immigrants and underrepresented communities. What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don’t. The way in which AI is increasing existing power asymmetries forms the core of our analysis, and from this perspective we examine what researchers, advocates, and policymakers can do to meaningfully address this imbalance.

From this analysis, the following key themes emerge:

  • The spread of algorithmic management technology in the workplace is increasing the power asymmetry between workers and employers. AI threatens not only to disproportionately displace lower-wage earners, but also to reduce wages, job security, and other protections for those who need them most.
  • Community groups, workers, journalists, and researchers—not corporate AI ethics statements and policies—have been primarily responsible for pressuring tech companies and governments to set guardrails on the use of AI.
  • Efforts to regulate AI systems are underway, but they are being outpaced by government adoption of AI systems to surveil and control.
  • AI systems are continuing to amplify race and gender disparities via techniques like affect recognition, which has no sound scientific basis.
  • Growing investment in and development of AI has profound implications in areas ranging from climate change, to the rights of healthcare patients, to the future of geopolitics and the inequities being reinforced across the global South.

As with our previous reports, we present these findings and concerns in the spirit of engagement, and with the hope that we can contribute to a more holistic understanding of AI that centers the perspectives and needs of those most affected, and that shapes technical development and deployment to these ends. [ . . . ]


Reprinted with permission. Creative Commons License.