Blog post by Lindsey Anderson. Published by Access Now.
Across the globe, we are seeing examples of how artificial intelligence can be implemented in ways that either benefit or harm societies. In our new report, Human Rights in the Age of Artificial Intelligence, we look at the implications of the growth in AI-powered technologies through a human rights lens. In addition to the report, we encourage you to review an accompanying case study which examines how AI is used to conduct surveillance.
Why human rights matter in the AI debate
Imagine you are a farmer struggling to maintain your small family farm. You have lost crops to drought and pests in recent years, and you’re thinking of selling the property because you are going into increasing debt. Luckily for you, in the past few years AI has paired up with increasingly affordable Internet of Things devices to enable precision farming. You install sensors in your fields and hook them up to an AI system that pulls together real-time data from the sensors and combines it with satellite imagery and weather data. The system helps you manage scarce water resources by identifying optimal times to irrigate, and helps you catch pest infestations and diseases before they spread. Your farm is now more productive than ever before, and you are no longer at risk of losing it.
AI development has taken off in recent years, and although it can be used in ways that benefit society — advancing the diagnosis and treatment of disease, revolutionizing transportation and urban living, and mitigating the effects of climate change — AI can also be used in ways that result in significant harm. The same data processing and analysis capabilities of AI that are used, for example, to measure and respond to demands on public infrastructure can also enable systems for mass surveillance. AI can be used to identify and discriminate against the most vulnerable in society, and it may transform the economy so quickly that no job retraining program can keep up. Additionally, the complexity of AI systems means that their outputs are often hard, if not impossible, to fully explain. We are deploying these opaque systems rapidly and often carelessly, yet the use of AI for data analytics and algorithmic decision-making can have an immediate, negative impact on people’s lives, with the potential to hurt our rights on a scale never seen before. [ . . . ]