News article by Andrej Kovacevic.
Published by Hacker Noon.
In 1950, Alan Turing first proposed a means to determine if a machine had developed the ability to think independently, giving rise to the concept that we now recognize as artificial intelligence (AI). Almost immediately afterward, researchers, journalists, and politicians began to ponder the implications of such a technology, wondering what sort of ethical constructs would be necessary to regulate it.
For decades, those discussions remained mostly theoretical. After all, nobody had come anywhere close to building a real, functional AI system. Today, however, with developers closing in on that goal, the idea of ethics in AI has roared back into the public consciousness. Technology CEOs are voicing concerns over what unchecked AI might do to society, ethics has become a frequent topic at AI conferences, and even the Pope has weighed in on the subject.
It’s clear that there are difficult questions to answer about how we, as a society, will mutually agree both to use – and not to use – AI technology. To help start a specific and productive conversation, here are three AI ethics questions we should settle as quickly as possible.
- How to Handle AI-Driven Crime
- Preventing Bias in AI-Powered Systems
- Managing Labor Displacement Due to AI
[ . . . ]
About the Author
Andrej Kovacevic is a dedicated writer and digital evangelist.