Article by Asher Wilk.
Cyberspace and the development of intelligent systems using Artificial Intelligence (AI) create new challenges for computer professionals, data scientists, regulators, and policy makers. For example, self-driving cars raise new technical, ethical, legal, and public policy issues. This paper proposes a course named Computers, Ethics, Law, and Public Policy, and suggests a curriculum for such a course. It also presents ethical, legal, and public policy issues relevant to building and using intelligent systems.
Robots and intelligent systems are increasingly equipped with artificial intelligence (AI), and many more will be in the near future. Firms aim to design intelligent systems capable of making their own decisions (autonomous systems). Such systems will need to include moral components (programs) that guide them. For instance, such a component might decide whether a self-driving car should swerve to avoid hitting a solid obstacle, protecting its own occupants even if the maneuver leads to hitting a car in another lane. This ethical dilemma recalls the trolley problem (e.g., a trolley is coming down a track, and a person at a switch must choose whether to let the trolley follow its course and kill five people or to redirect it to another track and kill just one).
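To make the idea of a "moral component" concrete for students, one could show a deliberately naive sketch of how a vehicle might rank candidate maneuvers by expected harm. This is purely illustrative and not from the paper; all names (Maneuver, expected_harm, choose_maneuver) and the weighting scheme are hypothetical assumptions, and the point of the exercise is precisely that choosing the weights is an ethical and policy decision, not a purely technical one.

```python
# Illustrative sketch (hypothetical, not a real system): a naive way a
# "moral component" could rank candidate maneuvers for an autonomous vehicle.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated probability of harming occupants (0..1)
    bystander_risk: float  # estimated probability of harming others (0..1)

def expected_harm(m: Maneuver, occupant_weight: float = 1.0) -> float:
    """Combine risks into one score; the weight encodes an ethical policy choice."""
    return occupant_weight * m.occupant_risk + m.bystander_risk

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest combined expected harm."""
    return min(options, key=expected_harm)

options = [
    Maneuver("stay_in_lane", occupant_risk=0.9, bystander_risk=0.0),
    Maneuver("swerve_left", occupant_risk=0.1, bystander_risk=0.5),
]
print(choose_maneuver(options).name)  # swerve_left under equal weighting
```

In a classroom discussion, varying `occupant_weight` (e.g., setting it above 5 so "stay_in_lane" wins) shows how a single parameter silently embeds a moral stance, which is exactly the kind of design decision the proposed course asks students to examine.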
This paper suggests what should be taught in a course named Computers, Ethics, Law, and Public Policy, intended for those involved with AI, new technologies, computer science, information science, and engineering. It presents teaching strategies and suggests teaching ethics and law through examples and case studies that demonstrate ethical and legal decision-making. Nowadays, education should not only be about acquiring knowledge, but also about developing critical thinking and decision-making. This paper is based on my experience teaching ethics and law to computer science students and to those studying international policy . . .