Articles | December 9, 2017

Does Situationism Threaten Free Will and Moral Responsibility?

Journal article by Michael McKenna and Brandon Warmke. Published in the Journal of Moral Philosophy.

Abstract:

The situationist movement in social psychology has caused a considerable stir in philosophy. Much of this was prompted by the work of Gilbert Harman and John Doris. Both contended that familiar philosophical assumptions about the role of character in the explanation of action were not supported by experimental results. Most of the ensuing philosophical controversy has focused upon issues related to moral psychology and ethical theory. More recently, the influence of situationism has also given rise to questions regarding free will and moral responsibility. There is cause for concern that a range of situationist findings are in tension with the reasons-responsiveness putatively required for free will and moral responsibility. We develop and defend a response to the alleged situationist threat to free will and moral responsibility that we call pessimistic realism. We conclude on an optimistic note, exploring the possibility of strengthening our agency in the face of situational influences.

Excerpt:

(A thought experiment involving two artificial intelligence engineers who come to different conclusions about how to conceptualize certain limits of the machine they designed.)

In order to tease out our remaining pessimistic worries, imagine two A.I. engineers, Geno and Georgiana. Geno and Georgiana work for an auto repair company, and their job is to develop a new and improved machine sensitive to emissions problems in cars. Drive a car over a ramp as you move through any gas station, and the new gizmo will diagnose a vast array of emissions problems with the car, rule out false positives if the car has bad gas but otherwise functions well, pinpoint what sorts of emissions problems are at issue, and suggest repairs and their urgency. The machine is, in a sense, a limited reasons-responsive “agent.” Indeed, we can even attribute to it receptivity and reactivity characteristics. Maybe sometimes, when another car is too close, it gets “noise” that looks like a good reason for a diagnosis but isn’t. And suppose the A.I. system is reasonably able to sort this out and eliminate the bad noise. So it has “receptive” resources. Also, suppose it is able to yield different reports: it can tell the operator of the car that the problem is not too bad, that the car would still pass an emissions test and is merely less than optimally efficient. Or instead it can report that the car has a vapor leak and is dangerous to drive, and so on. Thus it can be differentially reactive. Suppose also that it is able to “learn,” perhaps getting signals sent from computer chips in the cars so that it can upgrade, and so on. [ . . . ]
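As a gloss on the distinction the excerpt draws, a minimal sketch in Python may help: “receptivity” is the machine’s capacity to screen out readings that look like reasons for a diagnosis but aren’t, and “reactivity” is its capacity to issue different reports for different inputs. Every name and number in the sketch (the class, the reading format, the thresholds) is a hypothetical illustration under assumed details, not anything specified in the paper.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """One sensor reading: an emissions level plus a flag for whether
    another vehicle was close enough to contaminate the signal."""
    emissions_level: float   # hypothetical normalized scale, 0.0-1.0
    adjacent_vehicle: bool   # the source of potential "noise"


class EmissionsDiagnoser:
    """Toy model of the thought experiment's limited reasons-responsive machine."""

    # Hypothetical thresholds, chosen only for illustration.
    PASS_THRESHOLD = 0.3     # below this: passes, merely suboptimal
    DANGER_THRESHOLD = 0.8   # above this: e.g. a vapor leak

    def receive(self, readings: list[Reading]) -> list[float]:
        """'Receptivity': discard noisy readings that look like
        good reasons for a diagnosis but aren't."""
        return [r.emissions_level for r in readings if not r.adjacent_vehicle]

    def react(self, levels: list[float]) -> str:
        """'Reactivity': yield different reports for different inputs,
        rather than a single undifferentiated verdict."""
        if not levels:
            return "No reliable signal; re-run the test."
        avg = sum(levels) / len(levels)
        if avg < self.PASS_THRESHOLD:
            return "Would pass an emissions test; merely less than optimally efficient."
        if avg > self.DANGER_THRESHOLD:
            return "Possible vapor leak; dangerous to drive."
        return "Emissions problem detected; repair recommended."


if __name__ == "__main__":
    machine = EmissionsDiagnoser()
    readings = [
        Reading(0.9, adjacent_vehicle=True),    # noise from a nearby car: discarded
        Reading(0.2, adjacent_vehicle=False),
        Reading(0.25, adjacent_vehicle=False),
    ]
    print(machine.react(machine.receive(readings)))
```

On this toy reading, the “learning” the authors mention would amount to updating thresholds like these from signals sent by the cars’ onboard chips.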