Conference Paper by Andrea Loreggia, Nicholas Mattei, Francesca Rossi, and Kristen Brent Venable.
Presented at the 2018 AAAI/ACM Conference on AI, Ethics, and Society.
If we want people to trust AI systems, we need to provide the systems we create with the ability to discriminate between what humans would consider good and bad decisions. The quality of a decision should not be based only on the preferences or optimization criteria of the decision makers, but also on other properties related to the impact of the decision, such as whether it is ethical, or whether it complies with feasibility constraints, priorities, or safety regulations. The CP-net formalism is a convenient and expressive way to model preferences, providing an effective, compact way to qualitatively model preferences over outcomes, i.e., decisions, with a combinatorial structure. Incorporating ethical, moral, or norm-based constraints into a decision context means that the subjective preferences of the decision makers are not the only source of information we should consider. Indeed, depending on the context, we may have to consider specific ethical principles derived from an appropriate ethical theory, or various laws and norms. While preferences are important, when preferences and ethical principles are in conflict, the principles should override the subjective preferences of the decision maker. Therefore, it is essential to have well-founded techniques to evaluate whether preferences are compatible with a set of ethical principles, and to measure how much these preferences deviate from the ethical principles.
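To make the CP-net formalism mentioned above concrete, here is a minimal sketch of a CP-net over two illustrative binary features (a "main course" and a "wine" choice, a standard textbook example; the variable names and structure are assumptions, not taken from the paper). Each variable carries a conditional preference table (CPT) mapping each assignment of its parents to the variable's preferred value, and outcomes are improved by single-variable "improving flips", which is the basic CP-net semantics.

```python
# Illustrative CP-net sketch: each variable has a list of parents and a CPT
# mapping each tuple of parent values to that variable's preferred value.
# The concrete variables ("main", "wine") are hypothetical examples.
cpnet = {
    "main": {"parents": [], "cpt": {(): "fish"}},  # fish > meat, unconditionally
    "wine": {"parents": ["main"],
             # preferred wine depends on the main course
             "cpt": {("fish",): "white", ("meat",): "red"}},
}

def preferred_value(var, outcome):
    """Return the preferred value of `var` given the parents' values in `outcome`."""
    parent_values = tuple(outcome[p] for p in cpnet[var]["parents"])
    return cpnet[var]["cpt"][parent_values]

def improving_flips(outcome):
    """Yield every outcome reachable by one improving flip: changing a single
    variable to its preferred value given its parents' current assignment."""
    for var in cpnet:
        best = preferred_value(var, outcome)
        if outcome[var] != best:
            better = dict(outcome)
            better[var] = best
            yield better

# A dominated outcome and its one-flip improvements under this CP-net:
outcome = {"main": "meat", "wine": "white"}
for better in improving_flips(outcome):
    print(better)
```

A sequence of such improving flips induces the qualitative preference order over outcomes; comparing two outcomes (dominance testing) then amounts to asking whether one is reachable from the other by a chain of flips.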