News  |  August 17, 2019

Problems in AI Alignment that philosophers could potentially contribute to

Post by Wei Dai, published on the AI Alignment Forum.

Excerpt:

It occurs to me that another reason for the lack of engagement by people with philosophy backgrounds may be that philosophers aren’t aware of the many philosophical problems in AI alignment that they could potentially contribute to. So here’s a list of philosophical problems that have come up just in my own thinking about AI alignment.

  • Decision theory for AI / AI designers
    • How to resolve standard debates in decision theory?
    • Logical counterfactuals
    • Open source game theory
    • Acausal game theory / reasoning about distant superintelligences
  • Infinite/multiversal/astronomical ethics
    • Should we (or our AI) care much more about a universe that is capable of doing a lot more computations?
    • What kinds of (e.g. spatial-temporal) discounting are necessary and/or desirable?

[ . . . ]