Journal article by Nicolas Terry.
Published in Yale Journal of Law and Technology.
Advances in healthcare artificial intelligence (AI) will seriously challenge the robustness and appropriateness of our current healthcare regulatory models. These models primarily regulate medical persons using the "practice of medicine" touchstone or medical machines that meet the FDA definition of "device." However, neither model seems particularly appropriate for regulating machines practicing medicine or the complex man-machine relationships that will develop. Additionally, healthcare AI will join other technologies such as big data and mobile health apps in highlighting current deficiencies in healthcare regulatory models, particularly in data protection. The article first suggests a typology for healthcare AI technologies based in large part on their potential for substituting for humans, and follows with a critical examination of the existing healthcare regulatory mechanisms (device regulation, licensure, privacy and confidentiality, reimbursement, market forces, and litigation) as they would be applied to AI. The article then explores the normative principles that should underlie regulation and sketches out the imperatives for a new regulatory structure: quality, safety, efficacy, a modern data protection construct, cost-effectiveness, empathy, health equity, and transparency. Throughout, it is argued that the regulation of healthcare AI will require fresh thinking underpinned by broadly embraced ethical and moral values, together with the adoption of holistic, universal, contextually aware, and responsive regulatory approaches to what will be major shifts in the man-machine relationship.
About the Author
Nicolas Terry is Hall Render Professor of Law, Executive Director, Hall Center for Law and Health, Indiana University Robert H. McKinney School of Law.