
Center for Medical Ethics and Health Policy Staff Publications
Publication Date
12-1-2023
Journal
Digital Society
DOI
10.1007/s44206-023-00073-z
PMID
38596344
PMCID
PMC11003475
PubMedCentral® Posted Date
4-9-2024
PubMedCentral® Full Text Version
Author MSS
Published Open-Access
yes
Keywords
Artificial intelligence, Responsibilization, Black box healthcare AI, Responsibility gaps, Shared responsibility
Abstract
As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions about how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So-called “responsibility gaps” occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps are generated by “black box” healthcare AI, (2) the presence of responsibility gaps (if unaddressed) creates serious moral problems, (3) a suitable solution is for relevant stakeholders to voluntarily responsibilize the gaps, taking on some moral responsibility for things they are not, strictly speaking, blameworthy for, and (4) should this solution be taken, black box healthcare AI will be permissible in the provision of healthcare.
Included in
Artificial Intelligence and Robotics Commons, Bioethics and Medical Ethics Commons, Health Policy Commons