Researchers Propose Framework for AI Use in Health Care
- Email ckiz@andrew.cmu.edu
- Phone 412-554-0074
Health care organizations are looking to artificial intelligence (AI) tools to improve patient care, but their translation into clinical settings has been inconsistent, in part because evaluating AI in health care remains challenging. A multi-institutional team of researchers proposes a framework for using AI that includes practical guidance for applying values and that accounts not only for the tool’s properties but for the systems surrounding its use. The article was published in November.
“Regulatory guidelines and institutional approaches have focused narrowly on the performance of AI tools, neglecting knowledge, practices and procedures necessary to integrate the model within the larger social systems of medical practice,” said Alex John London, the K&L Gates Professor of Ethics and Computational Technologies in the Department of Philosophy at Carnegie Mellon and coauthor of the study. “Tools are not neutral; they reflect our values, so how they work reflects the people, processes and environments in which they are put to work.”
London is also director of Carnegie Mellon’s Center for Ethics and Policy and chief ethicist at Carnegie Mellon’s Block Center for Technology and Society.
London and his coauthors advocate for a conceptual shift in which AI tools are viewed as parts of a larger “intervention ensemble,” a set of knowledge, practices and procedures that are necessary to deliver care to patients. In previous work with other colleagues, London has applied this concept to pharmaceuticals and to autonomous vehicles. The approach treats AI tools as “sociotechnical systems,” and the authors’ proposed framework seeks to advance the responsible integration of AI systems into health care.
Previous work in this area has been largely descriptive, explaining how AI systems interact with human systems. The framework proposed by London and his colleagues is proactive, providing guidance to designers, funders and users on ensuring that AI systems can be integrated into workflows where they have the greatest potential to help patients. Their approach can also inform regulation and institutional oversight, as well as the appraisal, evaluation and responsible, ethical use of AI tools. To illustrate the framework, the authors apply it to AI systems developed for diagnosing more than mild diabetic retinopathy.
“Only a small majority of models evaluated through clinical trials have shown a net benefit,” said Melissa McCradden, a bioethicist at the Hospital for Sick Children, assistant professor of Clinical and Public Health at the Dalla Lana School of Public Health and coauthor on the study. “We hope our proposed framework lends precision to evaluation and will interest regulatory bodies exploring the kinds of evidence needed to support the oversight of AI systems.”
London and McCradden were joined on the study by Shalmali Joshi at Columbia University and James A. Anderson at The Hospital for Sick Children. The article is titled “A normative framework for artificial intelligence as a sociotechnical system in healthcare.”