Does AI really know best? Exploring the role of artificial intelligence in clinical care
Summary:
Hear from two SickKids researchers who are leading the discussion around ethical and care implications of using artificial intelligence in medical decision making.
From ChatGPT to robust machine-learning algorithms, artificial intelligence (AI) is already having an unquestionable impact on society and the world of scientific research. But how do we combine these advancements with human expertise and the wishes of patients and families when it comes to health care?
Dr. Melissa McCradden, a Bioethicist with the Bioethics Department and Associate Scientist in the Genetics & Genome Biology program, and Dr. Roxanne Kirsch, a Clinical Associate in the Bioethics Department and Staff Physician in the Cardiac Critical Care Unit, are experienced in systematically exploring ethical issues in health care. McCradden is also the John and Melinda Thompson Director of Artificial Intelligence in Medicine for Kids (AIM), a program that seeks to support SickKids innovators in developing and integrating AI into care delivery.
In a recent article published in Nature Medicine, these two researchers explore whether AI really does “know best” and where it can fit in the provision of medical care, and they address some common and confusing misconceptions about the role of AI in health care.
To unravel some of these thoughts a little further, we asked McCradden and Kirsch to discuss some of the key points behind the article.
Why is it important to think critically about the role of artificial intelligence in medicine?
“Humans have always loved technology, and although adoption has been slower in the health-care sector compared with the private sector, there is still evidence of hyperbolic claims and misunderstandings surrounding the role and nature of AI. Wording in media can be quite strong – for example, suggesting doctors can be replaced – and it gives a problematic impression of technology’s role and function,” explains McCradden, who is leading the bioethical framework development that governs ethical AI model integration at SickKids in her role at AIM.
“These misrepresentations and the resulting impressions can complicate the dynamic between the doctor and the patient and family. By being clear about what each brings to the table and what AI will likely do in the future of care – support doctors rather than replace them – we can retain the best of both AI and human decision-making.”
How can we help address and correct the power imbalances imposed by artificial intelligence?
“Approaching each patient encounter with an awareness of the imbalance allows the clinician to provide space for patient and family viewpoints,” explains Kirsch, who is also the Associate Chief, Equity, Diversity, Inclusion, Wellness and Faculty Development for Perioperative Services.
“For example, physicians can help ensure clear and informed consent around a treatment plan by explaining their thinking behind a recommendation they’re providing to a family, including the role that AI prediction played in the decision-making. That way, patients and families are not only aware of the extent to which AI may be impacting their care, but there’s also room carved out to talk about it with their doctor.”
Where does artificial intelligence fit in the future of clinical decision-making?
“AI’s unique benefit is its ability to compute many different and additional sources of information much faster than any other technology we’ve had before. However, like many of the test results and various inputs that are taken into consideration now, clinical decision-making still requires the clinician to interpret and consider those inputs in the context of the individual patient they are caring for,” says Kirsch. “Both Dr. McCradden and I advocate for clinicians to continue to centre shared decision-making and inclusive practices, with AI forming a part of the medical evidence that clinicians draw from to provide medical recommendations.”
McCradden adds: “We anticipate AI will provide an additional stream of high-quality information to support decision-making by the clinician. These sources of information should then be integrated with the patient’s values, goals and preferences to reach a decision about how to proceed with that person’s individual care plan.”
Want to learn more? Read the full article, “Patient wisdom should be incorporated into health AI to avoid algorithmic paternalism”.