A new research report from the Yale School of Medicine offers an in-depth look at how biased artificial intelligence can affect clinical outcomes. The study specifically focuses on the different stages of AI model development and shows how data integrity issues can impact health equity and quality of care.
WHY IT MATTERS
Published earlier this month in PLOS Digital Health, the research provides both real and hypothetical illustrations of how AI bias negatively affects healthcare delivery – not just at the point of care, but at every stage of medical AI development: data training, model development, publication and implementation.
“Bias in, bias out,” said the study’s lead author, John Onofrey, assistant professor of radiology and biomedical imaging and of urology at the Yale School of Medicine, in a press release.
“Having worked in the machine learning/AI field for many years now, the idea that there are biases in algorithms is not surprising,” he said. “However, listing all the potential ways bias can enter the AI learning process is incredible. It makes mitigating bias seem like a daunting task.”
As the study notes, bias can appear almost anywhere in the algorithm development process.
This can happen in “data characteristics and labels, model development and evaluation, deployment and publication,” the researchers explain. “Insufficient sample sizes for certain patient groups can lead to suboptimal performance, algorithm underestimation and clinically meaningless predictions. Missing patient data can also produce biased model behavior, including data that is capturable but non-randomly missing, such as diagnosis codes, and data that is generally unavailable or difficult to capture, such as social determinants of health.”
Meanwhile, “expert-annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance measures during model development can mask bias and diminish a model’s clinical utility. When applied to data outside the training cohort, model performance may deteriorate compared to prior validation, and may do so differentially across subgroups.”
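To make that subgroup-performance point concrete, a simple audit of an already-trained classifier can surface the problems the researchers describe: small subgroup sample sizes, non-randomly missing labels and discrimination that degrades differently across groups. The sketch below is illustrative only; the dataframe, the column names ("race_ethnicity", "outcome") and the use of scikit-learn are assumptions, not details from the Yale study.

```python
# Illustrative subgroup audit for a trained binary classifier.
# Column names and the grouping variable are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_audit(df: pd.DataFrame, model, features: list[str],
                   label: str = "outcome",
                   group_col: str = "race_ethnicity") -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        labeled = sub.dropna(subset=[label])   # rows with a known outcome
        auc = float("nan")
        if labeled[label].nunique() == 2:      # need both classes for AUC
            scores = model.predict_proba(labeled[features])[:, 1]
            auc = roc_auc_score(labeled[label], scores)
        rows.append({
            "group": group,
            "n": len(sub),                                   # subgroup sample size
            "label_missing_rate": sub[label].isna().mean(),  # non-random gaps?
            "auc": auc,                                      # per-group discrimination
        })
    return pd.DataFrame(rows)
```

A large AUC gap between groups, a tiny sample size or a high missing-label rate in one subgroup are the kinds of warning signs the researchers associate with “clinically meaningless predictions.”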
And, of course, the way clinical end users interact with AI models can introduce bias of its own.
Ultimately, “where AI models are developed and published, and by whom, impacts the trajectories and priorities of future medical AI development,” the Yale researchers say.
They note that all efforts to mitigate this bias – “collection of large and diverse data sets, statistical debiasing methods, in-depth model evaluation, emphasis on model interpretability, and standardized reporting requirements on bias and transparency” – must be implemented carefully, with a watchful eye on how these safeguards work to prevent adverse effects on patient care.
“Prior to concrete implementation in clinical settings, rigorous validation through clinical trials is essential to demonstrate unbiased application,” they said. “Addressing bias throughout the model development stages is crucial to ensure all patients benefit equitably from the future of medical AI.”
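Of the mitigations the researchers list, “statistical debiasing methods” is perhaps the easiest to illustrate. One common approach is reweighting: each training example gets a weight so that group membership and outcome look statistically independent before the model is fit, in the spirit of Kamiran and Calders’ reweighing technique. The sketch below uses synthetic data, hypothetical column names and scikit-learn’s logistic regression as stand-ins; it is not code from the study.

```python
# Minimal sketch of reweighting-based debiasing on a synthetic cohort.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group: pd.Series, y: pd.Series) -> pd.Series:
    """weight(i) = P(group_i) * P(y_i) / P(group_i, y_i), estimated from the data."""
    p_group = group.value_counts(normalize=True)
    p_y = y.value_counts(normalize=True)
    p_joint = pd.crosstab(group, y, normalize=True)
    return pd.Series(
        [p_group[g] * p_y[lbl] / p_joint.loc[g, lbl] for g, lbl in zip(group, y)],
        index=y.index,
    )

# Tiny synthetic cohort: one numeric feature, a sensitive attribute, a binary label.
train = pd.DataFrame({
    "age":     [34, 61, 45, 70, 52, 29, 66, 58],
    "group":   ["A", "A", "A", "A", "A", "B", "B", "B"],
    "outcome": [0, 1, 0, 1, 1, 0, 0, 1],
})

weights = reweighing_weights(train["group"], train["outcome"])
model = LogisticRegression().fit(train[["age"]], train["outcome"],
                                 sample_weight=weights)
```

Under-represented group-outcome combinations receive larger weights, so they contribute proportionally to training rather than being drowned out by the majority group.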
But the report, “Bias in medical AI: Implications for clinical decision-making,” offers some suggestions for mitigating this bias, with the goal of improving health equity.
For example, previous research has shown that using race as a factor in estimating kidney function can result in Black patients waiting longer to be placed on transplant lists. The Yale researchers offer several recommendations to help future AI algorithms use more precise metrics, such as zip code and other socioeconomic factors.
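The kind of feature substitution the researchers recommend can be sketched in a few lines: drop race as a direct model input and instead join zip-code-linked socioeconomic measures onto the patient record. All data, values and column names below are hypothetical placeholders, not artifacts from the study.

```python
# Illustrative feature substitution: replace a race column with
# area-level socioeconomic features keyed on zip code.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "zip":        ["06510", "06511", "06511"],
    "race":       ["Black", "White", "Asian"],
    "creatinine": [1.1, 0.9, 1.3],
})

# Hypothetical zip-level socioeconomic table (e.g., derived from census data).
zip_sdoh = pd.DataFrame({
    "zip":               ["06510", "06511"],
    "deprivation_index": [0.72, 0.35],
    "median_income":     [41000, 68000],
})

features = (
    patients
    .merge(zip_sdoh, on="zip", how="left")   # add area-level SDOH features
    .drop(columns=["race"])                  # remove race as a direct input
)
print(features)
```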
ON THE RECORD
“Greater capture and use of social determinants of health in medical AI models for clinical risk prediction will be paramount,” said James L. Cross, a first-year medical student at the Yale School of Medicine and first author of the study, in a press release.
“Bias is a human problem,” added Yale Associate Professor of Radiology and Biomedical Imaging and study co-author Dr. Michael Choma. “When we talk about ‘bias in AI,’ we need to remember that computers learn from us.”
Mike Miliard is Editor-in-Chief of Healthcare IT News
Email the author: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.