AI algorithms increasingly used to diagnose and treat patients may have biases and blind spots that could hinder health care for Black and Latinx patients, according to research co-authored by a Rutgers-Newark data scientist.
Fay Cobb Payton, professor of mathematics and computer science, has studied how AI technology and algorithms often rely on data that can lead to generalizations about patients of color, without considering their cultural background and their daily living conditions.
Payton, who serves as special advisor to the chancellor for inclusive innovation at Rutgers-Newark, recently co-authored findings on AI and healthcare inequities for The Milbank Quarterly, which explores population health and health policy. Other authors were Thelma C. Hurd of the Institute on Health Disparities, Equity and the Exposome at Meharry Medical College, and Darryl B. Hood of the Ohio State University College of Public Health.
Payton is co-founder of the Institute for Data Sciences, Research and Innovation (IDRIS) at Rutgers, which combines interdisciplinary research in medicine, public health, business, cultural studies and technology. Part of its mission is to find the best ways to use data to serve the community and to uncover the intersections of data, technology and society across all fields.
The study co-authored by Payton found that, because of a lack of representation among AI developers and the underrepresentation of Black and brown patients in medical research, algorithms can perpetuate false assumptions and miss nuances that a more diverse pool of developers and patient data could capture. Healthcare providers can also play an important role in ensuring that treatment transcends the algorithm.
“How does the data come into the system, and does it reflect the population we are trying to serve?” Payton asked. “It is also a human being, such as a provider, who does the interpretation. Have we determined whether there is a human in the loop at any point? Some form of human intervention is needed throughout.”
The algorithms rely on “big data,” such as medical records, imaging and biomarker values. But they don’t incorporate the “small data” of social determinants of health, such as access to transportation and healthy food, and a patient’s community and work schedule, the study found. Those gaps can make it more difficult for patients to comply with treatment plans that require frequent doctor visits, physical activity and other measures.
“This doesn’t take into account the cost of fresh produce. It may not take into account that a person has no access to transportation but works two jobs. They may be trying to do everything the doctors say, but the assumption is that they’re not adhering, because no one has explained to them why,” Payton said.
“This creates a trope-like characterization of Black patients and may impact patients’ perceptions of the trustworthiness of the healthcare system,” according to the study.
“The algorithm can suggest a treatment plan,” Payton said. “It may indicate what resources should be used to treat the patient, and those recommendations may not take into account where that patient lives, works and plays.”
Without socioeconomic considerations based on patients’ daily lives, treatment could suffer. But results can improve with more information. For example, doctors can prescribe longer-acting medications and procedures that don’t require travel, Payton said.
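As a rough illustration of that point (this is not code from the study, and every field name and rule below is invented), the gap between the “big data” an algorithm sees and the “small data” it misses might look like this:

```python
# A hypothetical sketch, not from the study: the same clinical picture
# can warrant a different plan once daily-life constraints are modeled.
from dataclasses import dataclass

@dataclass
class ClinicalRecord:
    """The 'big data' inputs a typical risk algorithm already consumes."""
    age: int
    hba1c: float           # biomarker value
    systolic_bp: int
    prior_admissions: int

@dataclass
class SocialContext:
    """The 'small data' the study says algorithms tend to omit."""
    reliable_transport: bool
    fresh_food_access: bool   # e.g., cost/availability of fresh produce
    jobs_held: int

def feasible_plan(record: ClinicalRecord, context: SocialContext) -> str:
    """Toy logic mirroring Payton's example: prefer care that does not
    require travel when transportation or time is a barrier."""
    if context.reliable_transport and context.jobs_held < 2:
        return "frequent in-person visits"
    return "longer-acting medication and follow-up that requires no travel"

plan = feasible_plan(
    ClinicalRecord(age=58, hba1c=8.1, systolic_bp=145, prior_admissions=2),
    SocialContext(reliable_transport=False, fresh_food_access=False, jobs_held=2),
)
print(plan)  # -> longer-acting medication and follow-up that requires no travel
```

The specific rules here are beside the point; what matters is that the second set of inputs exists in the model at all.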
Algorithmic biases may also fail to account for disparities in health outcomes, such as an overall mortality rate that is nearly 30 percent higher for non-Hispanic Black patients than for non-Hispanic white patients, a figure that may also be attributed to higher rates of certain diseases.
“Overall, they are experiencing more heart disease, stroke and diabetes, and Black women suffer from more severe breast cancer,” Payton said. “That’s what the data says. However, preliminary research shows that algorithms can be racist even when Black patients are sicker than the rest of the population. This can lead to misdiagnosis, inadequate access to resources, or delays in treatment.”
The algorithm’s inability to take into account a patient’s location raises additional concerns.
“Most U.S. patient data comes from three states: California, Massachusetts and New York. That in itself is problematic. If a patient lives in rural Mississippi, they may not be able to take a reliable train or bus. These kinds of things impact the delivery of care and potentially the quality of care,” Payton said.
One solution lies in greater diversity among technology developers, but also among doctors, according to the research by Payton and her colleagues.
Only 5 percent of active physicians in 2018 identified as Black and about 6 percent as Hispanic or Latinx, according to sources cited in the study. The percentage of developers from underrepresented groups is even lower.
“It’s important to understand the biases that exist in traditional training and among healthcare professionals,” Payton said. “It is essential that developers have both technical skills and domain expertise, to better understand healthcare and the field in question.”
There also needs to be a more rigorous process for reviewing and evaluating the data fed to algorithms, so that it is not subject to biases that could exacerbate health care disparities, according to the study by Payton and her colleagues.
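The study does not spell out a single mechanism for such a review, but one common form it takes in practice is a subgroup audit, which compares a model’s error rates across patient groups before deployment. A minimal sketch, with invented data:

```python
# A minimal, hypothetical subgroup audit: compare the model's
# false-negative rate (sick patients it overlooks) across groups.
# The records below are invented for illustration.
from collections import defaultdict

# (group, model_flagged_high_risk, actually_high_risk)
records = [
    ("A", True, True), ("A", False, True), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, True),
]

stats = defaultdict(lambda: {"missed": 0, "positives": 0})
for group, flagged, actual in records:
    if actual:
        stats[group]["positives"] += 1
        if not flagged:
            stats[group]["missed"] += 1  # a sick patient the model missed

for group, s in sorted(stats.items()):
    fnr = s["missed"] / s["positives"]
    print(f"group {group}: false-negative rate = {fnr:.0%}")

# A large gap between groups (here 50% vs. 0%) is a signal to
# re-examine the training data before the algorithm reaches patients.
```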
With the right safeguards in place, AI’s opportunities to help patients can expand, Payton said.
The research offers a list of recommendations that could reduce potential harm, including collective action among healthcare stakeholders, developers, end users and policymakers throughout the AI lifecycle. AI has already made a difference in the lives of some patients.
“Predictive analytics can analyze large data sets to detect patterns and risk factors associated with disease. AI can help healthcare professionals analyze images to detect disease. Generative AI (GenAI) that uses text, images, audio and video of events can inform patient monitoring,” Payton said. “Despite concerns related to bias, privacy, security and several other issues, AI has the potential to do good.”