4. Bias and inequality: If the data used to train an AI system contain even a hint of bias, according to the report, that bias will carry over into the resulting AI system.
“For instance, if the data available for AI are principally gathered in academic medical centers, the resulting AI systems will know less about—and therefore will treat less effectively—patients from populations that do not typically frequent academic medical centers,” Price II wrote. “Similarly, if speech-recognition AI systems are used to transcribe encounter notes, such AI may perform worse when the provider is of a race or gender underrepresented in training data.”
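As a rough illustration of the mechanism Price II describes (not an example from the report), the sketch below trains a model on synthetic data dominated by one patient group and then reports accuracy separately for each group. The groups, features, and model are hypothetical placeholders, not real clinical data or a real clinical AI system.

```python
# Minimal illustrative sketch: underrepresentation in training data can show
# up as unequal model performance across patient subgroups. All data here are
# synthetic; "Group A" and "Group B" are hypothetical populations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def make_cohort(n, p_group_b, rng):
    """Simulate a cohort where the outcome depends on different features
    for Group A vs. Group B (a crude stand-in for population differences)."""
    group = rng.choice(["A", "B"], size=n, p=[1 - p_group_b, p_group_b])
    x = rng.normal(size=(n, 2))
    y = np.where(group == "A", x[:, 0] > 0, x[:, 1] > 0).astype(int)
    return x, y, group


rng = np.random.default_rng(0)

# Training data skewed heavily toward Group A (e.g., patients who frequent
# academic medical centers).
x_train, y_train, _ = make_cohort(2000, p_group_b=0.05, rng=rng)
model = LogisticRegression().fit(x_train, y_train)

# Evaluate on a balanced test cohort and report accuracy per subgroup.
x_test, y_test, group_test = make_cohort(1000, p_group_b=0.5, rng=rng)
pred = model.predict(x_test)
for g in ("A", "B"):
    mask = group_test == g
    print(f"Group {g} accuracy: {accuracy_score(y_test[mask], pred[mask]):.2f}")
# Typically prints high accuracy for Group A and near-chance accuracy for
# Group B: the model has learned the majority group's pattern and treats the
# underrepresented group less effectively.
```

Per-subgroup evaluation of this kind is one common way to surface the disparity before a system reaches patients, though it only detects the problem; fixing it requires more representative data or other mitigations.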
5. Professional realignment: One long-term risk of implementing AI technology is that it could lead to “shifts in the medical profession.”
“Some medical specialties, such as radiology, are likely to shift substantially as much of their work becomes automatable,” Price II wrote. “Some scholars are concerned that the widespread use of AI will result in decreased human knowledge and capacity over time, such that providers lose the ability to catch and correct AI errors and further to develop medical knowledge.”
6. The nirvana fallacy: The nirvana fallacy, Price II explained, occurs when a new option is judged against an idealized standard rather than against the status quo it would replace. In other words, patient care may not be perfect after AI is implemented, but that alone is not a reason to keep doing things the way they have always been done.
Could this fallacy take hold and lead to inaction in the American healthcare system?