The American College of Physicians (ACP) has issued a policy position paper outlining 10 recommendation statements on the use of artificial intelligence (AI) and machine learning technologies in health care applications, such as clinical documentation, diagnostic imaging processing, and clinical decision support, published in the Annals of Internal Medicine.
The use of AI tools in medicine has been steadily increasing since the 1970s; however, the United States Food and Drug Administration approved more AI technologies for clinical application between January 2020 and October 2023 than it had in the previous 25 years. The ACP recognizes that the recent rise of generative AI tools and their wide applicability has generated significant interest and enthusiasm for clinical use, thereby necessitating oversight and regulation.
In its first recommendation statement, the ACP states that AI-enabled technologies should “complement and not supplant” the logic and decision-making of physicians and other clinicians, and that a physician’s training and observations must continue to be the main focus of patient care. Furthermore, the ACP firmly states that AI technology should be consistent with the principles of medical ethics for enhancing patient care, clinical decision-making, the patient–physician relationship, and health care equity and justice.
The ACP recommends that patients, physicians, and other clinicians be made aware when AI tools are being used in treatment and decision-making and, therefore, advocates for transparency, clarity, and education for providers on where AI-generated information enters workflows. The group states that the privacy and confidentiality of patient and clinician data should be prioritized and that clinical safety, effectiveness, and health equity should be a top priority for AI developers, researchers, implementers, and regulating bodies. The ACP also recommends implementing a continuous improvement process that includes feedback based on end-user testing in real-world clinical contexts with diverse patient demographics, as well as peer-reviewed research.
AI and other emerging technologies should be designed to reduce disparities in health and health care, noted the ACP, which advocates that Congress, the US Department of Health and Human Services, and other entities support research and analysis of AI data in order to identify disparate or discriminatory effects.
The ACP states that AI developers should be held accountable for the performance of their models and calls for a coordinated federal AI strategy involving governmental and nongovernmental regulatory bodies. Collaborative research and development efforts from these bodies should focus on “ways to mitigate biases in any established or future algorithmic technology.” Existing and future AI-related policies and guidance should be enforced and should allow for the reporting of AI-related adverse events.
The ACP also states that AI tools should be used to lower clinician burden. For example, using AI to perform patient intake, scheduling, and prior authorization functions could reduce the cognitive burden on providers.
The ACP recommends providing training for individuals at all levels of medical education so that physicians are able to effectively practice in AI-enabled health care systems. “Physicians are far less likely to use AI tools if they do not understand, or trust, the output of AI systems,” the authors stated. “Therefore, to increase and improve AI use and usefulness, the creation and dissemination of clear and comprehensive educational materials to clinicians and end users of AI is crucial.”
Finally, the environmental effects of AI should be investigated and mitigated throughout the AI process, according to the ACP, because the lack of standardized measures currently limits the ability to address the potentially negative climate effects of AI.
“The widespread clinical safety implications of these advanced AI and [machine learning] tools are not yet fully understood by physicians, and the totality of risks for patients has yet to be identified,” the ACP authors stated. “Along with best practices, research, regulatory guidance, and oversight are needed to ensure the safe, effective, and ethical use of these technologies.”