Patient-centered care has been at the forefront of healthcare for over a decade, with scholarship documenting its importance to both healthcare organizations and patients [1, 2]. By focusing on the ideals of whole-person care and shared decision-making [3], patient-centered care has improved health outcomes, reduced medical costs, and enhanced patient and physician satisfaction [2, 4, 5]. With the rapid advancement of artificial intelligence (AI) in healthcare, some scholars are concerned that developments in patient-centered care will be overlooked in favor of technological improvements or, more importantly, that the use of AI will directly conflict with the ideals of patient-centered care [3]. While AI has the potential to revolutionize healthcare [6], concerns over how AI is incorporated within the context of patient-centered care need to be addressed to create patient buy-in and ensure the effective adoption of these technologies. Moreover, design choices that promote decision self-efficacy and incorporate patient perspectives on this emerging technology are needed so that the use of AI in healthcare is beneficial and equitable for all patients [6, 7, 8, 9].
This research investigates public perceptions of and attitudes toward AI-enabled healthcare, including current levels of comfort with several proposed/active applications of AI for health-related tasks. In our survey, we defined AI-enabled healthcare as the use of computers to imitate or simulate human intelligence related to patient administration, clinical decision support, patient monitoring, and healthcare interventions [10]. Drawing on five common elements of patient-centered care in family medicine [11], this paper analyzes data from a web-based survey of 600 U.S.-based adults residing in the state of Florida. Using quantitative analysis of multiple-choice questions and qualitative content analysis of open-ended responses, this study sheds light on the role of decision self-efficacy in AI-enabled healthcare and highlights potential concerns and opportunities for incorporating AI in patient-centered care from the viewpoints of public respondents.
Elements of patient-centered care
Patient-centered care has become an integral part of the American healthcare system. Focusing on the ethical and moral implications of involving patients in the healthcare process on their own terms [1], the implementation of patient-centered care has required a paradigm shift for doctors and medical professionals toward greater empathy and collaboration with patients [1, 3, 12, 13]. While definitions of patient-centered care can vary significantly across medical professions and healthcare contexts [11, 12, 14], certain core elements are widely considered integral to patient-centered care. According to Mead and Bower [14], these elements can include the therapeutic alliance, doctor-as-person, shared power and responsibility, patient-as-person, and the biopsychosocial perspective (i.e., the whole person).
The therapeutic alliance, doctor-as-person, and shared power and responsibility focus on the patient-physician relationship. Within patient-centered care, doctors place value on the quality of their relationships with patients and believe that this relationship can influence medical adherence and self-efficacy (i.e., therapeutic alliance) [14, 15, 16]. Patients are considered experts in their own experiences and are provided with all the information necessary to make informed decisions regarding their care [3, 14, 17]. Therefore, doctors treat patients as equal partners in the decisions that affect their care (i.e., shared power) and not only provide information to patients but strive to do so in a way that is respectful, empathetic, caring, and sensitive to the experiences, beliefs, and concerns of patients (i.e., doctor-as-person) [14, 18].
Patient-as-person and the biopsychosocial perspective consider the personal meanings associated with symptoms, illnesses, and potential interventions (i.e., patient-as-person) and how these individual interpretations can interact with the biological, psychological, and social environments in which they occur (e.g., culture, the economy, etc.) [14]. Thus, an important consideration for doctors within patient-centered care is to take into account the values, beliefs, and preferences of patients and to design medical care around those preferences [3, 19]. While some scholars note that patients' preferences and abilities to remain actively involved in care vary [19], others highlight the importance of actively engaging patients to learn about their preferences and empowering them to participate in healthcare decisions (i.e., increasing levels of self-efficacy) [15, 16, 20]. While patient-centered care has been a staple in evaluating the quality of care for the last decade [1], there are uncertainties about how AI will affect patient-centered care and, more importantly, a lack of understanding regarding how patients feel about the use of AI in their own care.
Advancements in the use of AI in healthcare
AI has been called the ‘fourth industrial revolution’ [6, p. 1] and is anticipated to open a new frontier for the medical community [21]. Deep learning networks and machine learning algorithms can use data from medical records, clinical registries, and medical journals to anticipate potential patient outcomes [3, 22, 23]. While acknowledging the technical limitations of these tools, many have suggested that AI-enabled healthcare may help to increase equity in health outcomes, reduce diagnostic errors, improve treatment protocols, and even offset increasing labor shortages among health practitioners [10, 24, 25, 26, 27]. Some scholars even suggest that AI can improve patient autonomy and self-efficacy by providing patients with access to their data [28] or by suggesting treatment options chosen by patients with similar diagnoses [29].
While these potential outcomes are impressive, the rapid development of this technology is set to outpace physician knowledge in the near future and may even displace the work of some medical professionals [23]. While such advancements could improve the accuracy of diagnoses, they also raise ethical concerns for physicians and implementation concerns for patients. For physicians, advancing technology could bring ethical requirements to consult AI before making decisions [3], thereby limiting professional autonomy. It may also strain patient-physician relationships, as physicians may have to explain the rationale behind a diagnosis that they do not fully understand or did not make themselves [3].
For patients, some scholars have found concerns about how AI is designed and whether these systems are trustworthy [7, 30]. For example, a study by Hallowell and colleagues [30] on the use of AI to diagnose rare diseases found that patients were concerned about the accuracy of these tools and stressed the importance of using AI within a trusted patient-doctor relationship. Another study by Dlugatch and colleagues [7] found that, within a labor and delivery setting, birth mothers were concerned about the potential for bias within this technology, raising concerns over representativeness and private AI developers. Studies conducted outside of the U.S. context have also found that trust in and acceptance of AI may be higher in some specializations (such as dermatology) than in others (e.g., radiology and surgery) [31]. Moreover, evidence has suggested that patients are wary of AI insofar as they perceive it to threaten personal interactions with human practitioners [32]. Although limited to specific medical settings, these studies underscore conversations within the medical community about how to design ethical AI systems that can account for bias and the potential motives of developers while also integrating patients’ values [6, 7, 30, 33].
Despite emerging research on this topic, ethical guidelines and regulation of AI in medical settings have lagged behind advancing technology [21, 34], raising concerns within the medical community about how AI should be implemented in practice. This concern was highlighted by the former WHO Director-General, who stated, “As so often happens, the speed of technological advances has outpaced our ability to reflect these advances in sound public policies and address a number of ethical dilemmas” [21, para. 4]. Moreover, perspectives from key stakeholders, including public preferences regarding this technology, are often missing from these conversations [6, 7]. Without an understanding of public attitudes toward and acceptance of AI, real-life implementation may face challenges and threaten patient outcomes. This was noted by Yakar and colleagues [31], who found that little attention has been paid to public attitudes toward the deployment of “these systems into the practice of patient care” (p. 374). A more recent study conducted by the Pew Research Center (PRC) in February 2023 [35] found that most Americans report “significant discomfort… with the idea of AI being used in their own health care”. However, that study focused on only a generic and limited range of proposed AI applications.
In this study, we seek to build on work done by Pew [35] and others to better understand Americans’ perceptions of and attitudes toward AI-enabled healthcare, including their current levels of comfort with several proposed/active applications of AI for health-related tasks. We report results from a sample of 600 U.S.-based adults using an exploratory, mixed-methods approach. The results are discussed below in the context of patient-centered care in the hope that patient and public concerns regarding this technology will be better incorporated into medical standards.