What do patients think about AI being used in their healthcare?

As AI researchers and health systems strive to implement AI tools into clinical practice in a safe and ethical way, our research team brought this conversation to the most important stakeholders: the patients.


Over the past few years, the idea of AI in medicine has grown from the fantastical image of a robo-doctor into a core component of most academic medical centers’ strategic plans. This perfect storm of ultra-powerful data-processing technology and vast amounts of existing healthcare data has spurred major partnerships between health institutions and industry and attracted significant excitement from physicians and data scientists alike. Accompanying the enthusiasm, however, has been a healthy amount of conversation about the ethical implications for the role of the physician, patient privacy, and data bias, among other issues. To answer the question “what is ethical healthcare AI, and how do we implement it?” we need not only thought leaders but also corresponding empirical ethics research. In our paper, “Patient apprehensions about the use of artificial intelligence in healthcare,” we present results from a broad patient engagement study designed to surface the patient perspectives and concerns crucial to ethical AI development and implementation. Our study design operates from the belief that health systems have a moral imperative to respect the concerns of the patients they serve, and from the recognition that patient engagement is a crucial step in the successful and ethical implementation of new healthcare technologies.

Our research team chose a focus group methodology to capture not only the perspectives of individual patients, but also the way that patients discuss these technologies, change or defend their opinions, and make assumptions about other members of the focus group. Because we wanted to make these topics more approachable, we designed six case examples of diverse ways AI may be used in healthcare to anchor our discussions. While we had a few suspicions about what patients would say, the lack of existing patient engagement research meant we went into these focus groups with very little idea of how patients would understand or respond to the questions we posed. Taking part in these conversations with patient groups in several locations, we were consistently humbled and impressed by the thoughtfulness and reflection that patients brought to these discussions. They shared vulnerable stories of being both hurt and saved by healthcare institutions, and they challenged many of the ideas we had as a bioethics research group about the role and complications of AI in healthcare.

Overall, we found that patients were very excited about the potential for AI technologies to improve healthcare, and saw a role for AI in research, direct patient care, and systems level applications. However, this enthusiasm was tempered by several key concerns that patients had about ways that this technology could harm them or those they care about. These concerns were:

  • Would clinicians be appropriately empowered to prevent possible harms of AI (including mistakes or malfunctions) and to protect patients from them?
  • Would patients know when AI was used and have the ability to choose either to allow the AI in their care or to disagree with or go against the AI’s recommendations?
  • Would AI make healthcare even more expensive, whether through research and development costs or through changes to the way insurance reimbursement works?
  • Is the data that is being used to design these AI tools of sufficiently high quality, and is it broad and diverse enough to help a variety of patient populations?
  • What are the inherent risks associated with healthcare becoming more dependent on high-level technology? Are we setting ourselves up for massive problems if the system crashes or is hacked?

One thing we found particularly interesting was that the way patients approached the evaluation of AI in healthcare differed significantly from that of academic bioethics thinkers. The academic literature has thus far framed the ethical issues of healthcare AI in abstract or systems-level contexts. Patients, on the other hand, took the abstract and futuristic idea of healthcare AI and contextualized it in their lived reality, focusing on how this technology could directly harm them or other patients. This difference reiterates the importance of patient engagement research: it introduces perspectives and frameworks that would otherwise be missing from the conversation.

Another interesting aspect of our results was that some of them aligned with recurring ethical concerns from conversations about other emerging health technologies, such as apprehensions about discrimination and cost. Others, however, were new and specific to AI, such as negotiating the role of the physician when an AI can do traditional aspects of their job as well as (or better than) they can. These insights changed the way our team thought about what does (and does not) make AI exceptional, and how we discussed the ways AI could change healthcare.

In sum, this study gave our research team the opportunity to have fascinating conversations with our participants and with each other about what patients expect of a healthcare interaction or institution, and how AI may ultimately change those expectations. We are very excited to open this discussion to other colleagues through this first paper from our focus group data set, and we hope to engage in conversation about the themes reported here and about the role of patient engagement in healthcare AI more broadly. We believe that engaging patients as early and as deeply as possible is crucial to ensuring that the AI tools being developed are both ethical and successful, and we look forward to a future of rich empirical bioethics research that accompanies the exciting technological developments of AI in healthcare.

Jordan P Richardson

MD Candidate, Mayo Clinic Alix School of Medicine