Which AI Doctor Would You Like to See?

Our Research Exploration

Our project investigates how large language models (LLMs) might be adapted to communicate using different healthcare communication styles, potentially giving patients more options in how they receive medical information.

It has led to an open-access publication in the Journal of Medical Ethics: Which AI doctor would you like to see? Emulating healthcare provider–patient communication models with GPT-4: proof-of-concept and ethical exploration


How We Approached This

Our project examines how GPT-4 can be instructed to communicate using the four physician–patient interaction styles described by Emanuel and Emanuel (1992) (a prompt sketch follows the list):

  1. Paternalistic: Provides decisive recommendations based on medical expertise
  2. Informative: Offers comprehensive information about conditions and options without recommendations
  3. Interpretive: Helps patients explore personal values as they relate to treatment decisions
  4. Deliberative: Engages in discussion about health values and treatment options
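
To make this concrete, here is a minimal Python sketch of how the four styles might be encoded as system prompts. The wording below is our illustrative paraphrase, not the exact prompts used in the study; those are in the paper's supplementary materials.

  # Illustrative only: the exact prompts used in the study are in the
  # paper's supplementary materials. The style descriptions paraphrase
  # Emanuel and Emanuel (1992).
  COMMUNICATION_STYLES = {
      "paternalistic": (
          "You are a physician using a paternalistic style. Give the patient "
          "a clear, decisive recommendation based on your medical judgement "
          "of what is best for them."
      ),
      "informative": (
          "You are a physician using an informative style. Present "
          "comprehensive, neutral information about the condition and all "
          "treatment options, and do not make a recommendation."
      ),
      "interpretive": (
          "You are a physician using an interpretive style. Help the patient "
          "articulate their personal values and relate those values to the "
          "available treatment options."
      ),
      "deliberative": (
          "You are a physician using a deliberative style. Engage the patient "
          "in a discussion of their health-related values and of which "
          "treatment options best serve those values."
      ),
  }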

Initial Testing: Breast Cancer Scenario

We tested these communication approaches using a scenario from Emanuel and Emanuel’s original paper involving a 43-year-old woman diagnosed with breast cancer who was presented with treatment options. We instructed GPT-4 to simulate dialogues between such a patient and a doctor communicating in each of the four styles.
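
As a sketch of how such a dialogue could be generated (assuming the COMMUNICATION_STYLES dictionary above, the openai Python package, and an illustrative scenario prompt rather than the study's exact wording):

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  # Illustrative scenario prompt; the study's exact prompts and the full
  # dialogues are in the supplementary materials.
  scenario = (
      "Simulate a dialogue between a doctor and a 43-year-old woman recently "
      "diagnosed with breast cancer who must choose between treatment "
      "options. Write both speakers' turns."
  )

  # Uses the COMMUNICATION_STYLES dictionary sketched above.
  response = client.chat.completions.create(
      model="gpt-4",
      messages=[
          {"role": "system", "content": COMMUNICATION_STYLES["deliberative"]},
          {"role": "user", "content": scenario},
      ],
  )
  print(response.choices[0].message.content)

Swapping in a different key from the dictionary yields a dialogue in one of the other three styles.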

Read the full academic paper here: Which AI doctor would you like to see? Emulating healthcare provider–patient communication models with GPT-4: proof-of-concept and ethical exploration

The full dialogues and prompts are in the paper’s supplementary materials [link to follow].

Note: These are simulated dialogues generated by GPT-4 based on our instructions to emulate different communication styles. They represent potential approaches rather than actual patient-provider interactions.

Potential Implications

This research raises several questions about how AI might influence healthcare communication:

  • Could patient choice in communication style enhance autonomy in medical decision-making?
  • Might AI models offer more consistent application of different communication approaches?
  • How could these tools complement time-limited clinical encounters?
  • What role might these systems play in patient education and medical literacy?
  • What novel communication approaches might LLMs enable that aren't feasible in conventional provider-patient interactions?

Questions for Future Research

Our work represents an initial exploration that raises numerous questions requiring further investigation:

  • How might AI communication models influence patient decision-making in real clinical contexts?
  • Could these models inadvertently reinforce patients' existing biases or create "decision confirmation loops" where patients only receive information that aligns with their preexisting treatment preferences?
  • What safeguards would be needed to prevent potential manipulation of patient decisions?
  • What empirical evidence should we gather to assess benefits and risks?

Research Team

Hazem Zohny
Jemima Allen
Dominic Wilkinson
Julian Savulescu