Does ChatGPT really have a better bedside manner than a human doctor?

Robot doctor typing, as imagined by DALL-E/Copilot

I just finished reading Abraham Verghese’s 2009 bestselling novel Cutting for Stone. It’s a sort of extended love letter to the old-school practice of medicine. You know, the good old days, when physicians were in charge and were highly respected. I’m not sure everything about those old days was actually good. But I give Verghese credit for his strong portrayal of the humanist values and culture of patient care.

To Verghese, medicine is the blend of knowledge, skill, and care that doctors impart to their patients. His characters do crave science and technology, but his version of medicine isn’t defined by those things. Rather, it is the personal connection between doctor and patient that takes place during a hands-on, in-person encounter.

Dr. Jonathan Reisman’s latest NYTimes essay, entitled “I’m a doctor. ChatGPT’s bedside manner is better than mine,” offers a different, I would even say dystopian, perspective. Reisman’s essay was prompted by a recent research article suggesting that medical communications drafted by ChatGPT are more empathetic than comparable communications drafted by human doctors.

Reisman’s theory is that being compassionate and considerate with patients requires certain heuristics (he calls them scripts) that doctors don’t always follow. For example, when delivering bad news, don’t “clobber” patients with it, but do get to the point quickly. Also, favor plain language over technical terms (“cancer” rather than “malignancy”), and ask the patient what they understand. All of which makes intuitive sense, and I also don’t find it surprising that ChatGPT is able to follow those scripts, in some cases better than busy doctors.

What bothered me in Reisman’s essay was his conclusion that his job security is at stake. At least, I really hope that he’s wrong about this. Because I’m highly confident that in most cases, patients want human doctors taking care of them, not AI bots.

The JAMA Internal Medicine article that inspired Reisman’s essay also caught the attention of a lot of other people, including me. As you might imagine, it was the source of juicy headlines on some healthcare blogs. But what the article actually reported was far more modest than the headlines implied. Honestly, I’m surprised that such a prominent journal deemed it worthy of publication. What the authors did was to download patient questions from the Reddit “AskDocs” forum along with answers posted back to the forum by verified physicians. The researchers then prompted ChatGPT with those same questions to obtain a second set of answers. A team of experienced medical professionals then rated all of the answers on both information quality and empathy. On average, ChatGPT scored higher than the physicians on quality, and much higher on empathy.

One limitation of this study is that there’s no basis for concluding that medical answers provided on Reddit are representative of answers provided to patients in real clinical settings. How many doctors actually have time to hang out on Reddit, giving free medical advice? My guess is that the few who do give more rushed responses than they would if they were hearing the same questions from a human patient sitting opposite them in an exam room, a patient with whom they have an actual doctor-patient relationship. Also: doctors who take the time to communicate effectively and empathetically with their patients probably don’t have spare time to spend on Reddit.

But even if we accept the study’s conclusions, it leaves out something critical. In high-stakes situations (and much of healthcare is high stakes), most of us want a human doctor who has actual empathy, not a robot that’s good at faking empathy. Reisman says that there are “linguistic formulas for empathy and compassion,” but I think that’s not quite correct. Yes, there are linguistic formulas for communicating empathy and compassion, but empathy and compassion themselves are deep feelings that an AI bot is incapable of experiencing. And ultimately, no matter how much developers use anthropomorphism to create the illusion of humanness, such as with Elon Musk’s silly humanoid robots, or ChatGPT’s annoying one-character-at-a-time-with-random-intervals-and-a-leading-cursor output, we still know that these aren’t real human beings with real feelings and real capacity for human connection.

I have faith that when it comes to the most important issues in our lives, including our health, most of us are closer to Abraham Verghese than to Elon Musk or Sam Altman. We want to be able to talk with another human (ideally an expert one). And ChatGPT will never have a better “bedside manner” than human doctors, or any bedside manner at all, because bedside manner is a fundamentally human function.
