
Forget AI: What Do You Think, Doc?

— Patients don't seem satisfied with responses written by artificial intelligence


Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

"But what do you think I should do, Dr. Pelzman?"

Just a few weeks ago, I wrote about experimenting with an AI chatbot to generate responses to patient questions coming through the portal, particularly for one we've all been getting a lot of lately. The question that is much on the minds of our patients is, "Should I get the new RSV vaccine?"

As I described, using one of the online AI systems, I posed the question, gave it a few qualifiers, refined it a few times, adjusted the tone and reading level, and finally came up with what I thought was a fairly well-reasoned and coherent response laying out for patients the pros and cons of getting an RSV vaccine.

Over the next week or so, I tried sending it out whenever patients asked me this question through the portal (and believe me, we've all been getting dozens of these a day), each time with a qualifier explaining that this was something written by an artificial intelligence system in response to a question I had posed, then edited and approved by me as what I thought was fairly sound medical advice.

A few patients said, "Thank you very much." But the vast majority responded with another question -- the one I posed above. Essentially, "Thanks for this interesting discourse; it seems really well thought out, but actually I want to know what you think I should do, Dr. Pelzman."

The response that the AI system had generated was indeed well thought out. It included clear descriptions of the reasons for and against getting a new vaccine, what it has been shown to help with, some of the risks and unknowns, and why it might be beneficial for certain people, such as those with underlying lung disease, immunosuppression, or advanced age. I guess what was missing was the personal touch.

This seems to be what's missing every time we try to use some fancy new advanced technology to make our lives -- or the lives of our patients -- better. The hesitation, concerns, and suspicions exist on both sides: from the patients who are being given medical advice or having their tests (such as an x-ray or a mammogram) interpreted, and from the providers who are having the machines offload some of the work in our overwhelmingly busy clinical lives.

We providers have learned to be suspicious of folks who tell us they've invented some newfangled gizmo that's going to make our lives better, and we like to see the data. We want to see large randomized controlled trials (or at least a meta-analysis), proof of concept, assurance that nothing bad can happen to our patients as we relinquish control to a computer. We've never seen that go badly, now have we?

And clearly our patients feel that no cold-blooded machine, made up of ones and zeroes and electrical circuits and black boxes crunching data when no one really knows what they're doing in there, can ever fully replace the advice of their doctor, their nurse, their trusted healthcare provider. Sure, our relationships with our patients are probably not as close and cozy as they were in the days of the old neighborhood family practitioner who saw patients in his home, in the back room tucked in behind the dining room.

But after we've spent months and years together, building a healthcare relationship, we get to know the way our patients like their healthcare: what they like to be told about, the level of involvement, the amount of detail, the amount of personalization. Losing this puts us at moral risk, even if it doesn't put our patients at increased medical risk.

Sure, we can teach the system to read a mammogram, give it all the rules and exceptions and guardrails it needs to safely detect 100% of the cancers that are there, maybe even do it better than we can, and do it 24/7 and never get tired. But do the patients want to get a message from that machine, telling them that they have cancer?

At this point, skepticism runs so deep that for any system that functions autonomously, our patients are ultimately going to want us, the doctors, the human beings in the process, to lay final eyes on things and make the final decision. I can understand this; we know that the folks who created these systems release them for general use at the point where they feel they're ready for prime time, rarely asking those of us who are the actual end users.

How many times have you used one of those virtual assistants on a telephone answering system, with your insurance company or a retail site, and ended up frustrated that no matter what you said, it didn't really understand what you wanted, until you were mashing the buttons on your phone and saying, loudly and slowly, "SPEAK TO A LIVE REPRESENTATIVE" over and over again?

I really do believe that this new frontier of artificial intelligence in healthcare holds great promise for increasing efficiency and helping us prevent medical errors. But as always, we need to proceed with caution, with a little bit of suspicion, and take it all with a grain of salt.

Trust but verify. And then I'll tell you what I think.