
AI: Could It Answer Patient Emails, or (Legitimately) Excuse You From Jury Duty?

— Answering a jury summons gave me time to think about how artificial intelligence could help us

A photo of the New York County Supreme Courthouse in Manhattan, New York.

    Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

I spent a great part of this past week ensconced in a waiting room in the New York County Supreme Court building off Foley Square in Manhattan.

Not a place where anyone really wants to spend a lot of time, although I guess a lot of lawyers and judges are there by their own choosing. For us general citizens, that thick red-and-black-and-white envelope with the perforated edges containing our summons to appear as potential jurors arrives every 4 or 5 years, accompanied by a sinking feeling in our stomachs. Sure, you can postpone it a couple of times, but eventually almost all of us have to go downtown and dip into that pool.

Many years ago, the court system took away the "general medical" excuse, back when patients would say to us, "I don't want to serve on jury duty because (insert reason here)," and we would write a generic letter saying they didn't have to. But the courts have gotten much stricter, so now most people -- even those with significant chronic medical problems -- are required to serve.

The Long Wait

Several hundred of us arrived early in the morning, and we queued up outside on the street, then queued up at the top of the steps, then queued up again for the metal detectors, then filed into an enormous waiting room on the 4th floor. Then it was more paperwork to fill out, forms and questionnaires, videos to watch on the role of a juror, more rules and background information, and even an actually informative video about checking your implicit bias.

But then there was a lot of waiting. We were waiting for the lawyers and judges to get ready for us; eventually they were ready to call a panel of people to voir dire -- to decide if we were suited for the particular case they were planning on trying. In groups of 30 to 60 people we were called down to a courtroom, once again lined up in the hallway outside, then seated in the jury box and the overflow chairs usually reserved for the public and the press when a case is being tried.

Then there was a lot more waiting while the lawyers read over our information forms and culled through the potential jurors, and then there was hour after hour of questioning. And after all of that, I ended up not being put on a jury.

As you know, I've been thinking a lot lately about how new technologies like artificial intelligence can be incorporated into healthcare to improve the lives of patients and doctors. It seems that everywhere, people are trying out new things, new tools to help decrease administrative burden, deal with insurance companies, handle routine messages, and manage the background, not-really-medical nuts and bolts that overwhelm our lives.

I know there's a lot of resistance out there, and that many are afraid we're going to be replaced, that a good chunk of our jobs may ultimately prove superfluous once a machine can do them better than we ever could. Comparing this to the judicial process, it feels like much of what we went through down at the courthouse, waiting to be picked, could in fact be handled in a much better way, without the literally thousands of hours of human time spent sitting and waiting and listening and doing not much of anything.

I'm sure that lawyers and judges are just as resistant as we doctors are to being replaced by artificial intelligence, and I'm sure I don't know anywhere near enough about what they do in their jobs, the fine points of the art of the legal profession, to know whether this will ever be possible. But it feels like so many of the hours they spent collecting information from us, and then the hours we waited around, could have been better spent, and the endless questioning could have been handled by someone (something?) other than a lawyer making $600 an hour. I'm pretty sure we're a long way from a jury of your peers being a computer that listens to the evidence and interprets the law, but I wonder if there is a system out there that could more effectively and efficiently weed out those who may not be coming to that jury room with an open mind.

In the past, I have been excused during the voir dire process because of my profession, once apparently because of my hobbies, often because I have served as an expert witness for the district attorney on medical cases, and several times because a particular case I was being considered for had a medical aspect to it. Perhaps much of the screening process could be set up to eliminate these things right off the bat.

Making Messaging Easier

In a recent article in the New York Times, the author wrote about a system that provides answers to patients' portal messages to alleviate the onslaught of messages filling our in-baskets every day, an onslaught that is leading to burnout and dissatisfaction among physicians.

In another article in the New York Times this weekend, a physician wrote that an artificial intelligence system's responses were more empathic than his own, and that it was able to follow the formula we use for things like breaking bad news to create something that would be just as good as we are at telling a patient that they have cancer, or some other serious life-threatening illness. Most of us, either as doctors or as patients, are probably unwilling to have the latter task turned over to a machine, and only partially ready to relinquish our daily messaging duties.

While both of these areas hold promise, and I think we should be able to incorporate much of this into our professional lives, I don't think any of us are ready to release to a computer the full responsibility of taking care of patients -- just as none of us would want, at this point in time, to be tried and convicted by a computer. But my hope is that as these systems become more fully fleshed out, we, the doctors and nurses and others in healthcare who are on the front lines taking care of patients, will be an integral part of designing and refining what happens when a computer interacts with a patient.

I've always said I never want a computer answering my messages and talking to my patients, but I think it's a great idea to have a computer take a first pass at a response, and then let me edit it and revise it and put it in my own voice. And I hope that the designers of these systems will use this feedback in an active way, to make the next response to a message better.
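To make that draft-and-edit idea concrete, here is a minimal Python sketch of what such a loop could look like. It is purely illustrative: the `draft_reply` function is a hypothetical stand-in for whatever model a portal vendor might use, and the feedback log simply shows one way a physician's edits could be captured for the system's designers to learn from.

```python
from dataclasses import dataclass
from difflib import unified_diff


@dataclass
class MessageFeedback:
    """One round of the draft-then-edit loop: the patient's message, the AI's draft, and my final version."""
    patient_message: str
    ai_draft: str
    physician_final: str

    def edits(self) -> str:
        # The diff between draft and final is the raw material designers could learn from.
        return "\n".join(
            unified_diff(
                self.ai_draft.splitlines(),
                self.physician_final.splitlines(),
                lineterm="",
            )
        )


def draft_reply(patient_message: str) -> str:
    """Hypothetical stand-in for the AI system's first pass at a portal message."""
    return (
        "Thank you for your message. Based on what you describe, "
        "please continue your current medication and let us know if your symptoms worsen."
    )


feedback_log: list[MessageFeedback] = []

patient_message = "My blood pressure readings at home have been higher this week."
draft = draft_reply(patient_message)

# The physician edits the draft into their own voice before anything is sent.
final = (
    "Thanks for checking in. Home readings can bounce around, but since they've been "
    "running higher, let's recheck in the office next week. Keep taking your medication as prescribed."
)

feedback_log.append(MessageFeedback(patient_message, draft, final))
print(feedback_log[-1].edits())
```

The point of keeping the diff, rather than just the final message, is that it records exactly where the machine's voice and mine diverge, which is the feedback I would want the designers to use.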

AI's Ability to Learn

Right now, I can open up one of these AI tools on my desktop, and ask it to rewrite the opening act of Romeo and Juliet by William Shakespeare in the style of Eminem, and while it may sit there and churn for a few seconds, eventually something not that far off and not that bad will come scrolling back out.

Hopefully, as these digital assistants get better, they can learn from our past writing, our past responses, the style we've written our medical record notes in for the past 10, 20, 30 years, to more accurately reflect who we are and how we think and communicate.
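One way to picture that kind of style learning is a few-shot prompt that shows the model a handful of a physician's past replies as examples of their voice. The short Python sketch below is an assumption on my part, not a description of any existing product; the prompt layout and the idea of harvesting examples from old notes and messages are purely illustrative.

```python
# Illustrative only: assemble a few-shot prompt from a physician's past replies so a
# model has concrete examples of that doctor's voice to imitate.

past_replies = [
    "Glad the cough is improving. Finish the full course of the antibiotic, and call us if the fever returns.",
    "Your labs look stable compared with last year. Keep up the walking, and we'll recheck at your annual visit.",
]


def build_style_prompt(past_examples: list[str], new_patient_message: str) -> str:
    """Build a prompt that shows the model how this particular physician actually writes."""
    lines = ["You are drafting a reply in the style of the examples below.", ""]
    for i, example in enumerate(past_examples, start=1):
        lines.append(f"Example reply {i}: {example}")
    lines.append("")
    lines.append(f"Patient message: {new_patient_message}")
    lines.append("Draft reply:")
    return "\n".join(lines)


print(build_style_prompt(past_replies, "I've had a mild headache for two days. Should I be worried?"))
```

Swap those two examples for decades of real notes and replies, and the same idea starts to look like an assistant that might actually sound like us.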

I don't think you can ever completely embody the essence of what it means to be a doctor in a computer program, just as these programs are unlikely to completely replace a lawyer or a judge. But if we're willing to let these programs work alongside us, to grow and evolve, and to listen and get feedback, then maybe they have a place in our lives.