
Artificial Intelligence in Psychiatry Has Promise and Peril

— It may help in assessing things like suicide risk, but its use raises informed consent issues

MedPage Today

Artificial intelligence (AI) has great potential for forensic psychiatry but can also bring moral hazard, said Richard Cockerill, MD, assistant professor of psychiatry at Northwestern University Feinberg School of Medicine in Chicago, Saturday at the American Academy of Psychiatry and the Law annual meeting.

He defined AI as computer algorithms that can be applied to specific tasks. There are two types of AI, Cockerill explained. The first type, "machine learning," involves having a computer use algorithms to perform tasks that were previously done only by humans. The second type, "deep learning," is when the computer -- using what it has learned previously -- trains itself to improve its algorithms, with little or no human supervision.
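To make that distinction concrete, here is a minimal, purely illustrative sketch in Python -- not drawn from Cockerill's talk, and using a synthetic dataset rather than clinical data -- contrasting a simple hand-specified model with a multi-layer neural network that adjusts its own internal parameters over many passes through the data.

```python
# Illustrative sketch only -- synthetic data, not clinical data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled dataset (e.g., case features plus a diagnosis).
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Machine learning" in the talk's sense: a model with a fixed, human-chosen form.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Deep learning": a multi-layer network that adjusts many internal weights over
# repeated iterations, with no hand-crafted rules about the task itself.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("logistic regression accuracy:", logreg.score(X_test, y_test))
print("neural network accuracy:     ", net.score(X_test, y_test))
```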

In a study involving 25,000 patients in the U.S. and the U.K., Scott McKinney, of Google Health in Palo Alto, California, and colleagues used deep learning to train an algorithm to recognize breast cancer on mammograms. The computer "didn't have any sort of preset ideas about what breast cancer is or isn't, but it just did millions of iterations of repeating these images," Cockerill said. "In this study, the algorithm eventually was able to comfortably outperform several human radiologists who were the comparators in the U.S. and the U.K. samples," with absolute reductions of 5.7% and 1.2% (U.S. and U.K., respectively) in false positives and 9.4% and 2.7% in false negatives.
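For readers less familiar with the terminology, the brief sketch below (a hypothetical illustration with toy numbers, not the McKinney study's code or data) shows how false-positive and false-negative rates are calculated from a model's predictions -- the quantities to which the reported absolute reductions refer.

```python
# Illustrative only: toy labels and predictions, not mammography data.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = cancer present
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])   # model's calls

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

false_positive_rate = fp / (fp + tn)   # healthy patients flagged as cancer
false_negative_rate = fn / (fn + tp)   # cancers the model missed

print(f"false-positive rate: {false_positive_rate:.1%}")
print(f"false-negative rate: {false_negative_rate:.1%}")
```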

"In this study, the AI was simply better at doing this task than the human comparators, and I think this really drills home what the power of this technology is already," he said. "And keeping in mind this process of ongoing, continuous self-improvement that these algorithms go through, you can project out 10 years from now ... where we might think these breast cancer algorithms will be. So I think that that really sets the stage to start looking more specifically in cases that might have more relevance for psychiatry."

One example is a 2020 study from Stanford University in California, in which researchers used electronic health records (EHR) to train a computer -- via deep learning -- to develop an early warning system for suicide risk among patients who had been hospitalized at one of three hospitals in a particular California health system.

The computer "was trained on older data to look and say, 'Okay, I'm going to build my own model for who the highest- and lowest-risk patients are,' and so it was able to build that out into these four groups that were risk-stratified," said Cockerill. "At the end of the study they looked back, and the highest-risk group had a relative risk of suicide of 59 times that of the lowest-risk group, which is a significantly better performance than we're used to with some of the more traditional suicide risk assessments." In addition, more than 10% of those in the highest-risk group did actually attempt suicide, he said.

As with the breast cancer study, the computer "doesn't know what suicide is; it doesn't know specific risk factors or the history of how those risk factors have been studied, but still is able to build these models based on looking at a subset of patient data, and then, looking forward, those models were quite reliable predictors of suicide," Cockerill said.

Another study, by Vincent Menger, PhD, of Utrecht University in the Netherlands, and colleagues, had computers use deep learning to analyze the EHR of 2,209 inpatients at two psychiatric institutions in the Netherlands and develop models to predict the risk of violence on the inpatient units. "In this case it was a numerical score rather than just risk-stratifying the patient," and the models were supposed to focus on violence that was relatively imminent, he said. "The area under the curve for these two algorithms ... were about 0.8 and 0.76, respectively, so again, pretty good validity here, and this is a pretty small dataset."
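The area-under-the-curve (AUC) figures cited above summarize how well a numerical risk score ranks patients who went on to be violent above those who did not, with 1.0 indicating perfect ranking and 0.5 indicating chance performance. The short sketch below uses invented scores and outcomes, not the Dutch EHR data, to show how such a figure is computed.

```python
# Invented risk scores and outcomes; only the metric computation is real.
import numpy as np
from sklearn.metrics import roc_auc_score

violence_occurred = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])  # 1 = violent incident
risk_score = np.array([0.1, 0.3, 0.8, 0.2, 0.35, 0.9, 0.4, 0.2, 0.7, 0.5])

auc = roc_auc_score(violence_occurred, risk_score)
print(f"AUC: {auc:.2f}")
```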

One of the other ways that AI is being used is called "predictive policing," in which police "target either neighborhoods or even certain individuals that are deemed to be at higher risk of recidivism" and try to intervene before anything happens -- reminiscent of the 2002 movie "Minority Report," Cockerill said, adding that there are "a lot of problems" with this idea. "You might think that you are deploying resources to a neighborhood and preventing crimes before they occur, when in reality, if you compare that to control [neighborhoods] where the technology isn't used, it may be that it actually increases arrests in those neighborhoods and increases incarceration rates."

The use of AI in psychiatry and law enforcement raises many ethical issues, he said. For example, how do you provide informed consent? "There's a huge knowledge differential" between providers and patients, and especially, "people who were in the criminal justice system might not even have the opportunity to understand fully what's going on, so I think this is a huge issue, and a difficult one," said Cockerill, adding that it also raises due process concerns, such as how someone could appeal a sentence if the main decision was made by an algorithm.

As the technology continues to evolve, AI initially "will be deployed as a tool in the forensic psychiatrist's toolbox" for things like suicide risk assessment, he said. "But I can't help thinking about ... What if, standing alone, it is a better predictor of suicide risk than clinical judgment or any other skills we have? Then we immediately run into this problem of, how do we overrule that as psychiatrists? And what are downstream hazards of that, both ethically and also legally? I imagine there will probably be some sort of legal challenges once these things are deployed."


Joyce Frieden oversees MedPage Today's Washington coverage, including stories about Congress, the White House, the Supreme Court, healthcare trade associations, and federal agencies. She has 35 years of experience covering health policy.