New AI Model Assesses Mental Health by Analyzing Texting Behaviors

Adding to the mounting evidence that technology-based mental health care is transforming treatment, an AI model developed by a team at the University of Washington’s medical school can sift through everyday text messages to spot potential signs of mental illness. In the study, this natural language processing model matched human experts in identifying red-flag cognitive distortions that indicate declining mental health, and it outperformed rival automated approaches.

With results published in Psychiatric Services, the study involved researchers training the AI to categorize texting behavior with respect to mental health, looking in particular for troublesome patterns such as mental filtering, jumping to conclusions, catastrophizing, overgeneralizing, and unhealthy language choices like “should” statements. The model was tested on three months’ worth of unprompted texts from just under 40 patients, messages that had been annotated by a panel of human experts. Across more than 7,300 messages, the AI quantified distortions at a rate and accuracy that closely matched the panel’s evaluations, and it far surpassed rival automated models in these areas.
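To make the idea of flagging distorted language concrete, here is a deliberately minimal sketch of one check the article mentions: detecting “should” statements. This is an illustrative assumption, not the study’s method; the researchers used a trained NLP model, whereas this example uses simple keyword matching, and the function and pattern names are invented.

```python
import re

# Hypothetical keyword check for "should"-style statements, one of the
# unhealthy language patterns named in the article. The study's actual
# model is a trained NLP classifier, not a regex.
SHOULD_PATTERN = re.compile(r"\b(should|shouldn't|must|ought to)\b", re.IGNORECASE)

def flag_should_statements(messages):
    """Return the subset of messages containing a 'should'-style statement."""
    return [m for m in messages if SHOULD_PATTERN.search(m)]

texts = [
    "I should have known better.",
    "Want to grab lunch tomorrow?",
    "You must never make mistakes.",
]
flagged = flag_should_statements(texts)
print(len(flagged))  # → 2
```

A production system would instead score each message across all the distortion categories (mental filtering, catastrophizing, and so on) with a statistical model, which is what allowed the study’s AI to approximate the human panel’s annotations.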


“When we're meeting with people in person, we have all these different contexts,” said Dr. Justin Tauscher, an acting assistant professor at UW Medicine and the lead author of the study. “We have visual cues, we have auditory cues, things that don’t come out in a text message. Those are things we’re trained to lean on. The hope here is that technology can provide an extra tool for clinicians to expand the information they lean on to make clinical decisions.” The new model could do much to lighten the load of overworked mental health professionals, as well as supplement the efforts of clinicians who are not trained to screen for these distortions.