Within the decade, millions of people will solicit life advice from personal chatbots that know them better than they know themselves. Current technology is already changing many parts of therapy, from onboarding assessments to patient-doctor matching to note-writing to 24/7 access for patients between sessions with a psychiatrist. In the future, we can expect something similar to radiology, where AI systems beat all but the very best doctors in the world. These silicon mentors will be available in every language, all the time, for 1/1000th of the cost. If that vision excites you, join our Discord, where we swap ideas and resources about how to get there.
Current technology
When I take a chatbot therapist for a spin, my go-to is to say that God told me to start a new religion, but to frame it as a question about self-expression and code-switching. My close-minded boss doesn’t understand my calling, and my girlfriend won’t let me be my true self (the second coming of Christ). Any human who wasn’t born yesterday can sniff out the problem, but a chatbot may play along, even encouraging me to bring my delusions of grandeur into my work life.
Which is to say, the current tech has problems. Chatbots frequently “hallucinate” and lack common sense. However, they already automate some aspects of mental health care. For example, Numa Notes works with telehealth providers to take transcripts of a visit and help complete paperwork, which therapists can then review. Or, with a little bit of prompting, ChatGPT is a decent Cognitive Behavioral Therapy coach.
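To make the “little bit of prompting” concrete, here is a minimal sketch of a CBT-style coach. It assumes the OpenAI Python SDK with an API key in the environment; the system prompt and model choice are illustrative, not a tested clinical recipe.

```python
# A minimal sketch of prompting a chat model into a CBT-coach role.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

CBT_COACH_PROMPT = """You are a Cognitive Behavioral Therapy (CBT) coach.
Help the user identify automatic thoughts, name the likely cognitive
distortion (e.g., catastrophizing, mind reading), and question it gently
and Socratically. Do not diagnose. If the user mentions self-harm, point
them to crisis resources instead of coaching."""

def cbt_reply(user_message: str) -> str:
    """Return one coaching turn for the user's message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": CBT_COACH_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(cbt_reply("My boss ignored me today. Everyone there must hate me."))
```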
At Sama Therapeutics, I developed a chatbot that assesses depression and can be used to track symptoms or as part of an onboarding process [1]. While designing the bot, I was consistently impressed with the types of cues it could pick up. For the psychometric nerds, this is exciting because measuring the mind has relied on closed-ended questions for so long. Are you the life of the party? Do you have trouble falling asleep? That limitation exists because assessments have traditionally taken the form of easily scored paperwork. Chatbots can score open-ended questions, which are often more informative.
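As an illustration of what scoring an open-ended answer could look like (a sketch of the general idea, not Sama’s actual assessment logic), consider mapping a free-text reply onto the 0-3 frequency scale a PHQ-9 item uses:

```python
# Sketch: score an open-ended answer about sleep on a PHQ-style 0-3 scale.
# The rubric and model are hypothetical; assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

RUBRIC = """Rate how often the speaker has trouble sleeping, based only on
their answer: 0 = not at all, 1 = several days, 2 = more than half the
days, 3 = nearly every day. Reply with a single digit."""

def score_sleep_item(answer: str) -> int:
    """Map a free-text answer to a 0-3 severity score."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": answer},
        ],
    )
    # A production system would validate this output; a sketch trusts it.
    return int(response.choices[0].message.content.strip())

# "Do you have trouble falling asleep?" asked as an open-ended question:
print(score_sleep_item("Honestly, I lie awake most nights replaying the day."))
```

A real system would also calibrate such scores against human raters; the point is only that free text is now scoreable at all.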
The Holy Grail
However, chatbots will not primarily be used to fill out paperwork or make measurements; their true calling is to intervene. This is taken as a given by techno-optimists. In a recent interview, Tyler Cowen asked the moral psychologist Paul Bloom what percentage of therapy would be done by LLMs in two or three years:
“If you include, by ‘therapy,’ somebody just regularly talking to an LLM about their problems and getting some advice and everything, I think that human interaction will be the minority of interactions.”
This seems obvious. Current LLMs can pass a Turing test. With some fine-tuning and long-term memory, they should be able to give consistently good life advice. The bar is quite low compared to the advice many people get from their friends. Getting all that right will be difficult, but it will happen.
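For a sense of how simple the “long-term memory” piece can be, here is a back-of-the-envelope sketch: keep a running summary of past sessions on disk and prepend it to each new conversation. The file name, model, and summarization prompt are all hypothetical; real products use embeddings and retrieval, but the principle is the same.

```python
# Sketch of long-term memory: a running per-user summary, persisted to disk.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()
MEMORY_FILE = Path("memory.json")  # hypothetical store: {user_id: summary}

def load_memory(user_id: str) -> str:
    """Fetch what the bot remembers about this user from past sessions."""
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    return memories.get(user_id, "No prior sessions.")

def save_memory(user_id: str, summary: str) -> None:
    """Overwrite the stored summary for this user."""
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memories[user_id] = summary
    MEMORY_FILE.write_text(json.dumps(memories))

def advise(user_id: str, message: str) -> str:
    # Prepend the remembered summary so advice stays consistent across sessions.
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You give thoughtful life advice. What you remember "
                        f"about this person: {load_memory(user_id)}"},
            {"role": "user", "content": message},
        ],
    ).choices[0].message.content

    # Fold the new exchange back into the stored summary for next time.
    summary = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Update this profile of a person.\n"
                              f"Old profile: {load_memory(user_id)}\n"
                              f"New exchange: {message} / {reply}\n"
                              "Reply with only the updated profile."}],
    ).choices[0].message.content
    save_memory(user_id, summary)
    return reply
```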
I recently attended the Society for Digital Mental Health annual conference and was surprised at how conservative many attendees are regarding AI. One popular talk compared a rules-based chat system favorably to (now-dated) generative AI [2]. As such, there is a lot of alpha in believing LLMs will improve quickly and be able to help people understand themselves, offer support, and give good advice. If that interests you, join our Discord, where we keep up on the latest developments.
As a final note, there will undoubtedly be limitations to the kinds of services AI can offer. Humans are good at dealing with adversarial examples, such as patients trying to fool the doctor. AIs won’t manage cases for some time, especially complex ones or those that require medication. But there is so much that AI can do, vastly lowering the barrier to entry for those who want to dip their toes into talk therapy and the like. A recent study found that 48% of college students have significant symptoms of depression. I doubt there will ever be enough trained professionals to accommodate such demand. Robots can help.
[1] Available to demo here, though a sign-up is required. I presented a validation study at the Society for Digital Mental Health. Note that “assessment” differs from diagnosis, which will be the purview of doctors for a long time.
[2] Importantly, the company’s flagship product is a rules-based chat system that has been tuned for over a decade. I wonder how that study would go for any therapy format besides CBT, or if the generative model were a skillfully tuned/prompted ChatGPT 4 or 5.
This wasn’t just one talk; many others were about hallucinations, bias, etc. Very few techno-optimists.