Lincoln experts warn against using AI for mental health care
LINCOLN, Neb. (KLKN) — Artificial intelligence is becoming part of our daily lives, and some teenagers are turning to chatbots for emotional support.
On Tuesday, OpenAI promised it will roll out guardrails to protect teens and others in mental distress by the end of 2025.
This comes after reports of people taking their own lives after consulting AI chatbots.
Some teens are using the bots as a substitute for therapy, and it’s raising concerns among mental health experts.
Paul Weitzel, an associate professor at the Nebraska College of Law, said the chatbots can help fill the mental health care gap.
“At the same time, there is a danger because these things aren’t trained to be a good therapist,” he said.
Psychologist Dr. Stacy Waldron with Bryan Health said at first, teens were just asking generic questions of AI, but now they're going deeper.
“Now we have a problem where they were coming and asking about issues like, ‘I’m feeling sad, I’m feeling depressed,’ and they started giving advice,” she said.
Waldron said AI chatbots are providing bad information.
“Sometimes when they’re asking, ‘What do I do?’ they’re given instructions on how to harm themselves,” she said.
And Waldron said bots cannot read people like a professional can.
Weitzel said AI can be a starting point, but not the solution.
“Some of them have put in place, like keywords that it will catch,” he said. “Like if you say the word suicide, it will bring up a little box that says, ‘There are resources, and you should get help,’ and it will stop the conversation. But others do not.”
Weitzel and Waldron said there’s also a privacy concern.
Chat data is often stored and may not be covered by the confidentiality protections that apply to licensed care.
Weitzel said the fact that people are turning to chatbots shows the need for mental health services.
“I think if these bots could help parents, then it would be a good thing, but I do think there’s just a lot of people who aren’t getting help,” he said.