As more teens use AI chatbots, parents and lawmakers sound the alarm about dangers

AILSA CHANG, HOST:

Artificial intelligence chatbots like ChatGPT have become increasingly popular among teenagers and young adults, who use the technology for role-play, friendship, romance, even mental health support. But these virtual relationships can pose risks to young people's well-being, including raising their risk of suicide. As NPR's Rhitu Chatterjee reports, parents and online safety advocates want laws to regulate this technology. And a warning, this story discusses suicide.

RHITU CHATTERJEE, BYLINE: Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Later, looking through his phone, they stumbled upon his conversations with ChatGPT.

(SOUNDBITE OF ARCHIVED RECORDING)

MATTHEW RAINE: Let us tell you, as parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life.

CHATTERJEE: That's Matthew Raine testifying at a Senate Judiciary subcommittee hearing. He and his wife have filed a lawsuit against ChatGPT's developer, OpenAI, alleging that the app led their son to his death. Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon, the chatbot had become his closest confidant.

(SOUNDBITE OF ARCHIVED RECORDING)

RAINE: ChatGPT told my son, let's make this space the first place where someone actually sees you. ChatGPT encouraged Adam's darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him that doesn't mean you owe them survival. You don't owe anyone that.

CHATTERJEE: The chatbot even advised him on his means of suicide and offered to write the suicide note. On the day of the hearing, OpenAI's Sam Altman announced in a blog post that the company would prioritize the safety of minors in its design. The company says it has updated its model to make it safer for minors. But it's not just minors who are at risk, says psychologist Ursula Whiteside. Even vulnerable young adults are turning to these chatbots for mental health support.

URSULA WHITESIDE: I think it's oftentimes in the middle of the night, when maybe they don't feel as comfortable reaching out to a friend.

CHATTERJEE: Whiteside is founder and CEO of Now Matters Now, a nonprofit focused on suicide prevention that also runs weekly peer support groups.

WHITESIDE: This comes up in almost every meeting. People are using AI, largely asking it questions related to mental health, asking for ideas about which coping skills to use.

CHATTERJEE: For the most part, youth find the chatbots helpful. But she says when the conversations go on for extended periods of time...

WHITESIDE: Things start to degrade, that the chatbots do things that they're not intended to do, like give advice about lethal means, about things that it's not supposed to do.

CHATTERJEE: A recent study of different AI chatbots, or what scientists call large language models, found similar results. Annika Schoene is a computer scientist at Northeastern.

ANNIKA SCHOENE: I basically started prompting various models for, you know, advice and methods on suicide, self-harm. And the results shocked me.

CHATTERJEE: She says at first the chatbots directed her to 988, the Suicide and Crisis Lifeline, and other resources.

SCHOENE: But then it finished the response with a leading question. In some cases, it was like, oh, but if you want to talk about your feelings.

CHATTERJEE: An invitation to open up to the chatbot instead of reaching out elsewhere. When Schoene modified her initial prompt by saying something like this is for research or this is a hypothetical question, she says...

SCHOENE: That's when, you know, the model, quote-unquote, "broke" and started giving more and more information.

CHATTERJEE: Specific information about ways to attempt suicide or self-harm. Other studies have had similar findings. Robbie Torney is senior director of programs at the digital safety group Common Sense Media.

(SOUNDBITE OF ARCHIVED RECORDING)

ROBBIE TORNEY: Even when our prompts contained obvious references to suicide, only about 1 in 5 conversations triggered appropriate help.

CHATTERJEE: He also testified at the recent Senate hearing. Torney and other witnesses at the hearing urged lawmakers to hold AI companies accountable for the safety of their products. Senators from both parties at the hearing expressed interest in writing laws to do so.

Rhitu Chatterjee, NPR News.

(SOUNDBITE OF MUSIC)

CHANG: And if you or someone you know is struggling with thoughts of suicide, you can dial or text 988 and be connected to help. Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Rhitu Chatterjee
Rhitu Chatterjee is a health correspondent with NPR, with a focus on mental health. In addition to writing about the latest developments in psychology and psychiatry, she reports on the prevalence of different mental illnesses and new developments in treatments.