As more people turn to artificial-intelligence chatbots for emotional support, Michigan researchers are questioning whether warning labels meant to reduce emotional dependence could actually backfire.
A team from Michigan State University and the University of Wisconsin–Milwaukee pointed to new state laws that require some AI companion platforms to remind users they're not interacting with a real person.
Celeste Campos-Castillo, an associate professor in Michigan State's Department of Media and Information, warned that vulnerable users may view chatbots as their only safe outlet, and that repeated reminders the bots aren't real could make those users feel worse.
"It's the loneliness epidemic and the mental-health epidemic among young people that is driving individuals to seek alternative sources of social and emotional support," she said.
Campos-Castillo said emotional dependence on chatbots appears to be especially common among young people. A report from Common Sense Media found that 33% of teens use AI companions for social interaction, including role-playing, friendship, or romantic relationships, a level of adoption experts describe as remarkable.
Data suggest many users are drawn to AI chatbots because they feel safer and more comfortable sharing personal struggles with artificial intelligence. Campos-Castillo explained why that might be.
"People enjoy talking to chatbots because they're non-judgmental," she said, "that people like having access to someone or at least something to talk to 24/7."
Because there is little evidence that such reminder warnings actually work, the researchers say, more study is needed before broad AI companion regulations are adopted.