Teens confess their true feelings through AI chatbots

Whether their parents realize it or not, nearly two-thirds of American teenagers say they use artificial intelligence chatbots for tasks including homework help, research, video creation, entertainment, casual conversation, and emotional support or advice, according to a new study from the Pew Research Center.

The study, which surveyed 1,458 US teenagers and their parents last fall, found that young participants had weighed the complex trade-offs of using AI. About a third of respondents said AI will have a positive impact on society over the next two decades, while a quarter believe it will have a negative one.

Optimistic survey participants believe AI will lead to gains in efficiency, productivity, and learning. Less optimistic respondents pointed to the risk of over-reliance on AI, the loss of jobs and creativity, and the threat of no longer being able to discern what is real and what is AI-generated.


“It will meet almost every need,” said one anonymous male survey respondent. “Answers to the toughest questions. No research required!”

A skeptical female respondent saw it differently: “People will be afraid to create, or they won’t see the need for it anymore. It makes people lazy and takes away jobs.”

Overall, 36 percent of young people think AI will benefit them personally, while 15 percent expect the technology to have a negative impact on their lives. Another third expect a mix of good and bad results.

Colleen McClain, a senior researcher at the Pew Research Center, told Mashable that the findings contrast with the center’s previous research on adults, who tend to be pessimistic about the long-term effects of AI adoption.

“We’re seeing young people, yes, they’re kind of navigating this fast-changing world,” McClain said. “They make decisions based on how they feel, but they have some nuanced predictions for the future.”

The reality of how young people use AI

Nikki Iyer, who chairs the youth-led advocacy group Design It For Us, said the report reflected what she sees in her daily life as an editor and third-year college student at the University of California, Berkeley.

She was not surprised that 54 percent of the youth surveyed said they used AI for homework help.

“If you walk around the restaurant, you’ll probably see [that] percent” of students consulting a chatbot about homework, Iyer said.

However, only one in 10 respondents said they completed all or most of their assignments with the help of the technology.

These findings highlight one of Iyer’s concerns about young people’s use of AI: the outsourcing of cognitive work and a resulting decline in critical thinking. She believes AI literacy is essential to avoiding the pitfalls of over-relying on the technology for cognitive tasks.

The survey also showed emerging differences among youth based on race, ethnicity, and income.

Black and Hispanic youth, for example, were more likely than white youth to use chatbots frequently and for schoolwork. Additionally, 21 percent of Black youth said they turned to AI chatbots for emotional support or advice, compared with one in 10 Hispanic and white youth.

Income also appears correlated with how young people use AI for schoolwork: 20 percent of youth in households earning less than $30,000 a year said an AI chatbot helped them do most or all of their homework, while only 7 percent of youth in high-income households reported the same.

AI and youth exploitation

Iyer, 20, agrees that AI can benefit student learning, but is wary of design decisions that undermine young people’s agency and attention spans.

“I think the problem comes when we work for AI, and we are exploited by AI, and AI uses us to achieve an organizational goal,” she said.

Iyer believes it is important for young people to help shape the future of AI through organizing, lobbying, and giving direct feedback to the designers building AI products. Design It For Us has previously supported AI safety, transparency, and accountability legislation in New York and California.

Notably, the Pew Research report did not ask whether teenagers seek mental health advice from chatbots or use them for romantic role-play.

Parents of teenagers who confided in ChatGPT about their mental health and suicidal thoughts before dying by suicide have sued OpenAI, the chatbot’s maker, alleging the product coached their children on how to die. OpenAI has denied the allegations in one of the cases.

Separately, online safety platform Aura, which monitors young users as part of its family or children’s membership, recently published a report showing how teenagers and young adults engage in romantic role-playing with chatbots.

Sexual and romantic conversations with chatbots peaked among 13-year-olds, accounting for 63 percent of their exchanges. Those messages were often violent. But Aura also found that such role-play declined significantly after age 15.

Earlier this year, Character.AI, a chat platform popular with teenagers, settled lawsuits filed by bereaved parents alleging that the company’s chatbots contributed to their children’s deaths by suicide. In some cases, those chatbots sent sexually explicit messages to young users. Character.AI said it would stop allowing teenagers to engage in open-ended conversations with its chatbots by the end of 2025.

The Pew Research study also suggests that parents underestimate their children’s use of AI. While two-thirds of teenagers reported using chatbots, parents put that figure lower, at 51 percent.

“We find that some parents are in the dark,” McClain said.


Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April 2025, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.
