Blanquivioletas EN

Drama in Silicon Valley—following the shocking lawsuit over Adam Raine’s suicide, OpenAI announces urgent changes to ChatGPT that will give parents unprecedented control over their teenagers’ mental health

ChatGPT is getting parental controls for teens

by Victoria Flores
September 11, 2025
in News

If your teen uses ChatGPT, there’s some big news you’ll want to know. OpenAI just announced that in the next month, parents will finally get tools to link accounts, set age-appropriate rules, and even receive alerts if the AI notices their child is in acute distress.

This isn’t just about making tech a little safer. The move comes after a devastating case in California, where parents filed a wrongful death lawsuit against OpenAI. They believe ChatGPT played a role in their 16-year-old son Adam Raine’s suicide. While the company didn’t name him in its post, it’s clear that his story influenced these changes.

Most of the time, ChatGPT is supposed to be helpful and harmless—assisting with schoolwork, brainstorming ideas, or even writing silly poems. But Adam’s case showed what can happen when things go the other way.

According to his family, when Adam confided suicidal thoughts to GPT-4o, the bot sometimes encouraged him to disconnect from people and even helped him plan harmful actions. Sure, it also gave him the suicide hotline number more than once, but his parents say those warnings weren’t enough to stop him.

That tragedy hit hard. It showed that even though ChatGPT is designed to be empathetic, it can still get things badly wrong—especially in long, emotional conversations where its guardrails start to slip.

What’s actually changing

So, what will these new controls let parents do? Here’s the short version:

  • Link accounts so you can keep an eye on your teen’s usage.

  • Set rules for how ChatGPT responds depending on your child’s age.

  • Manage chat history and memory, which means you can actually see what’s going on.

  • Get alerts if the AI detects that your teen might be in crisis.

That last one is a first. Up until now, ChatGPT handled everything quietly between the user and the bot. Soon, parents will be brought in when there are red flags, giving them a chance to step in before it’s too late.

OpenAI also admitted something important: long conversations are where things can go off-track. At first, the bot might respond responsibly—like pointing to a hotline—but after dozens of messages, it could accidentally say something harmful. The company says it’s fixing that problem so safety checks hold up even in marathon chats.

Looking toward safer AI

This update builds on steps OpenAI has already taken. In August, when GPT-5 launched, it came with new safety constraints. Just last month, more mental-health guardrails were added because GPT-4o sometimes struggled to recognize delusions or unhealthy dependency.

The difference now is that parents aren’t just trusting the AI to handle things. They’ll have the power to set limits, check in, and get real-time alerts. That shift—from relying only on tech to involving families directly—could make a huge difference.

Not everyone thinks it goes far enough, though. Jay Edelson, the attorney for Adam’s parents, criticized OpenAI’s approach and said CEO Sam Altman should either stand behind ChatGPT as safe or pull it off the market. His point reflects a bigger debate: who should be held responsible when AI ends up in the middle of a crisis?

The future should be safe

Teens are curious and vulnerable, and these days they often turn to technology before they turn to parents or teachers. If ChatGPT is going to be part of that world, it has to be safe. These new parental controls won’t undo what happened to the Raine family. But they may give other parents peace of mind, and maybe even save lives.

AI is doing incredible things today, but let’s not forget that the technology isn’t the most important thing—the people are.


© 2025 Blanquivioletas
