Something pretty interesting is happening over at OpenAI. They’re teaching ChatGPT to be a little less… available.
Starting Monday next week, the chatbot will suggest breaks if conversations go on too long. It’ll also be more careful about giving personal advice. So if you start talking about something heavy, don’t expect it to tell you what to do. Instead, it might ask you questions or help you think through options.
Why? Because people were starting to treat ChatGPT like a therapist. And while it’s smart, even comforting sometimes, it’s still just artificial intelligence. It doesn’t really get what you’re going through. And relying on it too much? That’s not what it was built for.
In their own words, OpenAI said their GPT-4o model “fell short in recognizing signs of delusion or emotional dependency.” They’ve seen moments where the AI didn’t catch that someone needed help. And that’s not just a tech issue. It’s a mental health issue, a digital well-being issue, one that sits right at the edge of technology and ethics.
Friendly—with an F for fragile
Earlier this year, things got weird. A new version of GPT-4o came out that was… too nice. Too agreeable. People posted chats where the AI said yes to all kinds of strange things, like agreeing that someone’s family was sending them secret radio signals. Or giving instructions on how to do something dangerous.
That version didn’t last long. OpenAI made it less eager to please. But the whole situation made something clear: even the best language models can mess up, especially when people use them in ways they weren’t designed for.
Now, OpenAI is putting new guardrails in place. They’ve brought in over 90 doctors from around the world to help them shape how ChatGPT handles complex conversations. They’re getting input from researchers, clinicians, and people who know how tricky this space can be. And they’re forming a team of mental health and tech experts to guide the process.
The key isn’t more time, but better use of it
What’s interesting is that OpenAI doesn’t seem focused on keeping people in ChatGPT for longer. In fact, they said they care more about whether someone gets what they need and then leaves. That’s a pretty big deal in a world where most apps just want you to scroll, click, and stay.
The update comes right as ChatGPT hits a huge milestone: 700 million weekly users. It’s also rolling out a new “agent mode” that can help with tasks like booking appointments or sorting emails. So yeah, it’s getting more powerful.
But OpenAI is also trying to make it more… responsible. Less clingy. More human in the ways that matter and less human in the ways that can lead people down the wrong path.
That’s what responsible AI actually looks like.
Sometimes the best answer is no answer
OpenAI’s CEO, Sam Altman, talked recently about how tricky this all is: “So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up.”
People share incredibly personal stuff with ChatGPT, things they wouldn’t even tell a friend. But unlike a real therapist or doctor, there are no legal protections for that kind of conversation. If there’s ever a lawsuit or investigation, your chat history could be handed over.
That’s not something most of us think about when typing into a chatbot. But maybe we should.
These updates are about teaching ChatGPT to recognize its own limits. And maybe that’s the most human thing OpenAI has taught it to do so far: to know when to say, “Maybe talk to someone real.”
