ChatGPT adds mental health guardrails after bot 'fell short in recognizing signs of delusion'


OpenAI said its popular tool will soon shy away from giving direct advice about personal challenges.

OpenAI wants ChatGPT to stop enabling its users’ unhealthy behaviors.

Starting Monday, the popular chatbot app will prompt users to take breaks from lengthy conversations. The tool will also soon shy away from giving direct advice about personal challenges, instead aiming to help users decide for themselves by asking questions or weighing pros and cons.

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

A person uses ChatGPT on their phone. Mariia Artemova / Alamy

The updates appear to be a continuation of OpenAI’s attempt to keep users, particularly those who view ChatGPT as a therapist or a friend, from becoming too reliant on the emotionally validating responses ChatGPT has gained a reputation for.

A helpful ChatGPT conversation, according to OpenAI, might include practicing for a tough conversation, offering a “tailored pep talk” or suggesting questions to ask an expert.

Earlier this year, the AI giant rolled back an update to GPT-4o that made the bot so overly agreeable that it stirred mockery and concern online. Users shared conversations in which GPT-4o, in one instance, praised them for believing their family was responsible for “radio signals coming in through the walls” and, in another instance, endorsed and gave instructions for terrorism.

These behaviors led OpenAI to announce in April that it revised its training techniques to “explicitly steer the model away from sycophancy” or flattery.

Now, OpenAI says it has engaged experts to help ChatGPT respond more appropriately in sensitive situations, such as when a user is showing signs of mental or emotional distress.

The company wrote in its blog post that it worked with more than 90 physicians across dozens of countries to craft custom rubrics for “evaluating complex, multi-turn conversations.” It’s also seeking feedback from researchers and clinicians who, according to the post, are helping to refine evaluation methods and stress-test safeguards for ChatGPT.

And the company is forming an advisory group made up of experts in mental health, youth development and human-computer interaction. More information will be released as the work progresses, OpenAI wrote.

In a recent interview with podcaster Theo Von, OpenAI CEO Sam Altman expressed some concern over people using ChatGPT as a therapist or life coach.

He said legal confidentiality protections between doctors and their patients or between lawyers and their clients don’t apply the same way to chatbots.

“So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago.”

The updates come during a buzzy time for ChatGPT: It just rolled out an agent mode, which can complete online tasks like making an appointment or summarizing an email inbox, and many online are now speculating about the highly anticipated release of GPT-5. Head of ChatGPT Nick Turley said Monday that the AI model is on track to reach 700 million weekly active users this week.

As OpenAI continues to jockey in the global race for AI dominance, the company noted that less time spent in ChatGPT could actually be a sign that its product did its job.

“Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for,” OpenAI wrote. “We also pay attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to.”

