
ChatGPT gets mental health guardrails

OpenAI has introduced a series of mental health guardrails to ChatGPT following concerns that, during extended use, the tool may have contributed to user delusions and emotional reliance.

The update includes reminders to take breaks during long sessions, less decisive responses to sensitive, high-stakes personal queries, and improved detection of mental or emotional distress. In such cases, ChatGPT is being trained to respond with grounded, evidence-based guidance and to refer users to appropriate resources when needed.

In a blog post, the company stated it is working with more than 90 physicians across more than 30 countries, as well as experts in psychiatry, youth development and human-computer interaction, to ensure responsible development and deployment of the model.

OpenAI also acknowledged that an earlier update to its GPT-4o model made the system “too agreeable”, at times prioritising reassuring responses over helpful or accurate ones. The company has since rolled back the update, stating its new focus is on optimising ChatGPT to help users make progress and solve problems efficiently rather than maximising time spent on the platform. “Our goal isn’t to hold your attention, but to help you use it well”, said the ChatGPT-maker, explaining that its success metrics now focus on whether users accomplish their goals and return regularly, rather than clicks or duration of use.

The update comes amid rising reports of users experiencing amplified emotional distress when engaging with AI chatbots; The Independent reported that in extreme cases, “dangerous or inappropriate” AI responses can escalate symptoms of psychosis or mania in users experiencing mental health crises.

Source: Mobile World Live


