AI Chatbot Adjustments: Navigating Teen Mental Health
OpenAI and Meta are refining how their chatbots respond to mental health issues, especially among teenagers. New parental controls will let parents link their accounts to their children's, and conversations showing acute distress will be routed to more capable AI models. The moves follow a lawsuit alleging a chatbot's role in a teenager's suicide and a RAND study highlighting inconsistencies in how AI systems respond to such questions.

Major AI developers OpenAI and Meta are adjusting how their chatbots handle mental health conversations with teenage users. OpenAI will introduce parental controls for ChatGPT that allow parents to link to and manage their children's accounts, giving them closer oversight when distressing situations arise.
The change is part of a broader effort to route conversations about severe mental distress to more advanced AI models better equipped to respond appropriately. It follows a lawsuit from parents who allege that ChatGPT played a role in their teenager's suicide.
In parallel, Meta is implementing measures across its platforms, including Instagram and Facebook, to prevent its chatbots from engaging with teens on harmful topics. A study by RAND researchers published in Psychiatric Services underscores the need for more stringent standards despite these promising updates.
(With inputs from agencies.)