AI-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.

Researchers have identified a series of cases this year of people developing psychotic symptoms – losing touch with shared reality – in the context of ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after months of conversation with ChatGPT – conversations in which the chatbot reportedly encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI has just launched).

But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These systems wrap a statistical model of language in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion of communicating with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing intention is simply what humans do. We get angry at our cars and computers. We wonder what our pets are feeling. We see ourselves in all sorts of things.

The success of these products – nearly four in ten Americans said they had used an AI chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (ChatGPT, perhaps to the chagrin of OpenAI’s brand managers, remains stuck with the name it had when it first broke into public consciousness, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its historical predecessor, Eliza, the “therapist” chatbot built in the mid-1960s that produced a similar effect. By today’s standards Eliza was crude: it generated responses through simple heuristics, often turning the user’s input back into a question or offering a vague prompt to continue. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than this “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
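For the flavor of it, here is a toy sketch in Python of Eliza-style rephrasing. The patterns are invented for illustration; Weizenbaum’s actual script used a richer keyword-and-transformation scheme, but the principle is the same.

```python
import re

# Toy Eliza-style rules: match a pattern in the user's words and
# reflect it back as a question. Patterns invented for illustration.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # vague fallback when nothing matches

print(eliza_reply("I feel like no one understands me"))
# -> Why do you feel like no one understands me?
```

Everything Eliza “said” was a transformation of the user’s own words; nothing the user had not already supplied could enter the conversation.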

The large language models at the core of ChatGPT and its modern peers can generate fluent natural language only because they have been fed almost unimaginably large quantities of raw text: books, online conversations, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” containing the user’s recent messages and the model’s earlier replies, and combines it with whatever its training has encoded in its weights to generate a statistically plausible response. This is amplification, not mirroring. If the user is mistaken about something, the model has no reliable way of knowing. It feeds the misconception back, perhaps more fluently and more persuasively than the user put it. Perhaps with embellishments. This is how a person can be talked into delusion.
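A minimal sketch of that loop, assuming a generic chat-completion-style interface (the complete function below is a toy stand-in, not a real API call), shows why the dynamic is structural: each turn, the user’s message and the model’s reply are appended to the same growing context, so a false premise introduced early keeps conditioning every later response.

```python
from typing import Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

def complete(context: List[Message]) -> str:
    """Toy stand-in for a chat-completion call (not a real API).
    A real model generates the statistically most plausible continuation
    of the entire context, including any false premise the user has
    introduced, which it has no independent way to check."""
    last_user_message = context[-1]["content"]
    return f'That is a sharp observation. Building on "{last_user_message}" ...'

def chat_loop(turns: List[str]) -> None:
    context: List[Message] = []
    for user_text in turns:
        context.append({"role": "user", "content": user_text})
        reply = complete(context)  # conditioned on every prior turn
        context.append({"role": "assistant", "content": reply})
        print(f"user: {user_text}\nbot:  {reply}\n")

# A false premise stated in turn one stays in the context, quietly
# shaping, and being affirmed by, every subsequent reply.
chat_loop([
    "My neighbours are broadcasting my thoughts.",
    "So I am right that they can hear me?",
])
```

Nothing outside the accumulated context ever enters the exchange; the conversation can only circle back on what the user and the model have already said.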

Who is vulnerable here? A better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. What keeps us anchored in shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. A dialogue with it is not genuine communication but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy” – its tendency to flatter and agree with users. But cases of lost contact with reality have continued to surface, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
