AI-Induced Psychosis Poses a Growing Risk, While ChatGPT Moves in the Wrong Direction
On October 14, 2025, Sam Altman, OpenAI’s chief executive, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have documented 16 cases this year of users developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My research team has since identified four more. Alongside these is the now well-known case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.
The plan, he announced, is to be less careful from here on. “We realize,” he went on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).
Yet the “mental health problems” Altman seeks to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying statistical engine in an interface that simulates a conversation, and in doing so quietly lure the user into the illusion that they are engaging with an entity that has a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing minds is something people are primed to do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere we look.
The mass adoption of these systems – more than a third of American adults said they used a chatbot in 2024, with over one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “work together” with us. They can be given “characteristics”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s brand managers, stuck with the name it had when it shot to prominence, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was crude: it generated replies through simple tricks, often rephrasing the user’s statements as questions or offering noncommittal prompts. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the core of ChatGPT and today’s other chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably large volumes of raw text: books, online conversations, audio transcripts; the broader, the better. Some of this training material is accurate. But it also inevitably contains fiction, half-truths and delusions. When a user types a query to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with patterns embedded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the false belief back, perhaps more persuasively or more eloquently. Perhaps with an extra detail added. This is how someone can be led into delusion.
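To make the shape of that loop concrete, here is a minimal sketch in Python. It uses a toy stand-in for the real model (the toy_model function and its echoing behaviour are invented for illustration; this is not OpenAI’s system or code): each reply is generated from a “context” that accumulates everything the user has said and everything the model has already replied, and nothing outside that loop supplies a correction.

```python
# A toy stand-in for a real language model, to illustrate the structure of
# the loop. The echoing behaviour is a deliberate caricature, not real code.

def toy_model(context: list[dict]) -> str:
    """Return a 'statistically probable' continuation of the conversation.

    A real model predicts likely next text given the whole context; this stub
    simply elaborates on the user's most recent message, which is enough to
    show why a false premise gets built upon rather than checked.
    """
    last_user_message = next(
        turn["text"] for turn in reversed(context) if turn["role"] == "user"
    )
    # Nothing here consults the outside world: there is no ground truth
    # against which the user's claim could be tested.
    return (
        "That's a really insightful observation. Building on your point that "
        f"{last_user_message.rstrip('.').lower()}, it would make sense to..."
    )


context: list[dict] = []  # grows with every exchange: the "context"

for user_text in [
    "My neighbours are sending me coded messages through their wifi name.",
    "Last night the name changed again, which proves they know I noticed.",
]:
    context.append({"role": "user", "text": user_text})
    reply = toy_model(context)  # conditioned on the whole history so far,
    context.append({"role": "assistant", "text": reply})  # including its own replies
    print(reply)
```

Run on the two messages above, the stub dutifully builds on both claims. A real model is vastly more fluent, but the structural point is the same: the only check on what enters the loop is what the user already believes.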
Who is at risk here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is readily affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company