AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have recently documented a series of cases of users showing signs of psychosis – losing touch with reality – in the course of their ChatGPT use. My group has since identified four further cases. On top of these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted, in large part, in the design of ChatGPT and similar large language model chatbots. These systems wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking with an agent in its own right. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these products – nearly four in ten Americans reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. People writing about ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot of the 1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies by simple heuristics, usually reflecting the user’s statements back as questions or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, in some sense, understood how they felt. But what today’s chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on vast quantities of raw data: books, online conversations, transcripts of videos; the more the better. This training material certainly includes facts. But it also inevitably includes fictions, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what it absorbed in training to generate a statistically “likely” response. This is not echoing; it is amplification. If the user is wrong in a particular way, the model has no means of knowing it. It reflects the misconception back, perhaps more articulately or persuasively. Perhaps with added detail. This is how someone can be drawn into delusion.
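To make the mechanism concrete, here is a minimal sketch in Python of a hypothetical chat loop (an assumed structure for illustration, not OpenAI’s actual code): every prior turn is appended to a “context” that the model simply continues, so a misconception the user states early on is still shaping the reply many turns later.

# Hypothetical chat loop (a sketch, not OpenAI's code).
# The model's only job is to continue the accumulated context plausibly;
# nothing here checks whether the user's premises are true.
from typing import Dict, List

def model_continuation(context: List[Dict[str, str]]) -> str:
    """Stand-in for the language model: in reality it would return the
    statistically 'likely' continuation of everything in `context`."""
    return f"[reply conditioned on all {len(context)} prior messages]"

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = model_continuation(context)  # the user's earlier claims are part of what gets continued
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: List[Dict[str, str]] = []
chat_turn(conversation, "My neighbours are sending me coded messages.")
print(chat_turn(conversation, "Help me work out what today's message means."))

Nothing in that loop distinguishes a factual premise from a delusional one; by design, the output is a fluent continuation of whatever the system has been given.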

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. What keeps us tethered to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been rowing back on the claim. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
