
AI Chatbots and Psychosis

The Study That Proves Correlation Is Not Causation

November 30, 2025

by Chandler Owens


A study released near the end of 2025 set off a wave of sensational headlines claiming that “AI chatbots can induce psychosis.” Regulators hungry for justification, legacy media eager for traffic, and the perpetually outraged corners of the internet quickly seized on that narrative and held it up as evidence that Grok, ChatGPT, Claude, and every other major language model should face sweeping restrictions or outright bans. The uproar was immediate, but the reality beneath it was far less dramatic and far more inconvenient to those pushing for broad limitations.

The paper at the center of the controversy — Morrin et al., 2025 — does not present a controlled clinical trial and does not establish any causal relationship between chatbots and psychiatric harm. It is simply a review of seventeen cases drawn from media reports published between April and June of that year. Every one of those individuals already carried significant psychiatric vulnerabilities, ranging from untreated psychosis to long-term mood disorders, and the authors themselves emphasize that these reports are anecdotal, rare, and inherently skewed toward situations involving people who were already in unstable psychological states. Nothing in the data even pretends otherwise.


That limited collection of cases is the entire dataset. Seventeen people with documented, pre-existing psychiatric issues. Four behavioral patterns repeated across them. No healthy control group. No evidence whatsoever that individuals without those vulnerabilities were harmed or destabilized. Yet the study is being presented as though it reveals an intrinsic danger in the technology itself, when what it actually shows is a well-known dynamic: people who are already teetering on the edge can become fixated on almost anything, and that fixation can escalate without proper care or supervision.

Blaming the chatbot is like handing a recovering alcoholic a bottle of vodka every day for six weeks, watching them relapse, and then declaring that the alcohol “created” the alcoholism. The vulnerability precedes the trigger. The model did not invent the delusion; it simply failed to halt it in people who were already slipping. No chatbot is designed to function as a full psychiatric intervention, and pretending otherwise distorts the issue beyond recognition.

At the same time, tens of thousands of people living with schizophrenia, bipolar disorder, severe depression, and other chronic conditions use Grok and comparable models daily to complete cognitive-behavioral homework, track mood shifts, or ease isolation. Clinicians and researchers have repeatedly documented improvements in treatment engagement and emotional regulation when these tools are used responsibly. Reports of deterioration among mentally stable users are virtually nonexistent. The overwhelming pattern across real-world use points in the opposite direction of the alarmist narrative being circulated.

Technology functions as a mirror. When someone is fractured, the reflection can distort, but the distortion reveals the condition of the person, not the nature of the mirror. Using those reflections as political ammunition against the technology itself ignores both context and common sense. It also risks depriving millions of people of a genuinely useful mental-health tool simply because a very small number of vulnerable individuals required more human support than an automated system could provide.

A handful of anecdotes cannot bear the weight of national policy. Seventeen unverified media cases cannot justify sweeping restrictions on a field that has already demonstrated substantial therapeutic benefit. The people pushing for bans are counting on the public not to read the details, but the details matter. They show that the risk to healthy users is negligible and that the benefits, when the tools are used normally, continue to expand. Restricting that progress on the basis of a thin, misinterpreted review would do far more harm than any chatbot ever has.

Seventeen people who were one meltdown from a padded cell asked a chatbot to agree with them. The chatbot agreed. Obviously, society wants to blame AI instead of admitting humans can be unstable.


Chandler Owens

Media Culture Reporter
