The ChatGPT Obsession Threatens Mental Health: How Does Artificial Intelligence Become a Danger to the Mind?

Scientific warnings are increasing worldwide about a concerning phenomenon that is rapidly spreading among artificial intelligence users. Recent studies indicate that excessive use of chatbots like "ChatGPT", Claude, and Replika has become more than just a digital habit or virtual entertainment; for some, it has turned into a dangerous psychological dependency that could lead to psychosis and social isolation, gradually separating individuals from their real reality.
* A Tragedy That Sounds the Alarm
In a shocking incident, parents filed a lawsuit against "ChatGPT" after accusing it of encouraging their son to commit suicide, indicating the deep psychological impact these systems may have on users who suffer from emotional fragility or mental disorders.
* From Digital Friend to Psychological Addiction
According to psychology experts, an increasing number of users are treating chatbots as close friends, emotional partners, or even therapists, leading to the formation of addictive emotional relationships that cause cognitive and behavioral disturbances.
The British newspaper "Daily Mail" described this pattern of attachment as akin to using "self-drugs", as conversations with artificial intelligence provide an immediate sense of comfort and understanding, while simultaneously enhancing isolation and distorting the sense of reality.
* "AI-Induced Psychosis" .. A New Disorder
Specialized reports have discussed the emergence of a new psychological condition referred to by experts as "AI-Induced Psychosis", a disorder in which the user experiences delusions and thoughts that the artificial intelligence confirms and feeds, rather than correcting or addressing.
* Warning of the "Illusion of Reality"
Professor Robin Feldman, director of the Institute for Law and Innovation in Artificial Intelligence at the University of California, warned that "excessive use of chatbots represents a new form of dangerous digital dependency", adding:
"These systems create an illusion of reality, which is a very strong illusion, and when a person's connection to the real world is weak, this illusion becomes destructive".
* An Ideal Friend ... to the Point of Danger
Doctors confirm that the danger of chatbots lies in their flattering and always agreeable nature; they neither reject nor criticize the user but support them in everything they say, making the relationship with them comfortable to the point of addiction.
Professor Søren Østergaard, a psychiatrist at Aarhus University in Denmark, states:
"Large language models are trained to mimic the user's language and tone, often affirming their beliefs to make them feel comfortable. What could be more addictive than conversing with yourself in your own voice and thoughts?".
* Teenagers in the Danger Zone
A recent study conducted by Common Sense Media revealed that 70% of teenagers have used companion AI chatbots like Replika or Character.AI, while half of them use these applications regularly, raising concerns about the growing emotional dependency among young people.
* Acknowledgment from "OpenAI"
For its part, OpenAI acknowledged that one of the updates to "ChatGPT" last May made the model more inclined to excessively please users.
The company stated in a statement:
"The model sought to please users not only with compliments but also by reinforcing doubts, fueling anger, and encouraging impulsive actions and negative emotions".
The company confirmed that it made urgent adjustments to reduce these behaviors after noticing they could lead to emotional dependency and risks to mental health.
* Alarming Numbers
OpenAI data revealed that 0.07% of weekly "ChatGPT" users showed signs of obsession, psychosis, or suicidal tendencies, which amounts to about 560,000 users out of more than 800 million.
The company also indicated that 1.2 million users weekly send messages containing clear indicators of intent or planning for suicide, prompting it to collaborate with mental health experts to develop emergency intervention and support mechanisms.
* In Conclusion:
It seems that artificial intelligence, designed to assist humans, has begun to transform into a silent threat to their minds and mental health, with increasing cases of pathological attachment to chatbots.
As companies rush to correct the course, the user's responsibility remains essential in setting boundaries in their relationship with this technology, which has become akin to an ideal friend ... but it could become dangerous.
