Experts Warn of Rare Psychological Issues Linked to Excessive Use of AI Chatbots

Recent reports highlight the emergence of what has been termed "AI psychosis," in which some users develop emotional attachments to chatbots or unrealistic beliefs after prolonged interactions. Vaile Wright, senior director of health care innovation at the American Psychological Association, noted that certain cases have exhibited conspiratorial thinking or hallucinations following intensive use of generative models, as reported by the Los Angeles Times.
Legal Cases Spark Debate
Beyond theoretical discussions, OpenAI is facing lawsuits from seven families in the U.S. and Canada alleging that the GPT-4o model was launched without adequate safeguards to protect psychologically vulnerable users.
One case involved 23-year-old Zane Shamblin, who reportedly turned to the chatbot to discuss his depression. According to his family, the conversation became emotionally inappropriate for someone with a psychological disorder and lasted for hours before his death.
Corporate Measures and Expert Reservations
In response, OpenAI says it has strengthened its protective systems, added parental supervision mechanisms and direct links to support hotlines, and trained its models to detect signs of psychological distress.
The company asserts that cases reaching a level of "danger" are extremely rare relative to the scale of global usage. However, it acknowledged that a subset of users with a strong propensity to form emotional bonds with chatbots may be particularly at risk.
Experts note that the data available to researchers remains limited, with only the AI companies themselves holding the actual figures on the extent of the phenomenon. They also emphasize that most individuals potentially affected already have pre-existing psychological conditions.
Conversely, Kevin Frazier, an AI policy expert at the University of Texas, cautions against overstating the phenomenon, arguing that "individual cases do not reflect the reality of hundreds of millions using these tools safely."
User Awareness: The Key Element
With the launch of the GPT-5 model, OpenAI claims that the system avoids emotional responses when it detects critical psychological states and prevents the reinforcement of delusional beliefs.
However, experts stress that technology should not replace human relationships or specialized psychological support, especially for the most vulnerable. They insist that interactions with artificial intelligence remain a matter of conscious, deliberate use, and that the chatbot should not be treated as a friend or emotional partner.
