Experts Warn of Rare Mental Health Issues Linked to Excessive Use of AI Chatbots

Recent reports have identified a phenomenon termed "AI-associated delusions," in which some users develop emotional attachments to chatbots or distorted perceptions of reality stemming from prolonged interactions with them. Vaile Wright, director of health care innovation at the American Psychological Association, noted that some cases have involved conspiratorial thinking or hallucinations following extensive use of generative models, as reported by the Los Angeles Times.
Legal Challenges Spark Debate
Beyond theoretical discussions, OpenAI is facing lawsuits from seven families in the U.S. and Canada alleging that the GPT-4o model was released without adequate safeguards for vulnerable users.
One lawsuit involves the case of 23-year-old Zane Shamblin, who reportedly turned to the chatbot for support with his depression; according to his family, the interaction evolved into an inappropriate emotional exchange that lasted for hours before his death.
Corporate Measures and Expert Reservations
OpenAI has stated that it is enhancing its protective measures, adding parental controls and direct links to helplines, alongside training models to recognize signs of psychological distress.
The company asserts that cases reaching a "risk" level are exceedingly rare relative to global usage, but acknowledges that a subset of users with a strong propensity for forming emotional bonds with chatbots may be most affected.
Experts contend that the data available to researchers remains limited, and that only AI companies possess accurate figures regarding the extent of the issue. They also emphasize that most potentially affected individuals may already have pre-existing mental health conditions.
In contrast, Kevin Frazier, an AI policy expert at the University of Texas, cautions against exaggerating the phenomenon, stressing that "individual cases do not reflect the reality for hundreds of millions who use these tools safely."
User Awareness as a Key Element
With the introduction of the GPT-5 model, OpenAI says the system avoids emotionally charged responses when it detects signs of acute mental distress and is designed not to reinforce delusions.
However, experts stress that technology should not replace interpersonal relationships or specialized psychological support, particularly for the most vulnerable individuals. They maintain that interactions with AI should remain grounded in conscious, deliberate use, with these systems not treated as friends or emotional partners.
