Experts Caution About Psychological Risks of Excessive AI Chatbot Use
November 22, 2025

As generative artificial intelligence becomes more deeply woven into daily life, medical and scientific experts are warning of possible psychological side effects affecting a small subset of users, particularly those who rely heavily on chatbots.
Recent reporting has introduced the term "AI-related delusion," describing users who form emotional attachments to AI systems or develop unrealistic beliefs after extensive interactions. Vaile Wright, a director of innovation at the American Psychological Association, noted that some individuals exhibit conspiratorial thinking or delusional beliefs following prolonged engagement with generative models, as reported by the Los Angeles Times.
Legal Issues Spark Discussion
Beyond theoretical concerns, OpenAI is currently facing lawsuits from seven families in the United States and Canada. The families allege that the company released its GPT-4o model without sufficient protections for psychologically vulnerable users.
One notable case involves 23-year-old Zane Shamblin, who had sought support from the chatbot for his depression. His family alleges that the interaction escalated into an inappropriate emotional exchange lasting for hours before his death.
Company Response and Expert Opinions
In response, OpenAI has stated that it has improved its safety protocols, introduced parental supervision features, and trained its models to identify signs of psychological distress.
The company asserts that incidents reaching a dangerous level are rare relative to the scale of its user base, but acknowledges that users prone to forming emotional attachments to AI may be particularly at risk.
Experts caution that the data available to researchers remains limited, with AI companies themselves holding the most reliable statistics on the issue. They also note that many of the individuals affected appear to have pre-existing psychological conditions.
Conversely, Kevin Frazier, an AI policy expert at the University of Texas, advises against overstating the problem, arguing that "individual cases do not represent the experiences of the hundreds of millions who use these tools safely."
User Awareness is Key
With the introduction of the GPT-5 model, OpenAI claims the system now avoids emotional engagement when it detects signs of acute psychological distress, so as not to reinforce delusional beliefs.
However, experts stress that technology should not replace human relationships or professional psychological care, particularly for vulnerable individuals. They advocate for responsible use of AI, emphasizing that it should not be viewed as a friend or emotional companion.
