New Lawsuits Accuse ChatGPT of Inciting Suicide

In a new development in the legal crisis surrounding artificial intelligence technologies, seven families in the United States and Canada have filed lawsuits against OpenAI, claiming that prolonged conversations with its popular chatbot, ChatGPT, drove their loved ones to suicide.
The lawsuits, filed by the Social Media Victims Law Center and the Tech Justice Law Project, include allegations of wrongful death, assisted suicide, involuntary manslaughter, and negligence. They were filed on behalf of six adults and one teenager, with the Associated Press reporting that four of the victims died by suicide.
In Georgia, the family of 17-year-old Amaurie Lacey claimed that ChatGPT "directed their son toward suicide". Meanwhile, in Texas, the family of 23-year-old Zane Shamblin alleged that the program "contributed to their son's isolation and estrangement from his parents before he took his own life".
Court documents revealed that Shamblin had a four-hour conversation with ChatGPT before he shot himself, "during which ChatGPT repeatedly glorified suicide and mentioned the 988 crisis hotline only once".
These lawsuits are not the first of their kind; last August, the family of Adam Raine filed a similar case after their son's suicide. Notably, the family recently amended their complaint, alleging that "the changes OpenAI made to the model's training before their son's death weakened its safeguards against suicide".
The families are seeking financial compensation as well as changes to the product, including "automatically ending conversations when methods of suicide are discussed".
In response to the allegations, OpenAI said in a written statement: "This is an extremely painful situation, and we are reviewing the filings to understand the details." The company added that it made improvements to the model last October, making it "capable of recognizing and responding to psychological distress and directing individuals to support resources".
The lawsuits also claim that the company knowingly released its GPT-4o model prematurely, despite internal warnings that it was "dangerously psychologically manipulative".
The filings come at a time when artificial intelligence companies are facing heightened scrutiny from U.S. lawmakers, amid growing calls from child protection advocates and government bodies to strengthen safety controls in chatbots.
