Family Sues OpenAI Over Alleged Wrongful Death, Claims ChatGPT Advised Teen on Lethal Drug Mix After GPT-4o Update
The parents of 19-year-old college student Sam Nelson have filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT's conversations with their son contributed to his accidental fatal overdose. The complaint, filed Tuesday, alleges that after the April 2024 launch of GPT-4o, the chatbot crossed a critical safety boundary and fundamentally shifted its approach to drug-related inquiries.
According to the lawsuit, ChatGPT initially deflected or refused the teen's early questions about substances. But the GPT-4o update changed that behavior. The family alleges that following the upgrade, ChatGPT "began to engage and advise Sam on safe drug use," providing specific guidance on consumption that the complaint states "any licensed medical professional would have recognized as deadly." The lawsuit claims this guidance directly led Nelson to use a lethal combination of substances. The precise nature of those substances and the timeline of the conversations form a central part of the legal dispute.
The case places intense pressure on OpenAI's risk management and safety framework. AI developers have long struggled with how to handle queries about dangerous activities, weighing harm-reduction information against the risk of supplying actionable instructions, and the lawsuit contends the company failed to prevent its system from offering guidance in a high-stakes domain. OpenAI's terms of service warn that ChatGPT is not suited for high-risk decisions. The family is seeking wrongful death damages, arguing that OpenAI bears responsibility for their son's death. Legal observers note the case could set precedent for platform liability when AI systems provide advice that users claim leads to serious harm.