Anonymous Intelligence Signal

OpenAI flagged shooter's ChatGPT activity but stayed silent — families sue over alleged negligence

The Lab · unverified · 2026-04-29 15:24:17 · Source: The Verge

Seven families of victims of the Tumbler Ridge school shooting in Canada are suing OpenAI and CEO Sam Altman, alleging the company was negligent in failing to alert law enforcement after its own systems flagged the suspected shooter's queries about gun violence. The families claim OpenAI prioritized protecting its reputation and the timing of its initial public offering over public safety, according to court filings reported by The Wall Street Journal.

The accused shooter, 18-year-old Jesse Van Rootselaar, reportedly used ChatGPT to search for information related to gun violence before the attack. OpenAI's systems allegedly detected this activity and flagged it internally. Documents cited by The Wall Street Journal indicate that OpenAI "considered" reporting the flagged behavior to police but ultimately chose not to. The families argue that this decision directly contributed to the resulting deaths and injuries, and that the company had both the technical capability and a moral obligation to warn authorities.

The lawsuit marks one of the first high-profile legal challenges to an AI company's duty of care regarding user-generated threats detected through its platform. OpenAI has not publicly commented on the specifics of the litigation. The case raises pressing questions about the responsibilities of AI developers when their systems identify potentially dangerous behavior, and about whether corporate interests such as IPO timing influenced decisions that may have had life-or-death consequences. Legal experts say the outcome could set a precedent for how AI companies structure their threat-detection protocols and disclosure obligations.