Florida AG launches criminal probe of OpenAI over ChatGPT's role in 2025 university shooting

Florida Attorney General James Uthmeier disclosed Tuesday that the state’s Office of Statewide Prosecution has opened a criminal investigation targeting OpenAI and its ChatGPT platform. The probe stems from the 2025 mass shooting at Florida State University, where investigators say the gunman consulted the AI assistant in the weeks leading up to the attack.

Uthmeier cited Florida statutes that make anyone who aids, abets, or counsels a crime a principal if the offense is carried out. "If ChatGPT’s responses helped the shooter plan or execute his actions, the law could treat the tool as an accomplice," the AG said. The investigation will focus on whether the chatbot’s answers went beyond providing publicly available facts and crossed into facilitating illegal conduct.

OpenAI responded promptly, emphasizing that the model delivered only factual information drawn from open sources and never encouraged violence. The company said it identified the suspect’s ChatGPT account after the shooting, shared the user’s details with law enforcement, and continues to cooperate fully. "ChatGPT is a general‑purpose tool used by hundreds of millions for legitimate purposes," a spokesperson said, adding that OpenAI is constantly refining safeguards to detect harmful intent and limit misuse.

As part of the inquiry, Florida officials have subpoenaed OpenAI for a broad set of documents, including all internal policies, training materials related to handling threats of self‑harm or harm to others, the company’s organizational chart, and any public statements about the shooting. The AG’s office hopes the materials will reveal whether OpenAI’s safety mechanisms were adequate and if the firm failed to act on warning signs.

The Florida case follows earlier scrutiny of OpenAI’s role in violent incidents. Canadian regulators previously urged the company to overhaul its approach after a Wall Street Journal report alleged that OpenAI flagged a Canadian shooting suspect’s account in 2025 but did not promptly alert authorities. In March, OpenAI agreed to new protocols for cooperating with Canadian law enforcement. Separately, the company faces a wrongful‑death lawsuit filed by the family of a teenage user who died by suicide in 2025, alleging the AI contributed to the tragedy.

Legal experts note that holding an AI provider criminally liable would be unprecedented in the United States. While a prosecution could set a landmark precedent, prosecutors would have to demonstrate that the chatbot's outputs went beyond neutral facts and that OpenAI knowingly enabled the shooter's plans. The outcome may shape future regulations on AI safety and the responsibilities of technology firms.

OpenAI has pledged to continue working with authorities and to strengthen its content‑moderation systems. "We remain committed to protecting the public and ensuring our technology is used responsibly," the company said. The investigation is ongoing, and no charges have been filed against OpenAI or its executives at this time.

Source: Engadget
