Florida Attorney General launches probe into OpenAI over alleged role in FSU shooting and child safety concerns
Florida Attorney General James Uthmeier announced Thursday that his office is opening a formal investigation into OpenAI, the maker of ChatGPT. The probe centers on three main concerns: the chatbot’s alleged use by the suspect in last year’s Florida State University shooting, its impact on minors, and the potential for the technology to be weaponized by foreign governments.
Uthmeier told reporters that the FSU shooter allegedly typed questions to ChatGPT on the day of the attack, asking how the country would react to a shooting at the campus and when the student union would be busiest. He suggested those exchanges could become evidence in the suspect’s October trial. “ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” the attorney general said in a video posted to social media.
The investigation also targets broader safety issues. OpenAI faces multiple lawsuits alleging that its chatbot has encouraged suicidal thoughts in vulnerable users. Uthmeier warned that the platform’s widespread adoption (more than 900 million weekly users, according to the company) amplifies the risk to children. He called on the Florida legislature to act swiftly, saying, “Each week, more than 900 million people use ChatGPT to improve their daily lives… But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”
OpenAI responded that it is committed to user safety and will cooperate fully with the investigation. In a statement to TechCrunch, the company highlighted ongoing work to understand user intent and deliver appropriate, safe responses. It also pointed to the recently unveiled Child Safety Blueprint, a set of policy recommendations aimed at protecting minors from AI‑related harms.
The blueprint arrives amid mounting pressure on AI developers to curb the creation of child sexual abuse material (CSAM). A report from the Internet Watch Foundation noted more than 8,000 AI-generated CSAM reports in the first half of 2025, a 14% year-over-year rise. OpenAI’s recommendations include updating legislation to address AI-generated abuse, improving reporting mechanisms for law enforcement, and instituting stronger preventative safeguards.
Uthmeier also expressed concern that the Chinese Communist Party could exploit OpenAI’s technology against the United States. “As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” he warned. The attorney general’s office plans to examine whether OpenAI’s tools could be used for disinformation, espionage, or other hostile activities.
State legislators have begun drafting bills that would impose stricter age‑verification requirements for AI services and allocate resources for parental education on digital safety. The proposed measures echo calls from consumer‑advocacy groups and families affected by AI‑related incidents.
OpenAI’s cooperation with the Florida probe signals a shift toward greater transparency with regulators. The company has pledged to share internal safety data and to work with lawmakers on any legislative reforms that emerge from the investigation.
Legal experts say the outcome could set a precedent for how states hold AI firms accountable for indirect harms. If prosecutors determine that ChatGPT was indeed used to facilitate the FSU attack, the case could spark a wave of similar inquiries across the country.
For now, the investigation remains in its early stages. Both the attorney general’s office and OpenAI have indicated that they will keep the public informed as new facts emerge.