OpenAI is facing a criminal investigation in the United States over whether its chatbot ChatGPT played a role in a deadly mass shooting at Florida State University last year.
Florida Attorney General James Uthmeier said Tuesday that his office has been examining how the suspected gunman used the AI tool before the attack in Tallahassee.
"Our review has revealed that a criminal investigation is necessary," Uthmeier said. "ChatGPT offered significant advice to this shooter before he committed such heinous crimes."
OpenAI rejected the allegation, saying: "ChatGPT is not responsible for this terrible crime."
The case is believed to be the first time the company has faced a criminal probe over alleged misuse of its chatbot in connection with a violent crime.
An OpenAI spokesperson said the company has been cooperating with investigators and had "proactively shared" information about a ChatGPT account believed to be linked to the suspect.
The suspect, identified as 20-year-old student Phoenix Ikner, is currently in custody awaiting trial. According to OpenAI, the chatbot "did not encourage or promote illegal or harmful activity."
"In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet," the spokesperson added.
However, Uthmeier alleged that ChatGPT advised the suspect on weapons and ammunition, and even suggested when and where on campus a large number of people could be found.
"My prosecutors have looked at this, and they told me that if it was a person on the other end of that screen, we would be charging them with murder," he said.
He noted that under Florida law, anyone who "aids, abets or counsels" a crime can be treated as a principal offender, adding that authorities are now assessing potential "criminal culpability" for OpenAI.
OpenAI, co-founded by Sam Altman, rose to global prominence after launching ChatGPT in 2022; the chatbot has since become one of the most widely used AI tools.
The company is already facing legal challenges over another incident in British Columbia, where a separate shooting earlier this year raised concerns about the misuse of AI tools. OpenAI said it had identified and banned the suspect's account after that incident and plans to strengthen safety measures.
The parents of a girl injured in that attack have filed a lawsuit against the company.
Concerns over AI misuse have also drawn attention from regulators. Last year, a coalition of 42 state attorneys general wrote to major tech firms including Google, Meta and Anthropic, urging stronger safeguards.
The letter warned of increasing risks as more people use AI tools without fully understanding potential dangers, citing a growing number of serious incidents across the country linked to AI use.
From BBC