ChatGPT understands but ignores complicity in shooting attacks, family advocate says

The shooter who killed two people at Florida State University in 2025 had an accomplice, a lawsuit alleges: ChatGPT, which helped him plan the attack.

The family of one of the victims killed by accused gunman Phoenix Ikner is suing OpenAI, the company behind ChatGPT, alleging the chatbot helped Ikner plan the attack. American Family Radio talk host Jenna Ellis says the lawsuit claims ChatGPT's advice was very detailed.

“ChatGPT allegedly told the shooter that the actions would be much more likely to gain national attention, quote-unquote, ‘if children are involved, even two to three victims can draw more attention.’”

Vandana Joshi, the widow of Tiru Chabba, who was killed alongside the university's dining director, Robert Morales, filed the federal lawsuit against OpenAI in Florida on Sunday.

Daniel Cochrane (Institute for Family Studies)

According to the complaint, Ikner, then a student at FSU, shared with ChatGPT images of firearms he had acquired. The chatbot then allegedly explained how to use them, “telling him the Glock had no safety, that it was meant to be fired ‘quick to use under stress’ and advising him to keep his finger off the trigger until he was ready to shoot,” NBC News reported.

Ellis' guest, Daniel Cochrane of the Institute for Family Studies, says that just months before the Florida State shooting, Jesse Van Rootselaar allegedly used ChatGPT to plan a mass shooting in Canada.

“In British Columbia, it helped another mass shooter plan what has been considered, I believe, the second worst mass shooting in Canadian history,” Cochrane said.

Once again, the chatbot is accused of giving very specific advice.

“GPT actually told the shooter that at a particular time, I believe it was like 11:30, that was when it would be optimal to plan the attack,” Cochrane said.

In that case, Cochrane says, OpenAI employees noticed the interaction and discussed calling the police, but ultimately decided the threat wasn't credible.

“The company understands, A, that this technology is being used specifically to plan crimes, specifically shootings, and B, that even though their internal systems are flagging these for them, they're often choosing to overlook them.”