In a letter to the residents of Tumbler Ridge, Canada, OpenAI CEO Sam Altman said he's "deeply sorry" that his company did not alert law enforcement about the suspect in a recent mass shooting.
After police identified 18-year-old Jesse Van Rootselaar as a suspected shooter who allegedly killed eight people, the Wall Street Journal reported that OpenAI had flagged and banned Van Rootselaar's ChatGPT account in June 2025 for describing scenarios involving gun violence. The company's staff debated alerting police but ultimately decided against it, eventually reaching out to Canadian authorities after the shooting.
OpenAI has since said it is improving safety protocols, for example by putting more flexible criteria in place to determine when accounts get referred to authorities, and by establishing direct points of contact with Canadian law enforcement.
In Altman's letter, which was first published in the local newspaper Tumbler RidgeLines, the CEO said he had discussed the shooting with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby, and they'd all agreed "a public apology was important," but "time was also needed to respect the community as you grieved."
"I'm deeply sorry that we didn't alert law enforcement to the account that was banned in June," Altman said. "While I know words can never be enough, I believe an apology is important to acknowledge the harm and irreversible loss your community has suffered."
Altman also said that OpenAI's focus will "continue to be on working with all levels of government to help ensure nothing like this happens again."
In a post on X, Eby said Altman's apology is "important, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."
Canadian officials have said they're considering new regulations on artificial intelligence but have not made any final decisions.