Science & Tech.

Canada Tells OpenAI to Boost Safety Measures or Face Government Action After Links to Mass Shooting

OTTAWA — Canadian ministers have delivered an ultimatum to OpenAI, warning the artificial intelligence company that it must rapidly improve its safety protocols or face government-imposed legislation, following revelations that the ChatGPT maker failed to report concerning online activity by a teenager who later killed eight people in British Columbia.

Justice Minister Sean Fraser told reporters on February 25, 2026, that the message delivered to OpenAI officials during a high-level meeting in Ottawa the previous evening was unambiguous: the government expects changes, and if they are not forthcoming quickly, Ottawa will act.

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government’s going to be making changes,” Fraser said.

The extraordinary warning came after OpenAI confirmed it had banned the account of Jesse Van Rootselaar, the 18-year-old responsible for the February 10 mass shooting in Tumbler Ridge, British Columbia, but did not alert law enforcement until after the killings. The company’s handling of the issue has sparked national outrage and renewed debate about the regulation of artificial intelligence platforms.


The Tumbler Ridge Tragedy

On February 10, 2026, Van Rootselaar killed eight people in the small northeastern British Columbia community of Tumbler Ridge, a remote mining town of approximately 2,400 residents. According to RCMP, she first killed her mother and half-brother at the family home before traveling to Tumbler Ridge Secondary School, where she shot five children and an educational assistant. She then died by suicide as police entered the building.

The attack, one of the worst mass shootings in Canadian history, sent shockwaves through the country and prompted an outpouring of grief. But new details emerging weeks later have raised troubling questions about whether the tragedy could have been prevented.

The Wall Street Journal reported on February 20 that OpenAI had banned Van Rootselaar’s account in June 2025 after its systems detected troubling posts, including descriptions of scenarios involving gun violence. The company confirmed it had flagged the account through automated tools and human investigations designed to identify “misuses of our models in furtherance of violent activities.”

However, OpenAI determined that the account’s activity did not meet its internal threshold for referring the case to law enforcement, which requires an “imminent and credible risk of serious physical harm to others.” The company only contacted the RCMP after the February 10 shooting.


Government Summons OpenAI

Artificial Intelligence Minister Evan Solomon, who first learned of the connection through media reports, moved quickly to demand answers. He contacted the U.S.-based company over the weekend of February 21-22 to arrange an urgent meeting with its senior safety team in Ottawa.

On the evening of February 24, a delegation of seven OpenAI officials, including head of policy Chan Park, met with a group of Canadian ministers at the Department of Innovation, Science and Technology. The meeting included Fraser, Solomon, Public Safety Minister Gary Anandasangaree, and Canadian Identity Minister Marc Miller.

Solomon described the encounter as deeply unsatisfactory. Following the meeting, he issued a statement saying federal officials had expressed their “disappointment” to the company about its decision not to warn law enforcement.

“We were really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement further, and we want to make sure if any company has that opportunity, they would escalate,” Solomon told reporters.

The discussions focused on how OpenAI identifies an “imminent and credible risk,” how cases move from automated detection to human review, and how referrals are handled, particularly when young people may be involved. Solomon noted that no substantial new safety measures were presented at the meeting.

“We are disappointed that by the time they came up here, they did not have something more concrete to offer,” he said.


OpenAI’s Response

Following the meeting, OpenAI acknowledged the gravity of the situation and promised to return with concrete proposals tailored to the Canadian context. In a statement issued February 25, the company called the Tumbler Ridge shootings “an unspeakable tragedy.”

“Over the past several months, we have taken steps to strengthen our safeguards and made changes to our law enforcement referral protocol for cases involving violent activities,” a company spokesperson said. “But the ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear. We’ve committed to follow up in the coming days with an update on additional steps we’re taking.”

OpenAI has defended its initial decision not to report Van Rootselaar’s activity, stating that while the account was banned for policy violations, the content did not indicate an imminent threat. The company’s threshold for reporting requires evidence of immediate danger, a standard it says was not met in this case.


Political Reaction and Calls for Action

Prime Minister Mark Carney, who visited Tumbler Ridge earlier in February and met with grieving families and first responders, expressed his determination to pursue all available avenues for prevention.

“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies must be done,” Carney told reporters in Ottawa on February 25. “We will fully explore it to the full lengths of the law.”

British Columbia Premier David Eby went further, demanding that OpenAI executives meet directly with the victims’ families. Speaking in Victoria on February 24, Eby called on the federal government to establish clear rules for when AI providers must contact police.

“The federal government needs a reporting threshold for all artificial intelligence companies that deliver services in Canada, where they must report to law enforcement, so there’s no judgment calls in a back room that Canadians don’t have a line of sight to that put our kids and families at risk,” Eby said.

He added that the province would hold a coroner’s inquest or public inquiry if the public does not receive answers through the justice system.


Legislative Context and Expert Views

The controversy comes as Ottawa prepares to reintroduce legislation addressing online harms, following the collapse of a previous attempt in 2024 amid criticism that it was too broad in scope. Canadian Identity Minister Marc Miller, whose department is leading the online harms file, indicated that AI chatbots’ interactions with young and vulnerable people are likely to be addressed by the forthcoming bill.

Artificial Intelligence Minister Solomon is also developing a broader AI strategy for the government, and the Standing Senate Committee on Social Affairs, Science and Technology is preparing to examine the governance and security of AI, including chatbots.

Taylor Owen, founding director of McGill University’s Centre for Media, Technology and Democracy and a member of the federal task force advising on AI strategy, warned ministers that the failure to report the shooter’s posts exposes a gaping hole in Canadian regulations.

“This tragedy has become another example of real-world harms caused by AI systems,” Owen wrote in a letter to Solomon and Miller. He argued that summoning OpenAI to explain its protocols was “the right instinct” but should not have been necessary.

“Had Canada established an online safety regulator that included chatbots in its scope, the government would already know how these companies flag dangerous content, what their escalation thresholds are, how they handle cross-border referrals, and whether their systems are adequate,” Owen wrote.

However, he cautioned against requiring AI companies to monitor and report private conversations to law enforcement, warning that such measures raise serious privacy concerns. Instead, he called for a broader regulatory framework addressing “upstream design decisions and safety architectures.”

Alan Mackworth, a professor emeritus with the University of British Columbia’s department of computer science who focuses on AI safety and ethics, noted that many professionals have a legal or ethical “duty to report” suspected harm to minors. “Similar obligations should be placed on social media and AI companies,” he said in a statement.


Background on the Shooter

Police have disclosed that Van Rootselaar, who was born male but identified as a woman and began transitioning six years ago, had a history of mental health problems. Authorities had previously removed guns from her home, though they were later returned.

The complexity of the case has led some crime experts to note that while greater scrutiny of AI platforms is necessary, police or other authorities may have missed their own opportunities to intervene.


What Comes Next

OpenAI has committed to returning to Ottawa in the coming days with “hard proposals” and “concrete action” to address Canadian concerns. Solomon said he expects the company to propose changes to its threshold for reporting alarming exchanges to police.

Meanwhile, the government is reviewing “a suite of measures” to protect Canadians, particularly children, with Solomon emphasizing that “all options are on the table” when asked whether that could include banning OpenAI from operating in Canada.

Justice Minister Fraser stressed that trust must be earned. “We need to actually see what changes are going to be forthcoming, both from the company’s point of view, but we also need to identify the best path forward,” he said.

As the political and regulatory drama unfolds, the families of the eight victims in Tumbler Ridge continue to mourn, their loss now inextricably linked to a wider debate about the responsibilities of artificial intelligence companies and the role of government in ensuring public safety in the digital age.


With inputs from:
CBC: AI minister concerns OpenAI Tumbler Ridge
Guardian: OpenAI considered alerting police
NYT: Canada probing OpenAI shooter knowledge
CBC: Solomon disappointed OpenAI meeting
POLITICO: Canada blames OpenAI failure



