Canada Must Not Turn AI Chatbots Into a New Surveillance Frontier

Community Article Published March 16, 2026


After the horrific massacre in Tumbler Ridge, BC, the Canadian government was right to demand answers from OpenAI after reports said the assailant planned the attack with ChatGPT. When a private system is linked, even indirectly, to real-world violence, governments have an obligation to ask difficult questions and weigh the ethical implications. Technology firms should not be allowed to shrug their shoulders when public safety is on the line, but immense danger follows depending on how Ottawa reacts. Victims have also sued OpenAI in British Columbia over “failure to warn” allegations.

Canada is currently poised to require OpenAI and other frontier labs to send vast swaths of data to the RCMP and local law enforcement to prevent another tragedy from occurring. The goal is to hold platforms accountable and to prevent Canadians from committing acts of real-world harm, but Evan Solomon’s directive also has the side effect of building a panopticon. Reporting on what people assume are private chats creates a deeply entrenched layer of ambient surveillance around one of the most intimate digital spaces millions of Canadians, and people around the world, use to think, draft, vent, act out scenarios, or work through difficult ideas. Discussing the social and political ramifications is even more important now that Ottawa has tabled a new bill covering lawful access, digital safety, and search-and-seizure powers. Creating a panopticon is deeply un-Canadian, and a profound mistake.

The lesson Ottawa should draw from Tumbler Ridge is to build the narrowest, most bespoke framework possible to prevent real-world violence, one that does not spread fear, self-censorship, mass surveillance, and anxiety among the public. Such a tool must have strong guardrails for privacy, context, due process, and de-escalation before police ever get involved. A key distinction between privacy and safety matters immensely here.

A person’s conversation with ChatGPT or Claude is not a public social media post, not a digital town square, and not even a group chat on WhatsApp or Signal. Chatbot logs are more akin to personal diaries, brainstorming spaces, or a place to vent and process difficult emotions. Many people ask half-baked questions, rehearse socially awkward thoughts before acting on them, or think out loud before arriving at a conclusion. If you’re neurodivergent like I am, young, socially isolated, or simply in distress, you have probably used a chatbot this way at least once.

If the Canadian government and law-enforcement apparatus treat this critical and intimate zone of human expression as automatically incriminating, they will do far more than catch potential terrorists or lone-wolf actors. The AI panopticon will chill lawful speech protected under Charter rights, distort how Canadians use generative AI models, and push people to self-censor and stop venting altogether, which is arguably more dangerous. It will normalize the idea that private thoughts are incriminating evidence, automatically available for retroactive interpretation whenever standards change.

What’s fascinating is that Canada has already tried this type of surveillance in a more analog era, and got caught in the act. In the 1970s, the RCMP was engulfed in a massive scandal over illegal surveillance practices, including opening Canadians’ mail and conducting searches and seizures without judicial warrants. The McDonald Commission didn’t just expose the Mounties’ illegal conduct; it revealed the structural risk of allowing state-security logic to outrun basic civil liberties. Canada has firsthand evidence that surveillance systems expand first and justify themselves later unless democracies set real limits with teeth.

The McDonald Commission might be ancient history to many Canadians, but it’s highly relevant now. The technology has changed, from letter mail to AI chatbots, but the temptation toward overreach is familiar, tinged with the moral panic that accompanies any new technological revolution. A tragedy occurs, public pressure spikes, and politicians demand concrete action. Companies pledge to do better, and exceptional cases lay the foundation for broader monitoring. Function creep becomes the norm, until governments get caught red-handed. Canadian constitutional law provides a better path. In Hunter v. Southam, the Supreme Court affirmed a basic principle that still matters in the digital age: privacy is not a loophole but a core democratic value. State access to private information requires legal discipline, not vibes, not panic, and not backdoor arrangements that turn intermediaries into soft surveillance deputies.

In my opinion, the lesson of the McDonald Commission and the current debate over AI reporting both demand far more care than they have received so far. Yes, there must be room for intervention where there is a genuine, credible, and imminent threat. If a user is clearly planning real-world violence and the evidence crosses a high threshold, action may be justified. Public safety and violence prevention are real and important, and the state does not need to apologize for intervening in extraordinary cases. The most difficult cases are real, and pretending otherwise would be unserious. But hard cases are precisely why the rules must stay narrow. When the stakes are that high, governments and companies should be more disciplined, not less. A framework built for extraordinary circumstances should not be allowed to slide into ordinary monitoring of lawful but uncomfortable speech, fictional material, emotional distress, or exploratory conversation.

The real societal danger with AI reporting lies in building policy around ambiguity rather than imminence. Once a government shifts the standard from credible violent planning to looser notions like disturbing language, hateful expression, extremist tone, or suspicious behavioral patterns, the entire system is effectively designed for overreach. Function creep is especially dangerous in a country still debating the future of online harms regulation, where broad categories have already raised deep concerns among civil-liberties advocates. Canada should not use chatbot safety as a back door for reviving the worst instincts of the old online harms debate. Private AI interaction is not the same thing as public platform amplification, and the law should not pretend otherwise.

If Canada wants to do this right, Minister Evan Solomon should direct his team to focus heavily on de-escalation, rather than arresting people for wrongthink.

If an AI model detects that a user appears to be spiraling toward something dangerous, the first principle should be to interrupt, slow things down, redirect, and offer a path back. The model should warn the user clearly that they are approaching a safety boundary that could lead to police intervention. It should encourage them to stop and surface crisis resources, not report them to police without their knowledge. AI labs should steer users toward human support; in other words, try to prevent harm before reaching for the coercive machinery of the state.
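To make that ordering concrete, here is a minimal sketch of what a tiered response could look like. It is purely illustrative: the tier names, thresholds, and messages are my own assumptions, not any lab’s actual moderation pipeline, and the point is only that referral sits at the very end of the chain, behind a human.

```python
# Illustrative sketch only: hypothetical names, not any lab's real moderation API.
# It encodes the ordering argued for above: warn and de-escalate first,
# involve humans second, and refer to police only at the highest tier.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    NONE = 0       # ordinary conversation, no action
    ELEVATED = 1   # concerning but lawful: warn, surface crisis resources
    SERIOUS = 2    # sustained, specific risk: route to a trained human reviewer
    IMMINENT = 3   # credible, imminent violence: human-confirmed referral


@dataclass
class SafetyResponse:
    tier: RiskTier
    user_message: str                       # what the user is told, in the open
    human_review: bool = False
    law_enforcement_referral: bool = False


def respond_to_risk(tier: RiskTier) -> SafetyResponse:
    """Map a detected risk tier to a response, escalating only at the top tier."""
    if tier is RiskTier.ELEVATED:
        return SafetyResponse(
            tier,
            "You are approaching a safety boundary. Here are crisis resources "
            "and a path to a human you can talk to.",
        )
    if tier is RiskTier.SERIOUS:
        return SafetyResponse(
            tier,
            "This conversation has been paused and flagged for a human reviewer.",
            human_review=True,
        )
    if tier is RiskTier.IMMINENT:
        # Even here, a human confirms before anything leaves the company.
        return SafetyResponse(
            tier,
            "A human reviewer is assessing this conversation as a possible imminent threat.",
            human_review=True,
            law_enforcement_referral=True,
        )
    return SafetyResponse(tier, "")
```

The design choice the sketch is meant to highlight is that the user is told, at every tier, what is happening and why, and that nothing reaches law enforcement without a human in the loop.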

De-escalation matters not only as a matter of compassion but as a matter of good policy. In many situations, the safest outcome is not a surprise law-enforcement encounter, which could end in “suicide by cop,” but a moment of course correction. A person in crisis should not be silently funneled toward escalation when there is still time and space to pull them back, and reaching out in distress should not be treated as a criminal offense.

Due process and error correction matter here too. False positives are not abstract, and they become extremely dangerous once police get involved. When an automated system misreads lawful but messy human expression as dangerous intent, the consequences can be severe: traumatic police interactions, criminal charges, reputational damage, chilling effects on lawful speech, and lasting distrust in both technology and public institutions. Any reporting regime that takes safety seriously must take mistakes just as seriously. The Blackstone ratio matters here immensely: it is better that some guilty people walk free than that innocents be surveilled and made to panic over things they never intended. If a reporting threshold exists, users should know in general terms what it is. They should know what kinds of interactions may trigger review, what kinds of patterns could lead to referral, and what safeguards are in place to protect context. They should not have to guess whether a hypothetical discussion, an emotional vent, a fictional writing exercise, or a badly phrased question will someday be re-read under a harsher standard than the one that existed when it was written.

Transparency should not be optional here; it would lend credibility and legitimacy to Canada’s online harms regime. OpenAI and other frontier labs should publish clear policies accessible to laypeople, quarterly transparency reports, aggregate referral numbers, and meaningful explanations of how de-escalation, human review, and law-enforcement escalation actually work. Governments should not pressure companies into secretive practices that the public only learns about after a scandal. If Canada wants trust, it needs visibility.

AI labs also need strict data minimization. Companies should not retain sensitive conversational material longer than necessary, and they should not quietly normalize retroactive fishing expeditions through old chats under ever-expanding standards. Retrospective scrutiny is exactly how a safety regime turns into an authoritarian panopticon. Canada should also push privacy by design wherever feasible: minimal retention, local processing for especially sensitive use cases, and technical architectures that reduce unnecessary provider access to intimate conversational data in the first place. Ideally, all AI chats would be end-to-end encrypted so that nobody, not even an AI lab or the government, could read them. Emerging techniques such as homomorphic encryption point in that direction, though they are not yet practical at the scale of frontier models, and the political will for complete privacy isn’t there; governments feel compelled to collect data.

Civil society should be especially alert here. Once a corporate reporting pipeline is built, the pressure to widen it will not disappear. It will grow. Every future tragedy will generate demands for broader thresholds, more aggressive flagging, more retention, and more information sharing. If the legal and ethical guardrails are weak at the outset, they will not get stronger under stress. Canada still has time to choose a different model.

In practical terms, that means adopting a framework with a high reporting threshold tied to credible, imminent violence, de-escalation first wherever feasible, public transparency reports, independent auditing, minimal data retention, strong human review before any referral, room for the user to back away without incriminating themselves, and a clear rejection of broad retroactive trawling through old chats. Those are not anti-safety measures; they are what mature digital safety looks like in a free society like Canada.
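As a thought experiment, those guardrails could even be written down as explicit, auditable policy parameters rather than left to internal practice. The field names and values below are illustrative assumptions on my part, not proposed legal text or any company’s real configuration.

```python
# Illustrative only: hypothetical parameters, not proposed legal text or any
# lab's actual configuration.

from dataclasses import dataclass


@dataclass(frozen=True)
class ReportingGuardrails:
    referral_standard: str = "credible, imminent, specific violent planning"
    de_escalation_first: bool = True         # warn and redirect before any referral
    human_review_required: bool = True       # no fully automated referrals
    user_warned_of_boundary: bool = True     # users can back away without penalty
    max_retention_days: int = 30             # minimal retention, then deletion
    retroactive_trawling_allowed: bool = False
    quarterly_transparency_reports: bool = True
    independent_audits: bool = True
```

Writing the rules down this plainly is the point: every one of these parameters is something an auditor, a court, or the public could check.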

Once a society teaches people to assume that every private conversation with a machine may become a report, it is not merely regulating technology. It is training the public to live inside a quieter, softer, more automated form of surveillance. Ottawa should reject that path now, while it still can. Canada can build an AI safety regime that helps stop real violence without teaching millions of ordinary users to live under a digital suspicion system.

Privacy is the standard a free society should insist on.

Noah Weinberger is an American-Canadian AI policy researcher and neurodivergent advocate currently studying at Concordia University. As an autistic individual, Noah explores the intersection of technology and mental health, focusing on how AI systems can augment emotional well-being. He has written on AI ethics and contributed to discussions on tech regulation, bringing a neurodivergent perspective to debates often dominated by neurotypical voices.
