Ottawa’s AI regulation died on the order paper in 2025. Its replacement is a 30-day industry consultation that 160 civil society groups rejected as a “facade for manufacturing consent.”
The polling is not ambiguous. In an August 2025 Leger survey, 85% of Canadians said governments should regulate AI tools to ensure ethical and safe use. In a November 2025 North Poll survey, 60% said they would prefer the government to be skeptical and ensure Canadians are not harmed or deceived by the technology. When asked what AI Minister Evan Solomon should prioritize, 60% said legislation to ensure AI is used ethically and safely. Only 24% said the priority should be removing regulation so the industry can compete with the United States.1
The government heard those numbers and moved in the opposite direction.
Canada had an AI regulation bill. It was called the Artificial Intelligence and Data Act — AIDA — and it was introduced in June 2022 as part of Bill C-27, bundled together with privacy reform. The bill proposed a regulatory framework for high-impact AI systems, enforced by a new AI and Data Commissioner. It spent two and a half years in committee. It was controversial — critics said the bill was vague on what constituted “high-impact” AI, excluded government use entirely, and was drafted without meaningful public consultation. But it was something.2
In January 2025, when Justin Trudeau resigned and Parliament was prorogued, Bill C-27 died on the order paper. AIDA died with it. Canada’s first attempt at comprehensive AI regulation evaporated overnight — not because it was voted down, but because the government let Parliament collapse before the bill could pass.3
Canada is now one of the few G7 countries without a federal AI regulatory framework in force. The European Union has the AI Act. The United States has state-level laws and a White House executive order. Canada has PIPEDA — a privacy law written in 2000 — and a voluntary code of conduct on generative AI that carries no legal force.
Under Mark Carney, the Liberal government appointed Evan Solomon as Canada’s first Minister of Artificial Intelligence and Digital Innovation. Solomon has been explicit about his priorities. He has said he intends to depart from “over-indexing” on harm prevention. His stated approach to regulation is that it must be “light, tight, and right.” He has described the government’s guiding principle as “AI for all” and built his strategy around four pillars: scaling companies, encouraging adoption, promoting digital sovereignty, and building infrastructure.4
What is missing from that list is regulation, safety, or rights.
Eighty-five percent want regulation. The minister says the priority is adoption.
In September 2025, Solomon launched a 30-day “national sprint” to develop a refreshed AI strategy. He appointed a 27-member task force and gave them one month to report back with recommendations. Canadians had the same 30 days to participate in a public consultation through a 26-question online survey. Of those 26 questions, three dealt with safety and public trust. The remaining 23 focused on research, talent, adoption, commercialization, and scaling the AI industry.5
The task force was criticized for being weighted toward industry. University of Ottawa law professor Teresa Scassa, Canada Research Chair in information law and policy, said the makeup was “skewed towards industry voices and the adoption of AI technologies.” She added that instructing task force members to consult their own networks “sounds a lot like insider networking, which should frankly raise concerns.”6
More than 160 academics, lawyers, civil liberties groups, and human rights organizations signed an open letter rejecting the sprint entirely. The signatories included PEN Canada, the BC Civil Liberties Association, the Women’s Legal Education and Action Fund, the Canadian Centre for Policy Alternatives, and the International Civil Liberties Monitoring Group. They called the process “a facade for manufacturing consent for a harmful preordained agenda” and refused to participate.7
❝ This seems like upping the ante on moving fast and breaking things. And so we thought we deserve better, the Canadian public deserves better.
— Cynthia Khoo, tech lawyer and open letter signatory, January 2026

Instead, the signatories launched an independent “People’s Consultation on AI” — a parallel process designed to capture the concerns the government’s sprint did not ask about: environmental impacts, labour rights, mental health effects, the proliferation of non-consensual deepfakes, privacy risks, and the documented inaccuracies of generative AI output.
Tech lawyer Cynthia Khoo, one of the letter’s signatories, said the sprint approach was “upping the ante on moving fast and breaking things” and that Canadians “deserve better.”
❝ A robot telling a robot what to do for the policy, which is a few levels distinct from actually talking to Canadians.
— Alex Kohut, founder, North Poll Strategies, on the AI consultation process

Pollster Alex Kohut flagged another problem. The government’s 26-question survey was long, included required open-ended questions, and was likely completed primarily by industry stakeholders working during business hours. Some respondents may have used AI to fill it out — and with the government using AI to process the responses, Kohut said, the result could be “a robot telling a robot what to do for the policy, which is a few levels distinct from actually talking to Canadians.”8
The pattern is now familiar. The Liberal government spent $2.4 billion on AI infrastructure in the 2024 budget. It launched a Canadian AI Safety Institute. It created a voluntary code of conduct. It appointed Canada’s first AI minister. What it has not done — in four years since tabling AIDA — is pass a single binding law that governs how AI systems can be used in Canada, what rights Canadians have when AI makes decisions about them, or what obligations AI companies have to the public.
Eighty-five percent of Canadians want AI regulation. The government gave them a 30-day survey weighted toward industry, a task force that civil society groups called unrepresentative, and a minister who says the priority is adoption, not harm prevention. The bill that was supposed to regulate AI died because the government let Parliament collapse. No replacement has been tabled. Canada is running its AI policy on a voluntary code of conduct and a minister who says regulation must be “light.” The question is not whether Ottawa is listening to Canadians on AI. The polling answers that. The question is why it is choosing not to.