Canada Takes on ChatGPT
Canada's federal privacy regulator has released the long-awaited results of its investigation into OpenAI's ChatGPT — and the findings are drawing attention from tech companies, legal experts, and everyday Canadians who use the wildly popular AI chatbot.
The Office of the Privacy Commissioner of Canada (OPC) launched its probe to determine whether ChatGPT complies with the Personal Information Protection and Electronic Documents Act (PIPEDA) — the federal law that governs how private-sector organizations collect, use, and disclose Canadians' personal information.
Why This Investigation Matters
ChatGPT, developed by San Francisco-based OpenAI, has become one of the most widely used AI tools in the world, with tens of millions of users in Canada and globally. But the technology raises serious questions about privacy: large language models are trained on enormous datasets scraped from the internet, which may include personal information people never consented to share with an AI system.
The OPC's investigation centred on whether OpenAI obtained meaningful consent from individuals whose data was used to train its models, whether the company adequately protects that information, and whether Canadians have any recourse to access or correct data held about them.
These aren't abstract concerns. Privacy advocates have long argued that the way AI companies vacuum up internet data to build their models runs roughshod over established data protection principles — principles that Canadian law is supposed to uphold.
Canada's Leverage in the AI Era
Canada has positioned itself as a serious player in the global AI governance conversation. The federal government has been working on the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, though that legislation has faced a lengthy path through Parliament. In the meantime, existing privacy laws like PIPEDA are the main tools regulators have to hold AI companies accountable.
The OPC's investigation into ChatGPT is part of a broader international push — regulators in the European Union, Italy, and other jurisdictions have also scrutinized OpenAI's practices. Canada's coordination with its global counterparts signals that Ottawa understands this is a borderless problem requiring cross-border solutions.
What Could Change
The investigation's findings could have meaningful consequences for how OpenAI — and AI companies broadly — operate in Canada. The OPC can recommend changes to data practices and, under PIPEDA, refer matters to Federal Court if companies refuse to comply.
For Canadian businesses and institutions that have adopted ChatGPT — schools, law firms, government agencies — the results also serve as a signal of where the regulatory line sits.
For everyday Canadians who use ChatGPT to draft emails, summarize documents, or answer questions, this investigation is a reminder that free tools aren't cost-free. Personal data is the currency, and how it's used matters.
The Bigger Picture
As AI becomes embedded in daily life, questions about consent, transparency, and accountability aren't going away. Canada's investigation into ChatGPT is one of the most significant tests yet of whether existing privacy law can keep pace with rapidly evolving technology — and whether regulators have the teeth to enforce it.
Source: CBC News — Privacy investigation into OpenAI's ChatGPT
