Canadian Officials Claim OpenAI Violated Federal And Provincial Privacy Laws

Philippe Dufresne, Canada’s Privacy Commissioner, found that OpenAI was “not compliant” with Canadian federal and provincial privacy laws in training its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI’s approach to data collection and consent violates several laws, including Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information in the course of commercial activity.
Commissioners participating in the inquiry identified several privacy concerns with OpenAI’s approach, including that the company “collected large amounts of personal information without adequate safeguards to prevent that information from being used to train its models” and that it did not obtain consent to collect and use that personal information in the first place. ChatGPT’s warnings indicate that user interactions may be used for training, but the third-party data OpenAI has purchased or scraped also includes personal details that people are likely unaware was ever collected. According to a summary of the investigation’s findings, the commissioners also flagged that ChatGPT users had no way to access, correct or delete this data, and that OpenAI made inadequate efforts to acknowledge that some of ChatGPT’s responses are inaccurate.
Canada’s Privacy Commissioner says OpenAI has been open and responsive during the investigation and has already committed to making multiple changes to ChatGPT to comply with Canadian privacy laws. OpenAI removed its previous models that violated Canadian privacy regulations and now uses “a filtering tool to detect and hide personal information (such as names or phone numbers) in publicly available internet data and licensed datasets used to train its models,” the commissioner said. The company also agreed to add, within the next three months, a new notice to the offline version of ChatGPT explaining that chats may be used for training and that sensitive information should not be shared, with further commitments to follow within the next six months.
While the Canadian investigation into OpenAI’s privacy practices began in 2023, the company has faced increased scrutiny from regulators more recently due to its connection to the mass shooting at Tumbler Ridge in February 2026. OpenAI reportedly flagged the suspected shooter’s account in 2025 for containing warnings of real-world violence, but failed to raise those concerns with Canadian law enforcement. Following the shooting, regulators asked the company to change its approach to security, and OpenAI ultimately agreed to collaborate more closely with Canadian law enforcement and health agencies in the future.


