Canada's privacy commissioners say OpenAI breached federal and provincial data laws

Canada’s privacy commissioner, Philippe Dufresne, announced that OpenAI violated both federal and provincial privacy laws during the development of its artificial‑intelligence models. The conclusion follows a joint investigation with privacy regulators in Alberta, Quebec and British Columbia, which identified a pattern of data‑collection practices that ran afoul of the Personal Information Protection and Electronic Documents Act (PIPEDA) and comparable provincial statutes.

Commissioners said OpenAI gathered “vast amounts of personal information without adequate safeguards” and failed to obtain consent before using that data for model training. While ChatGPT displays a disclaimer that interactions may be used for training, the regulators pointed out that OpenAI also relied on third‑party datasets—scraped or purchased from the public internet—that contain personal details many individuals never knew were being harvested.

Another point of contention is the lack of user control. The commissioners noted that ChatGPT users cannot access, correct, or delete the personal data that may have been incorporated into the system's knowledge base. They also criticized OpenAI's "lackluster attempts" to acknowledge and correct inaccurate responses generated by the model.

OpenAI’s pledged reforms

OpenAI, which the commissioners described as “open and responsive,” has agreed to a slate of corrective actions. The company has already retired earlier model versions that the investigation deemed non‑compliant. It now employs a filtering tool designed to detect and mask personal identifiers—such as names and phone numbers—in publicly accessible internet data and licensed datasets used for training.

Within three months, OpenAI will add a new notice to the signed‑out version of ChatGPT warning users that their chats may be used for training and advising against sharing sensitive information. Within six months, the firm will simplify its data‑export tools and clarify how users can challenge the accuracy of the information ChatGPT provides. The company also pledged to confirm to the privacy commissioners that retired datasets are protected from future development use.

Additional safeguards include testing protections for the minor relatives of public figures, ensuring the model refuses requests to disclose their personal details. These steps aim to address the commissioners' concerns about inadvertent exposure of private data.

The privacy probe, opened in 2023, gained renewed urgency after the February 2026 mass shooting in Tumbler Ridge, British Columbia. OpenAI had flagged the alleged shooter’s account in 2025 for containing violent threats but did not forward the warning to Canadian law‑enforcement agencies. Regulators subsequently demanded stronger safety protocols, and OpenAI agreed to collaborate more closely with law‑enforcement and health agencies moving forward.

While the commissioners acknowledged OpenAI's cooperation, they emphasized that compliance with privacy legislation will remain a "continuous obligation." The regulators plan to monitor the company's implementation of the agreed-upon measures and will issue follow-up reports as needed.

Source: Engadget
