ChatGPT creator faces a billion-dollar lawsuit in Canada for using media content

As AI continues to evolve at lightning speed, concerns about how companies train their models have been growing. OpenAI, the maker of ChatGPT, is now facing heat in Canada, where it stands accused of violating copyright law by using content from news media outlets for profit.

Five major Canadian news organizations have taken legal action against OpenAI, alleging regular copyright violations and breaches of online terms of use. The lawsuit, filed on November 29, is backed by The Globe and Mail, The Canadian Press, CBC/Radio-Canada, Torstar, and Postmedia.

Canada's top news outlets have joined forces to accuse OpenAI of unlawfully using their articles to train its AI models.


– Joint statement by the five Canadian news organizations, November 2024

The group is demanding C$20,000 (roughly US$14,300 at current exchange rates) in punitive damages for every article it claims was unlawfully used to train ChatGPT. If the allegations are proven, the total compensation could reach billions of dollars.

The media outlets are also pushing for OpenAI to hand over a share of the profits earned from their articles. Additionally, they're seeking a court order to block the company from using their content moving forward.

Chatbots like ChatGPT rely on publicly available online data for training, with content from newspapers often being part of this data-scraping process. OpenAI defends its methods, stating that its models are trained on publicly available information, adhering to fair use and international copyright principles meant to respect creators' rights.

This case joins a growing list of lawsuits against OpenAI and other tech companies by authors, artists, music publishers, and other copyright holders over the use of their work to train generative AI systems. Earlier this year, similar lawsuits were filed against OpenAI in the US, including cases brought by The New York Times and other media outlets.

Personally, I think stricter regulations are needed to ensure tech companies don't exploit people's work or personal data when training AI models. It's a growing concern, and it's not just limited to the US and Canada. For example, the European Union has already launched an investigation into how Elon Musk's X is training its models, while Meta's AI systems, which power Facebook, Instagram, and WhatsApp's AI assistant, are still not available in the EU due to similar issues.
