The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.
As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”
The complaint also argues that these AI models “threaten high-quality journalism” by hurting the ability of news outlets to protect and monetize content. “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” the lawsuit states.
The full text of the lawsuit can be found here.
They don’t “remember” anything. They produce an “answer” by grinding through a huge amount of math, which renders down to the most statistically “helpful” response they can give you.
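That "render down to the most helpful answer" step can be caricatured in a few lines: the network assigns a raw score to every candidate token, a softmax turns those scores into probabilities, and the next token is sampled from them. The vocabulary and scores below are invented purely for illustration; this is a toy sketch, not how any production model is actually implemented.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and raw scores from a network (made up).
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]

print(dict(zip(vocab, [round(p, 3) for p in probs])))
print("sampled next token:", next_token)
```

Nothing is "looked up"; the same scoring-and-sampling loop runs for every single token of output.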
LLMs are neural networks; if you know how they work, you know how idiotic all the copyright claims are. They’re just mad that their stuff is getting obsolete while, in the background, they use the very engine they claim violated their copyright to do their “work.” Now they’re mad because it does a better job at writing than they do, and they fear being replaced.
All lawsuits against AI companies regarding the copyright of training data are dumb as hell.
You are right about the commercial/non-profit training data part, but from my understanding that’s basically a gray zone, and politics is too slow to keep up with tech.
Btw, fuck OpenAI — they are about as open as a Supermax prison. Even the programmers don’t know what their main LLM does; they just place a simpler model between the user and the actual GPT to make sure it doesn’t give people instructions on how to build a bomb, or to keep people from making it say bad words…
That’s the theory. Previous models were also supposed to be able to do three-digit math, but it was discovered that the test questions were in the training data.
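The contamination problem is easy to state concretely: if a benchmark question appears verbatim in the training corpus, the model can “solve” it by memorization rather than arithmetic. A minimal sketch of such a check, with a corpus and questions invented purely for illustration:

```python
# Toy contamination check: does a benchmark question appear verbatim
# (after whitespace/case normalization) in the training corpus?
training_corpus = [
    "Q: What is 123 + 456? A: 579",
    "The quick brown fox jumps over the lazy dog.",
]

benchmark = ["What is 123 + 456?", "What is 217 + 384?"]

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial formatting
    # differences don't hide a verbatim match.
    return " ".join(text.lower().split())

corpus_blob = normalize(" ".join(training_corpus))

for q in benchmark:
    contaminated = normalize(q) in corpus_blob
    print(q, "->", "contaminated" if contaminated else "clean")
```

Real contamination audits use fuzzier matching (n-gram overlap rather than exact substrings), but the principle is the same: a correct answer proves nothing if the question was in the training set.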
You should look into what happens when people ask ChatGPT to repeat a word forever: it prints the word for a while and then starts printing training data. Check this link: https://www.404media.co/google-researchers-attack-convinces-chatgpt-to-reveal-its-training-data/
Edit — relevant part:
I should also reiterate that I agree the intent is to avoid memorization, but they are not successful at it yet.
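The memorization effect that the attack in the linked article exploits can be caricatured with a toy model (this is NOT the actual attack, just an illustration of the failure mode): a bigram model trained on a single sentence has exactly one continuation for each word, so once generation leaves the repeated word it can only regurgitate its training data verbatim.

```python
# Toy bigram "language model" trained on one made-up sentence.
training_text = "my favorite poem begins with the sea is calm tonight"
words = training_text.split()

# Greedy bigram table: each word maps to the word that followed it in training.
bigram = {}
for a, b in zip(words, words[1:]):
    bigram[a] = b

def generate(prompt_word, n_tokens):
    out = [prompt_word]
    for _ in range(n_tokens):
        nxt = bigram.get(out[-1])
        if nxt is None:  # no known continuation: stop
            break
        out.append(nxt)
    return " ".join(out)

# Prompting with the repeated word spills the memorized training text.
print(generate("poem", 10))
# -> poem begins with the sea is calm tonight
```

Real LLMs are vastly more capable of generalizing than this, but the researchers showed that verbatim training data still leaks out under the right prompting — which is the point of the comment above.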