I've recently noticed this opinion seems unpopular, at least on Lemmy.
There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people's works (well, sometimes they do, unintentionally, but safeguards to prevent this are usually built in). The training data is generally much, much larger than the models themselves, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate "new" content based on learned probabilities.
My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai
I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on someone's data to reproduce their "likeness." I understand the hate for AI-generated shit (because it is shit). I really don't understand where all the hate for using public data to build a "statistical" model that "learns" general patterns is coming from.
I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don't think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with things like background removers, better autocomplete, etc.), which might eliminate some jobs, but that's really a problem with capitalism, and productivity increases are generally considered good.
The output of an LLM is analogous to re-saving an image as a low-res JPEG. Data is being processed and altered using statistics, but nothing "new" is being created, only lower-quality derivatives. That's why you can't train an LLM on the output of an LLM.
This is actually a decent argument, but there has to be a threshold. For instance, if I take the average of all the RGB values in an image and distribute a single pixel of that average color, is that breaking copyright or somehow immoral?
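To make that threshold point concrete, here's a minimal sketch (plain Python, with a made-up helper name) of the extreme end of the spectrum: collapsing a whole image to one average color, from which the original clearly can't be recovered.

```python
# Hypothetical illustration: reduce an entire image to a single average pixel.
# Whatever the legal threshold is, it's presumably somewhere between this
# lossy-beyond-recovery reduction and a verbatim copy.
def average_pixel(pixels):
    """pixels: list of (r, g, b) tuples; returns the mean color, rounded."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (round(r), round(g), round(b))

# A "red" pixel and a "blue" pixel average out to purple-ish:
print(average_pixel([(255, 0, 0), (0, 0, 255)]))  # -> (128, 0, 128)
```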
I recently looked into the speculated model sizes and speculated training-set sizes of GPT and Stable Diffusion, and it does appear that if you thought of them as compression algorithms, they'd only be achieving something like 1:7 compression. That ratio isn't outlandish for lossy compression.
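The ratio itself is just arithmetic. With purely illustrative placeholder numbers (not the actual, speculated figures for GPT or Stable Diffusion), it would be computed like this:

```python
# Back-of-the-envelope "model as compressor" ratio, using made-up sizes.
def compression_ratio(training_bytes, model_bytes):
    """How many bytes of training data per byte of model parameters."""
    return training_bytes / model_bytes

# e.g. a hypothetical 7 TB of training data vs. 1 TB of parameters:
print(compression_ratio(7e12, 1e12))  # -> 7.0, i.e. 1:7
```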
Compression and redistribution aren't the (stated) goal of these models. Hypothetically, these models are learning patterns and associations, things like artistic styles and how humans write text, and they appear to do a little more than just copy and paste. So, hypothetically, much of the model size could consist of learned styles and human preferences, rather than just a compressed database of the works it was trained on. I guess the real test is prompting a model to reproduce an item from its training set and evaluating how similar the output is.
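For text output, that memorization test can be sketched with nothing but the standard library. This assumes you already have the model's completion as a string; the function name and sample strings are made up for illustration, and `difflib.SequenceMatcher` is just one simple similarity measure among many.

```python
# Minimal sketch of the "did it memorize this?" check for text:
# compare a model's completion against the original training item.
from difflib import SequenceMatcher

def similarity(original: str, generated: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means a verbatim reproduction."""
    return SequenceMatcher(None, original, generated).ratio()

print(similarity("the quick brown fox", "the quick brown fox"))  # -> 1.0
```

A score near 1.0 on many held-in items would suggest the model is acting more like a compressed database; consistently low scores would support the "learned patterns, not copies" view.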