Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.
This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
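To make the "vector space" point concrete, here's a minimal sketch, assuming the off-the-shelf sentence-transformers library and its pretrained all-MiniLM-L6-v2 model (my choices for illustration; nothing in the argument depends on this particular tool). What gets kept is an array of floats, not the text:

```python
# Minimal sketch, assuming sentence-transformers is installed
# (pip install sentence-transformers). Text goes in; what comes
# out - and what would be stored - is just an array of floats.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
embedding = model.encode("The answer, my friend, is blowin' in the wind")

print(embedding.shape)  # (384,) - a fixed-size vector of floats
print(embedding[:5])    # e.g. [ 0.02 -0.11 ... ] - the original words are gone
```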
This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.
Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was held to be fair use in Authors Guild v. Google despite protests from authors and publishers. AI training is arguably even more transformative.
While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.
For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744
You know, those obsessed with pushing AI would do a lot better if they dropped the patronizing tone in every single one of their comments defending it.
It's always fun reading "but you just don't understand".
On the other hand, it's hard to have a serious discussion with people who insist that building an LLM or diffusion model amounts to copying pieces of material into an obfuscated database. And the typical reply when an explanation is attempted - "that isn't the point!" with no elaboration - strongly implies to me that some people just want to be pissy and don't want to hear how they may have been manipulated into taking a pro-corporate, hyper-capitalist position on something.
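Nobody is claiming a bigram counter is an LLM, but a deliberately toy sketch shows the shape of the distinction between accumulating aggregate statistics and storing retrievable records (everything below is illustrative, not how any production model works):

```python
# Toy illustration: "training" accumulates next-word counts across
# documents, so what survives is pooled statistics, not copies.
import random
from collections import Counter, defaultdict

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the lazy dog sleeps while the quick fox runs",
]

counts = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

# Pooled statistics - nothing here maps back to one source sentence,
# unlike a database row that stores the original text verbatim.
print(counts["the"])  # Counter({'quick': 2, 'lazy': 2})

# "Generation" walks those statistics and can emit word sequences
# that appear in neither source document.
word, out = "the", ["the"]
for _ in range(6):
    nxt = counts[word]
    if not nxt:
        break
    word = random.choices(list(nxt), weights=list(nxt.values()))[0]
    out.append(word)
print(" ".join(out))  # e.g. "the quick fox runs"
```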
I don't get your comment, is the pro-corporate position for AI or against it?
I have no personal interest in the matter, tbh. But I want people to actually understand what they're advocating for and what the downstream effects would inevitably be. Model training is not inherently infringing activity under current IP law. It just isn't. Neither the law, legislative or judicial, nor the actual engineering and operation of these current models supports a finding of infringement at all. Effectively, this means new legislation is needed to handle the issue. Most critics are effectively advocating for an entirely new IP right - a "right to learn from" - which further assetizes ideas and intangibles, shuffling us deeper into endstage capitalism, which most of those advocates are presumably against.
I'm pretty sure most people are just mad that this is basically "rules for thee but not for me": why should a company be free to pirate when I can't? Case in point: the Internet Archive losing its case against a publisher (Hachette v. Internet Archive). That's the crux of the issue.
I get that that's how it feels given how it's being reported, but due to the way this sort of ML works, what the Internet Archive does and what an arbitrary GPT does are completely different. The former is an explicit, straightforward copy relying on a fair use defense; the latter is the industrialized version of taking intensive notes in a notebook while reading a book. That the outputs of such models are totally devoid of IP protections actually makes a pretty big difference imo in their usefulness to the entities we're most concerned about, but it certainly doesn't address the economic dilemma of putting an entire sector of labor at risk in narrow areas.