this post was submitted on 05 Oct 2024
628 points (95.8% liked)
Not The Onion
I'm glad about this, honestly.
If you want to use an AI model trained on vast amounts of publicly posted work, go for it, but be ready for the result to become a truly public work that you don't own at the end of it all.
I agree. I think AI-generated material effectively entering the public domain, combined with the reporting/marking laws now coming online, is a strong incentive for large corporate actors who don't like releasing anything from their own control to keep a lot of material human-made.
What I'd like to see in addition to this is a requirement that content-producing models all be open source as well. Note that I don't think we need weird new IP rights that amount to a "right to learn from" or the like.
I'm 100% in favor of requiring models to be open source. That's been my belief for a while now: if someone wants to build an AI model off the backs of other people's work, they shouldn't be allowed to restrict or charge access to that model, whether for the people whose work was used or for anyone else.