this post was submitted on 15 Jun 2023
0 points

Furry Technologists

1280 readers

Science, Technology, and pawbs

founded 1 year ago

Who'd have thunk it :)

top 2 comments
[–] southernwolf@pawb.social 0 points 1 year ago (1 children)

Thing is, this isn't really how AI training works, and a model can easily be trained on the outputs of other AI. That's actually what Stanford did to train their (comparably) small LLM, which was very competent despite its size. It was trained on the outputs of GPT (iirc) and held its own much better than other models in a similar category, which is also what opened the door to smaller, more specialized models being useful, rather than giant ones like GPT.
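The recipe being described (small model trained on a big model's outputs) boils down to collecting teacher responses as supervised fine-tuning pairs. Here's a minimal sketch of that data-collection step; `query_teacher` is a hypothetical stand-in for a call to the large model's API, and the prompt template is just one plausible instruction format:

```python
def query_teacher(instruction):
    # Placeholder: in practice this would call the large model's API
    # and return its generated answer.
    return f"(teacher answer to: {instruction})"

def build_distillation_dataset(instructions):
    """Turn teacher outputs into (prompt, completion) fine-tuning pairs
    that a small model can then be trained on."""
    dataset = []
    for inst in instructions:
        prompt = f"### Instruction:\n{inst}\n\n### Response:\n"
        completion = query_teacher(inst)
        dataset.append({"prompt": prompt, "completion": completion})
    return dataset

pairs = build_distillation_dataset(["Explain GANs in one sentence."])
```

The resulting pairs would then be fed to an ordinary fine-tuning loop for the small model; the point is that the "training data" never needs to be human-written.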

Now, image generation via diffusion might be more troublesome, but that's fairly easily mitigated through several means, including a human or automated discriminator, which basically turns the pipeline into a pseudo-GAN. There are also other approaches that aren't as affected (from what I know at least), such as GANs themselves. But given most image AIs are trained on datasets like LAION, AI images being uploaded online will have no effect on that, not for quite a while at least, if ever.
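The discriminator idea above amounts to a filter in front of the training set: score each candidate image for how likely it is to be human-made, and only keep images above a threshold. A minimal sketch, where `realness_score` is a hypothetical placeholder for a trained classifier:

```python
def realness_score(image):
    # Placeholder heuristic standing in for a trained discriminator.
    # Here we just use a metadata flag; a real system would score pixels.
    return 0.1 if image.get("ai_generated") else 0.9

def filter_training_set(candidates, threshold=0.5):
    """Keep only images the discriminator judges likely human-made,
    so AI-generated uploads never enter the training pool."""
    return [img for img in candidates if realness_score(img) >= threshold]

pool = [{"id": 1, "ai_generated": False},
        {"id": 2, "ai_generated": True}]
kept = filter_training_set(pool)  # only image 1 passes the gate
```

This is exactly the pseudo-GAN dynamic: a discriminator gating what the generator gets trained on, just run offline over the dataset instead of inside the training loop.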

[–] ParsnipWitch@feddit.de 0 points 1 year ago

This prediction is based on AI being trained exclusively on AI content for a long time. There is no example of that yet.