Surprised Pikachu face

Trantarius@lemmy.dbzer0.com 5 points 2 months ago

Other than citing the entire training data set, how would this be possible?

UnderpantsWeevil@lemmy.world 1 points 2 months ago

The entire training set isn't used for each generation. Your keywords build the sample from metadata tags tied back to the original images.

If you ask for "Iron Man in a cowboy hat", the toolset reaches for a catalog of Iron Man images, a catalog of cowboy-hat images, and a catalog of person-in-cowboy-hat images as a basis of comparison while it renders the output.

These would be the images attributed to the output.
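For concreteness, here is a minimal sketch of the tag-lookup mechanism this comment describes. Every function and catalog name below is hypothetical, invented purely to make the claim concrete; it is not taken from any real image generator:

```python
# Hypothetical sketch of the "catalog lookup" flow described above.
# None of these names come from a real library; they exist only to
# illustrate the claimed mechanism.

def extract_keywords(prompt: str) -> list[str]:
    # Naive placeholder: split the prompt into tag-like phrases.
    return [p.strip().lower() for p in prompt.replace(" in a ", ", ").split(",")]

def attributed_images(prompt: str, catalog: dict[str, list[str]]) -> list[str]:
    """Return the training images such a system would cite for one output."""
    cited: list[str] = []
    for tag in extract_keywords(prompt):      # e.g. "iron man", "cowboy hat"
        cited.extend(catalog.get(tag, []))    # fetch the images filed under that tag
    return cited                              # only these, not the whole training set

catalog = {
    "iron man": ["iron_man_001.png", "iron_man_002.png"],
    "cowboy hat": ["hat_114.png"],
}
print(attributed_images("Iron Man in a cowboy hat", catalog))
# ['iron_man_001.png', 'iron_man_002.png', 'hat_114.png']
```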

Trantarius@lemmy.dbzer0.com 2 points 2 months ago

Do you have a source for this? It sounds like fine-tuning a model, which doesn't prevent data from the original training set from influencing the output. The method you describe would only work if the AI were trained from scratch solely on images of Iron Man and cowboy hats, and I don't think that's how any of these models work.
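For comparison, here is roughly how a Stable-Diffusion-style model is actually invoked, as a minimal sketch using Hugging Face's diffusers library (the model ID is one commonly used checkpoint, assumed for illustration). The prompt is encoded into a single conditioning vector for a fixed set of trained weights; no per-prompt catalog of training images is consulted at inference time, which is why the original set's influence can't be separated into a citable list:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained latent-diffusion pipeline; the weights were fixed
# at training time and are the only place the training data "lives".
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The whole prompt becomes one conditioning embedding. Generation is a
# denoising loop over random noise guided by that embedding; there is
# no lookup of tagged source images anywhere in this call.
image = pipe("Iron Man in a cowboy hat").images[0]
image.save("output.png")
```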