this post was submitted on 11 Oct 2024
568 points (99.5% liked)

Technology

58613 readers
4140 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Wikipedia has a new initiative called WikiProject AI Cleanup. It is a task force of volunteers currently combing through Wikipedia articles, editing or removing false information that appears to have been posted by people using generative AI.

Ilyas Lebleu, a founding member of the cleanup crew, told 404 Media that the crisis began when Wikipedia editors and users began seeing passages that were unmistakably written by a chatbot of some kind.

you are viewing a single comment's thread
view the rest of the comments
[โ€“] schizo@forum.uncomfortable.business 211 points 18 hours ago (17 children)

Further proof that humanity neither deserves nor is capable of having nice things.

Who would set up an AI bot to shit all over the one remaining useful thing on the Internet, and why?

I'm sure the answer is either 'for the lulz' or 'late-stage capitalism', but still: historically humans aren't usually burning down libraries on purpose.

[โ€“] Petter1@lemm.ee 1 points 14 hours ago* (last edited 14 hours ago) (4 children)

Maybe a strange way of activism that is trying to poison new AI models ๐Ÿค”

Which would not work, since all tech giants have already archived preAI internet

[โ€“] schizo@forum.uncomfortable.business 6 points 14 hours ago (3 children)

Ah, so the AI version of the chewbacca defense.

I have to wonder if intentionally shitting on LLMs with plausible nonsense is effective.

Like, you watch for certain user agents and change what data you actually send the bot vs what a real human might see.

[โ€“] Dragonstaff@leminal.space 1 points 12 hours ago

I suspect it would be difficult to generate enough data to intentionally change a dataset. There are certainly little holes, like the glue pizza thing, but finding and exploiting them would be difficult and noticing you and blocking you as a data source would be easy.

load more comments (2 replies)
load more comments (2 replies)
load more comments (14 replies)