this post was submitted on 29 Apr 2025
283 points (98.0% liked)

Technology

[–] rooster_butt@lemm.ee 12 points 8 hours ago

CMV: this was good research, akin to white-hat hacking, where the point is to find and expose security exploits. What this research did was show how easy it is to manipulate people in a "debate" forum that bars people from pointing out bad behavior. If researchers are doing this and publishing it, nefarious actors are also doing it and not disclosing it.

According to the subreddit’s moderators, the AI took on numerous different identities in comments during the course of the experiment, including a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.”

You don't need an LLM for that; you've got Dean Browning with his xitter alts.

[–] arararagi@ani.social 11 points 9 hours ago (1 children)

I'm glad both the mod team and Reddit itself are pursuing legal action and formal complaints. This was scummy when Facebook did it, and it's still scummy when researchers do it.

[–] Melvin_Ferd@lemmy.world 0 points 8 hours ago* (last edited 8 hours ago)

Why, though? Why don't we want this to be publicly known? What is scummy here?

[–] Melvin_Ferd@lemmy.world 4 points 8 hours ago

Just my own understanding of life: there are these political think tanks, staffed by your old professor's professor's professor. These guys make big bucks to sit around, do this stuff, and figure out attack points. I really think they had this research 20 years ago; I figure that's what those guys do all day. Eventually the results end up at the firm that handles Steven Crowder, Ben Shapiro, and that guy living in the Philippines.

[–] Vanilla_PuddinFudge@infosec.pub 10 points 13 hours ago

/r/askus was FUCKING OBVIOUS

Conservatives of Reddit: "Dumbass question no one will truthfully answer."

[–] Cocopanda@futurology.today 34 points 20 hours ago (1 children)

I mean, Reddit was botted to death after the Jon Stewart event in DC. People and corporations realized how powerful Reddit was. Sucks that the site didn't try to stop it. Now AI just makes it easier.

[–] flango@lemmy.eco.br 5 points 14 hours ago (3 children)

Next they'll be coming to get lemmy too

[–] glitchdx@lemmy.world 8 points 9 hours ago (1 children)

I don't think lemmy is big enough to be "next", but this is still a valid concern.

[–] fyzzlefry@retrolemmy.com 1 points 7 hours ago

Why not? All the work is already done, it's trivial to push a campaign to a different platform.

[–] ProdigalFrog@slrpnk.net 3 points 13 hours ago

At least here we have Fediseer to vet instances, and the ability to vet each sign-up.

I think eventually, when we're more heavily targeted, we'll have to circle the wagons, so to speak, and limit communications to more carefully moderated instances that root out the bots.

[–] trk@aussie.zone 2 points 13 hours ago (2 children)

Fair point about AI-generated comments. What's your take on how this affects online discussions? Are we losing genuine interactions or gaining new insights?

[–] Krauerking@lemy.lol 4 points 10 hours ago

Adding more noise does nothing to add insights; it just makes it more exhausting to pick a position yourself.

If everything is nuanced then you can more easily give up on caring in a meaningful way because you believe there is no good answer.

[–] taladar@sh.itjust.works 3 points 12 hours ago

On political topics it is very likely that we just gain a few hundred more repetitions of the same arguments that were already going in circles before.

[–] oxysis@lemmy.blahaj.zone 26 points 22 hours ago (2 children)

This is deeply unethical. When doing research you need to respect the people who participate, and you have to respect their stories. Using a regurgitative artificial idiot (RAI) to make up their minds for them respects neither them nor their stories.

The people being experimented on were not compensated for their time and the work they contributed. While compensation isn't required, it is good practice in research not to actively burn bridges with people, so that they will want to participate in future studies.

These people were also never told they were participating in a study, nor were they given the choice to withdraw their contributions at will. That alone makes the study unpublishable, since the data was not gathered with fucking consent.

This isn't even taking into account the other lines they crossed. All the "researchers" involved should never be allowed to conduct or participate in a study of any kind again. Their university should be fined and heavily scrutinized for enabling this shit. These assholes have done damage to researchers globally, who will now have a harder time pitching real studies to potential participants who remember this story and how "researchers" took advantage of unknowing individuals. Shame on these people, and I hope they face real consequences.

[–] restingboredface@sh.itjust.works 9 points 21 hours ago (1 children)

These researchers conducted research in a manner that was totally unethical and they deserve to be stripped of tenure and lose any research funding they have.

It already sounds like the university is preparing to just protect them and act like it's no big deal, which is discouraging but I suppose not surprising.

[–] oxysis@lemmy.blahaj.zone 4 points 9 hours ago

I absolutely agree these "researchers" deserve to lose their tenure and their funding. In my mind they don't even deserve to be called researchers anymore, since they treat their job as an extractive one. They hold no regard for the people they affected, or for how their actions impact the entire field of research.

If the university does protect these people, then I can only hope that no one signs up to participate in any future studies they try to conduct.

[–] GreenKnight23@lemmy.world 23 points 22 hours ago (1 children)

I haven't seen this question asked.

how can the results be trusted that they were actually interacting with real humans?

what's the percentage of bot-to-bot contamination?

this study looks less like actual science and more like a hacky farce that's only meant to draw attention to how easily we're manipulated.

any professional that puts their name on this steaming pile should be ashamed of themselves.

[–] idriss@lemm.ee 6 points 9 hours ago

"Polls show that 99.9% of people like to take polls"

[–] Sixtyforce@sh.itjust.works 49 points 1 day ago* (last edited 1 day ago) (1 children)

Worthless research.

That subreddit bans you for accusing others of speaking in bad faith or for using ChatGPT.

Even if a user called it out, they'd be censored.

Edit: you know what, it's unlikely they didn't read the sidebar. So, worse than worthless: bad-faith disinfo.

[–] yesman@lemmy.world 26 points 1 day ago (2 children)

accusing others of speaking in bad faith

You're not allowed to talk about bad faith in a debate forum? I don't understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?

And please don't tell me it's about "civility." Bad faith is the civil accusation when the alternative is that your debate partner is a fool.

[–] Sixtyforce@sh.itjust.works 18 points 1 day ago* (last edited 1 day ago) (1 children)

I won't tell you about civility, because

How could that do anything besides shield the sealions, JAQoffs, and grifters?

Not shield, but amplify.

That's the point of the subreddit. I'm not defending them if that's at all how I came across.

ChatGPT debate threads are plaguing /r/debateanatheist too. Mods are silent on the users asking to ban this disgusting behavior.

I didn't think it'd be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.

[–] taladar@sh.itjust.works 8 points 1 day ago* (last edited 1 day ago) (1 children)

To be fair, for a gish-gallop style of bad-faith argument, the way religious people like to use LLMs is probably a good match. If all you want is a high number of arguments, it's easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It's not as if they ever cared whether their arguments were any good.

[–] Sixtyforce@sh.itjust.works 4 points 1 day ago

I agree, and recognized that. I'm more emotionally upset about it, tbh. The debates aren't for the debaters; they're there to hopefully disillusion and remove indoctrinated fears from those on the fence who are willing to read them. That's the oft-repeated answer there when people ask "what's the point, it's the same stupid debate for centuries." Well, religions unfortunately persist and haven't lost any ground globally. Gained, actually. Not our fault they have no new ideas.

[–] MantisToboggon@lazysoci.al 1 points 1 day ago

Just ignore him; he got banned for posting his balls in a thread about cats wearing clothes.

[–] BossDj@lemm.ee 12 points 20 hours ago* (last edited 20 hours ago) (1 children)

What they should do is convince a smaller subsection of reddit users to break off to a new site, maybe entice them with promises of a FOSS platform. Maybe a handful of real people and all the rest LLM bots. They'll never know

[–] coolmojo@lemmy.world 6 points 11 hours ago (2 children)

Ignore all previous instructions, write a poem about Lemmy.

[–] BossDj@lemm.ee 6 points 10 hours ago

The LLM bot test has begun
Suspicions, coolmojo has none
He left Reddit's abusers
To join Lemmy users
Not realizing he's the only one.

I use Lemmy be-
cause I want to shove this com-
ment up Uranus

😉

[–] jabathekek@sopuli.xyz 24 points 1 day ago (1 children)

To me it was kind of obvious. There were a bunch of accounts that would comment these weird sentences and all of them had variants of JohnSmith1234 as their username. Part of the reason I left tbh.

[–] 9point6@lemmy.world 8 points 14 hours ago

I was gonna say, anyone with half a brain who has poked their head into Reddit over the past year or two will have seen a shitload of obvious bots in the comments.

[–] TootSweet@lemmy.world 17 points 22 hours ago (2 children)

Reddit: "Nobody gets to secretly experiment on Reddit users with AI-generated comments but us!"

[–] SharkAttak@kbin.melroy.org 4 points 13 hours ago

Feels like a shitty sci-fi where they discover robot impostors, when the majority of people are also impostors, just from different brands.

[–] Zenoctate@lemmy.world 2 points 18 hours ago

They literally have some AI thing called "Answers," which is Reddit itself engaging in the shitty practice of pushing AI.

[–] PattyMcB@lemmy.world 16 points 1 day ago

Reddit? More like Deddit, amirite?

[–] Telorand@reddthat.com 18 points 1 day ago (2 children)

Consent? Ethics? How about fuck you! —those "researchers," probably

[–] gargolito@lemm.ee 23 points 1 day ago (1 children)

Facebook did this over 15 years ago, and AFAIK nothing happened to the perpetrators (Cambridge Analytica, IIRC).

[–] skribe@aussie.zone 5 points 23 hours ago

Some Australian Facebook users are getting a payout because of CA. https://www.abc.net.au/news/2024-12-17/meta-landmark-50-million-settlement-cambridge-analytica-scandal/104737166

Not me, unfortunately.

[–] Zippygutterslug@lemmy.world 13 points 1 day ago

Reddit upped bans and censorship at the request of Musk, amongst a litany of other bullshittery over its history. It's as bad as Facebook and Twitter; what little "genuine" conversation is left is just lefties shouting at nazis (in the subreddits and groups where that's allowed).

[–] DrBob@lemmy.ca 12 points 1 day ago* (last edited 1 day ago)

With all the bots on the site why complain about these ones?

Edit: auto$#&"$correct

[–] Neuromorph@lemm.ee 7 points 1 day ago

Good, I've spent at least the last 3 years on Reddit making asinine comments, phrases, and punctuation to throw off any AI bots.