this post was submitted on 16 Oct 2024
129 points (99.2% liked)
I think by default bots should not be allowed anywhere. But if that's a bridge too far, then their use should have to be regularly justified and explained to communities. Maybe it should even be a rule that their full code has to be released on a regular basis, so users can review it themselves and be sure nothing fishy is going on. I'm specifically thinking of the Media Bias Fact Checker Bot (I know, I harp on it too much). It's basically a spammer bot at this point, cluttering up our feeds even when it can't figure out the source, and providing bad and inaccurate information when it can. And mods refuse to answer for it.
Even large social media platforms struggle to deal with bots, and as AI advances, those bots will only become more convincing. It feels like a hopeless task to address. You could implement rules, but they would likely only catch the obvious bots that are meant to be helpful. The more sophisticated ones attempting to manipulate votes are much harder to detect, especially on a federated platform.
For sure, it's not an easy problem to address. But I'm not willing to give up on it just yet. Bad actors will always find a way to break the rules and fly under the radar, but we should still be making new rules and working to improve these platforms in good faith, with the assumption that most people want healthy communities that follow the rules.
I’m particularly concerned about the potential for automods to become a problem on Lemmy, especially if it gains popularity the way Reddit did. I believe a Discourse-style trust level system could be a better approach for Lemmy’s moderation, but instead of rewarding “positive contributions,” which often leads to karma farming, the system should primarily recognize user engagement based on time spent on the platform and reading content. Users would gradually earn privileges through consistent presence and familiarity with the community’s culture, rather than through their ability to game the system or churn out popular content. That would naturally distribute moderation responsibilities among seasoned users who are genuinely invested in the community, help maintain a healthier balance between user freedom and community standards, and reduce the reliance on bot-driven moderation and arbitrary rule enforcement that plagues so many Reddit communities.
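Roughly, I’m imagining something like the sketch below. The activity fields and thresholds are completely made up, just to show the shape of a time-and-reading-based trust level instead of a karma-based one:

```python
from dataclasses import dataclass

# Illustrative thresholds only -- real values would need community tuning.
TRUST_LEVELS = [
    # (level, min_days_active, min_hours_reading, min_posts_read)
    (0, 0,   0,   0),      # new user: can post, but rate-limited
    (1, 7,   2,   50),     # basic: can vote, post links
    (2, 30,  15,  500),    # member: can report, flag spam
    (3, 120, 60,  2500),   # regular: flags carry more weight
]

@dataclass
class UserActivity:
    days_active: int      # days with at least one session
    hours_reading: float  # cumulative time spent reading
    posts_read: int       # distinct posts opened

def trust_level(user: UserActivity) -> int:
    """Return the highest trust level whose thresholds the user meets."""
    level = 0
    for lvl, days, hours, posts in TRUST_LEVELS:
        if (user.days_active >= days
                and user.hours_reading >= hours
                and user.posts_read >= posts):
            level = lvl
    return level

# A long-time lurker qualifies for moderation-adjacent privileges
# without ever having farmed karma.
lurker = UserActivity(days_active=200, hours_reading=80, posts_read=4000)
print(trust_level(lurker))  # 3
```

The point being that a patient reader ends up with the same standing as a prolific poster, and there's nothing to farm.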
> Grant users privileges based on activity level
That’s a very cool concept. I’d definitely be willing to participate in a platform that has that kind of trust system baked in, as long as it respected my privacy and couldn’t broadcast how much time I spend on specific things etc. Instance owners would also potentially get access to some incredibly personal and lucrative user data, so protections would have to be strict. But I guess there are a lot of ways to get at positive user engagement in a non-invasive way. I think it could solve a lot of current and potential problems. I wish I was confident the majority of users would be into it, but I’m not so sure.
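For example, the client could reduce everything to a coarse daily signal before it ever leaves the device, so instance owners never see raw reading times. Purely a sketch, and the threshold is an arbitrary assumption:

```python
from datetime import date

# The client only reports whether a coarse daily threshold was met,
# never raw timestamps or per-post reading times.
DAILY_READ_THRESHOLD_MINUTES = 10  # assumed value, purely illustrative

def daily_engagement_signal(minutes_read_today: float) -> dict:
    """Reduce a day's activity to a single boolean before it leaves the device."""
    return {
        "date": date.today().isoformat(),
        "active": minutes_read_today >= DAILY_READ_THRESHOLD_MINUTES,
    }

print(daily_engagement_signal(23.5))  # {'date': '...', 'active': True}
```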