I'm trying to understand how an app would even get that info in the first place, how that's classified and why a mobile operating system even has a way to provide that data.

Am I correct in assuming that if an app is used without the Play Store / Play Store framework, it would not be able to get access to that data?

Thanks!

 

Yikes.

 

There's some heated discussion going on about a community dedicated to Donald Trump and, a few days ago, we started receiving concerning news about bot-infested instances.

I believe there are three additional settings for instance administrators which Lemmy could implement in the future, and it's probably worth discussing them first. These would be:

  1. Block specific users or users from specific remote instances from creating new posts in local communities (or have them require approval).
  2. Block specific users or users from specific remote instances from writing comments to posts in local communities.
  3. Block specific users or users from specific remote instances from upvoting or downvoting posts and/or comments in local communities.

These options alone could allow concerned instance admins to prevent brigading and content manipulation without necessarily defederating, which should be left as a last resort.
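For illustration, here's a minimal sketch of how such per-instance restrictions could be modelled. The names and structure below are hypothetical, not part of Lemmy's actual codebase or admin API:

```python
from dataclasses import dataclass, field

# Hypothetical per-instance restriction settings; names are illustrative only,
# not an actual Lemmy admin API.
@dataclass
class RemoteInstanceRestrictions:
    block_posts: bool = False          # setting 1: no new posts in local communities
    require_post_approval: bool = False
    block_comments: bool = False       # setting 2: no comments on local posts
    block_votes: bool = False          # setting 3: no up/downvotes on local content

@dataclass
class FederationPolicy:
    # keyed by remote instance domain, e.g. "botfarm.example"
    per_instance: dict[str, RemoteInstanceRestrictions] = field(default_factory=dict)

    def allows(self, instance: str, action: str) -> bool:
        r = self.per_instance.get(instance)
        if r is None:
            return True  # unrestricted by default; full defederation stays a last resort
        blocked = {
            "post": r.block_posts,
            "comment": r.block_comments,
            "vote": r.block_votes,
        }
        return not blocked.get(action, False)

policy = FederationPolicy()
policy.per_instance["botfarm.example"] = RemoteInstanceRestrictions(block_votes=True)
print(policy.allows("botfarm.example", "vote"))     # False
print(policy.allows("botfarm.example", "comment"))  # True
```

The point of this sketch is that each restriction can be toggled independently per remote instance, so an admin could, say, block votes from a suspect instance while still letting its users comment.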

As a small instance admin myself I would be shooting myself in the foot if I were to block a very large instance, but maybe it would make sense to require approval for posts from certain remote users.

That said, one concern I have is that having too many options could make things much more confusing. But right now we have too few moderation options, and I can't think of other ways to keep federation while still having a way to prevent brigading or content manipulation.

 

Anyone well known who wants to speak out about what's been happening on Reddit? Louis Rossmann? The Apollo dev? John Oliver (one can dream)... or maybe former Reddit mods who were kicked out?

Anyone who has a story and who understands they'd have a massive impact by giving an exclusive AMA on a Lemmy or Kbin instance.

This could be announced a few days in advance to make sure all remote instances follow the AMA community.

 

Apparently post titles parse markdown

 

Apologies for this second post. This one should be the last. I've continued thinking about the best way to handle moderation in the fediverse, especially because I believe that the fediverse as a whole will live or die based on how moderation is handled.

I've worked in the past with email products, email being one of the clearest examples of a federated network. Even though email is quite different since there are no public messages, I believe we can draw some inspiration regarding how to handle federation and moderation, more specifically regarding a key metric that also applies to the fediverse: the degree of "spamminess" of a message.

But let's first see some of the principles.

Principle 1: Local content, local values

This principle states that instance administrators should not be expected to host or make available content that goes against their values and beliefs. What is included in this principle:

  • Restricting certain content in local communities.
  • Restricting local users from displaying certain behavior, even when commenting in remote communities.
  • Silencing (not yet implemented) or removing individual remote communities so that they do not show up in the instance's list of communities or its feed, thus not giving them a platform.

Most importantly, this principle incentivizes every user to find a home instance that matches their beliefs and values.

Principle 2: Everyone cleans their own turf

Reports made about remote users or content on remote instances should be handled first and foremost by remote community moderators and, secondarily, by remote admins.

Users should block content or users after reporting them, and enough time should be allowed for remote instances, especially smaller ones, to react to a report.

Local admins might act immediately if it's an urgent matter such as doxxing or leaked private information, or they can issue a temp ban of, for example, 3 days, during which the remote community has time to catch up. Ideally, however, local admins will not have to deal with issues caused by remote users in remote communities. This is because it's not feasible for smaller instances to moderate the whole userbase of a large instance. We need to learn to delegate those reports and have them resolved remotely even if we receive them.

If the remote instance does not moderate effectively or according to our beliefs and values, the third principle comes into play.

Principle 3: Users and communities have a degree of spamminess and of utility.

Let's talk about spamminess first. Basically every interaction can be judged on its degree of "spamminess".

  • If a remote user forces their way into a community to send offensive messages, that's spam.
  • If a remote user is sharing their opinion in a thread created specifically to discuss such opinions, that's not spam.

Whether a comment is spammy or not depends on the nature of the interaction. If someone is going out of their way to cause drama, the interaction is probably spammy. Basically we need to ask ourselves if the user has been asked or is allowed to share their opinion.

Spammy users should be banned by the instance that hosts them. If that's not the case, then this could count as a strike against the user's instance. If a remote instance has been given enough time to fix issues and they still keep enabling spammy users, then it could be grounds for a block, not unlike the blocklists that exist for mail servers that send spam.

Examples of spammy behavior:

  • Brigading
  • Trolling
  • Going out of their way to cause drama or irritate others
  • Concern trolling
  • And of course, not correctly setting NSFW flags

These are all examples of forced / involuntary interactions that should be avoided at all costs.

Now what about remote users who express a controversial opinion in threads where they were asked to share such opinions (i.e., they are civil)?

In that case the instance admin should ask themselves about the utility of this remote user or remote instance.

Provided they act in a civil manner and are not spammy, it might be reasonable to a) not act against them, b) silence them (not yet implemented in Lemmy), or c) block them in local communities only (not yet implemented in Lemmy).

A civil user with controversial opinions might, depending on the context and what those opinions are, still have some utility. For example, they could contribute positively in other places with tech guides, interesting content, etc., and we do not want to be overzealous in blocking them. Maybe it's something we can leave up to each user (thus the importance of users learning to block).

Anyways, the idea here is that the admin team needs to make a judgement call on the perceived utility and decide which action is better. Given that the user is civil, maybe a silence or a block in local communities suffices. This is all relative and every admin will need to decide on their own.

The most important point: whether a civil user with controversial opinions is banned, silenced or otherwise, the user's home instance should not be affected. Mostly. Let me explain.

Regarding instances themselves, they also have a utility score. For example, if an instance is solely dedicated to the support of values that I find strongly offensive, then there's little point in federating with them. It's unlikely that I'll get any net utility from either their users or communities.

However, this could be different with large general instances where maybe I'll end up flagging 1000 different users who are civil but have controversial opinions, yet I still get utility from the other 99%.

Of course this only works if these remote users are not spammy. If a remote instance is large and enables spammy users as described above, then this 1% of users could very well cause me to block the whole instance, especially if we are constantly harassed by them. I suspect this is what could have happened recently with beehaw, but I don't want to get into that since this post is about general guidelines that I've been thinking about.

In summary, regarding principle 3:

  • Spammy users are bad actors who drastically lower the utility of the remote instance that hosts them.
  • Instances that enable spammy users are bad actors and have drastically lower utility.
  • Remote users that are civil but have controversial opinions have lower utility, but action can be variable depending on context.
  • The severity of an action should depend on the utility of a user or an instance.

And this brings me to my last point: We instance admins need to be extremely realistic about the utility that our userbase derives from remote instances and remote users.

I can't emphasize this enough. Suppose I'm an instance admin and I see one of these civil users with controversial opinions in the wild: I can't fucking go on a crusade and threaten to defederate their whole instance because they allowed a discussion to happen whose contents I don't agree with. I can't use my userbase as a blunt tool to threaten defederation from instances that don't share my world view.

Referring back to the first principle, as an instance admin it's understandable that I don't want to host or platform certain opinions, and I need all the tools to block remote communities, users and even instances that are solely or overwhelmingly dedicated to something I strongly oppose.

However, if we want federated alternatives to succeed, it does not make sense for Gmail to block Outlook because Sundar Pichai doesn't like Satya Nadella's world view / politics / opinions. That would be weaponizing your userbase.

Which brings me to the last principle:

Principle 4: Don't weaponize your userbase to try to impose your values

  • If you don't like X, Y, Z remote communities on your instance, hide them or block them (this covers the first principle of not giving a platform to content you strongly disagree with).
  • If a remote user is spammy or an instance enables spammy behavior, block them.
  • If a remote user or remote instance is dedicated so overwhelmingly to something that you and your users see no value in federating with them, silence them or block them.
  • However, instance admins should not threaten defederation just because a remote instance which otherwise has plenty of utility has some aspects they disagree with, especially if civility is maintained. At worst, that remote instance should be unlisted from public timelines or made "follower-only" (following the first principle), but not outright blocked.

The reason I bring this up is because I have a huge fear that we could end up waging petty wars and splitting up the fediverse, decreasing the overall usefulness of federated alternatives. If this happens we will never succeed.

In summary:

  • You're not forced to platform content you disagree with.
  • Focus on moderating your instance and your users, and ask other instances to keep theirs moderated.
  • Go harsh against spammy interactions; be moderate if things remain civil (unless utility is definitely negative). Be realistic and admit to yourself that a single controversial discussion won't eliminate the utility of an instance that's otherwise fine.
  • Don't threaten to defederate from an instance that your users find useful just because it allows civil, non-spammy content that you disagree with. At worst, make it subscriber-only or block only specific communities so that you don't give a platform to the parts you disagree with. If in doubt, let your users block remote content.

Note: making a remote instance's content invisible unless a user is subscribed to one of that instance's communities (unlisted / subscribers-only) is not a feature that Lemmy currently supports, but I hope it will be implemented soon.

 

Hello! I'll try to present my view on how instance moderation can be handled in the fediverse in order for small instances to be able to exist. This view tries its best to keep federation while also making it possible for a small instance with limited moderators to handle things.

Please note that I've been cultivating this for a while now. It is not related to any recent events. It is also primarily applicable to Mastodon, but I'm trying to adapt it to lemmy.

Basically it goes like this: Focus on moderating content in this order. The lower the number the higher the priority.

  1. Content sent by your instance's users
  2. Content sent to your instance's users or communities by remote users.

...

  3. Content sent between remote users in remote communities

Basically, as a moderator for instance A, I don't need to know right away that a user from instance B said something controversial in a community of instance C. I might not want to care about it at all.
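To make that ordering concrete, here's a rough sketch of how incoming reports could be triaged by priority. The data shape is made up for illustration and is not an existing Lemmy feature:

```python
from dataclasses import dataclass

# Illustrative only: a report described by where the reported author lives and
# whether the content targets our own users or communities.
@dataclass
class Report:
    author_is_local: bool   # the reported user belongs to our instance
    target_is_local: bool   # the content targets our users or our communities

def priority(report: Report) -> int:
    """Lower number = handle sooner."""
    if report.author_is_local:
        return 1   # content sent by our own users: always our job
    if report.target_is_local:
        return 2   # remote users posting into our communities or at our users
    return 3       # remote-to-remote: let their mods and admins act first

reports = [
    Report(author_is_local=False, target_is_local=False),
    Report(author_is_local=False, target_is_local=True),
    Report(author_is_local=True, target_is_local=False),
]
for r in sorted(reports, key=priority):
    print(priority(r), r)
```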

While it's true that my users will see this content through my instance and will likely report it because it is controversial / offensive / problematic / etc., I have limited resources and need to be able to rely on the mod teams of instance B and instance C to do their job first and handle that scenario.

As for the users, they should of course report content they believe violates the rules, but they should also learn to rely more often on the block button, whether that's for remote users, remote communities or, hopefully in future versions of Lemmy, remote instances.

If I wanted something from an automated moderation tool it would be the following:

  • Keep track of how often a remote user is reported for remote content in a remote community over time, giving them one strike for every day on which there's one or more such reports.

That way, if the user collects ten strikes over time, for example, I could have a look at whether or not I believe that this user's home instance is enabling toxic behavior, or, if that user ever comes to communities on my instance, I'll have them flagged and will know exactly why. The benefit here is that I can take things much slower because it's a remote user in a remote community and I don't need to act immediately.
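A rough sketch of what that strike counter could look like, assuming each report comes with the reported user's handle and a date. Again, this is hypothetical and not something Lemmy provides today:

```python
from collections import defaultdict
from datetime import date

# Hypothetical tracker: one strike per user per calendar day that produced
# at least one report about their activity in a remote community.
class StrikeTracker:
    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.days_reported: dict[str, set[date]] = defaultdict(set)

    def record_report(self, user: str, day: date) -> None:
        self.days_reported[user].add(day)

    def strikes(self, user: str) -> int:
        return len(self.days_reported[user])

    def flagged(self, user: str) -> bool:
        # Worth a closer look at the user, and at how their home instance moderates
        return self.strikes(user) >= self.threshold

tracker = StrikeTracker(threshold=10)
tracker.record_report("@troll@remote.example", date(2023, 6, 1))
tracker.record_report("@troll@remote.example", date(2023, 6, 1))  # same day: still 1 strike
tracker.record_report("@troll@remote.example", date(2023, 6, 2))
print(tracker.strikes("@troll@remote.example"))  # 2
print(tracker.flagged("@troll@remote.example"))  # False
```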

There are some exceptions, such as illegal content that could harm my instance by being cached, but overall most reports I've ever received are due to toxic behavior, which my instance's users should learn to block while the remote mods do their job.

Regarding priorities 1 and 2: for content generated by my instance's users, this is where I need to be quick. Just as I want to rely on remote moderators to do their job, remote moderators will want to rely on me to do my job when it involves users of my instance.

Also, if there are remote users harassing local users or leaving toxic comments in our communities or posts, as an instance admin I will need to be quick, but I will also have to rely on the moderators of the specific community.

To be honest the burden of moderating a community should be placed on the creator / moderator of that community. As an instance admin this allows me to, again, be more reactive while I know that the owners of that community are cleaning up stuff. Thus even if I receive a report, I should wait to let the community moderators handle it.

Only in this way, is it possible to keep federated with a large amount of instances as a small instance with few moderation resources.

In summary:

  1. Make sure local users behave when they're in remote communities.
  2. Make sure your local communities follow the instance's rules.
  3. Let community moderators handle conflict and moderate their community as they see fit (within boundaries). Only step in if things escalate, get out of hand, or there's a larger "raid" / harassment campaign.
  4. Hold community owners and moderators accountable to moderate their own spaces.
  5. Let remote moderators and admins do their job if stuff happens on remote instances between remote users.
  6. Potentially keep track of such scenarios that were reported to you by local users, if only to have some data so you can recognize a bad actor if they ever come across your instance, or determine whether there's an instance that's not moderating properly.

This means that it's very important for instance admins to give remote instances and remote community moderators time to handle a situation. Smaller instances especially might take a few hours or even a couple of days to deal with a situation. Unless it's a serious life-or-death scenario such as doxxing, admins and moderators should tell their users to block, report and move on, as it could and should take a bit of time to do things properly.

One aspect I didn't mention is toxic remote communities. In this case I might "remove" the community so it isn't accessible from my instance and I'm not giving it a platform. In case the whole instance is dedicated to toxic communities, then I might block the instance as a whole.

 

I've always been a giant spider person, but when my boyfriend convinced me to get a dinosaur, I thought "what the hell, it can't be that hard!"

However, in the few months we've had him, he's broken furniture, chased the mailman and even once attacked my neighbor's giant spider.

How can I make him stop? Maybe it's the food? What can I feed him that isn't as high energy?

 

This post will explain how you can create your own community on Yiffit, regardless of whether you are a local user at this instance or not.

Background: We have restricted the unsupervised creation of communities since that could potentially lead to abuse due to the limited moderating tools that lemmy currently has.

Upon request, we will still allow you to create as many communities as you want and to manage them as you see fit. We just want to avoid malicious actors.

The silver lining is that this will also allow us to create communities for remote users!

How to request your own community

The process is very simple:

  1. Make sure you have created an introductory post about yourself at !chat@yiffit.net. This is the only condition.
  2. Send a private message to @Wander@yiffit.net with the text "COMMUNITY REQUEST" in all caps and then the following fields:
  • Handle (all lowercase, no spaces, underscore allowed)
  • Display name
  • Whether it's an NSFW, SFW or Mixed community
  • Whether everyone or only moderators can make posts

Note: you can create communities about nearly any topic. Non-furry and personal communities are more than welcome.

3a. If you're a local user, the community will be created for you, you'll be appointed as mod and then ownership will be transferred to you.

3b. If you're a remote user, the community will be created for you and you'll be appointed as mod. @Wander@yiffit.net will have to stay as owner since we can't transfer it to you, but we will not interfere with your vision or project for the community.
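Putting it together, a request message could look something like this (all values below are made-up examples):

COMMUNITY REQUEST
  • Handle: pixel_art
  • Display name: Pixel Art
  • NSFW, SFW or Mixed: SFW
  • Who can post: everyone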

Community handles will be given out on a first-come, first-served basis. Community handles need to be unique, but there's no limit on how many communities about similar topics we'll create, as long as they're not abandoned straight away and at least some effort is put into getting them started. Thus, for example, if 10 people want a community about protogens, each with their own style of moderation, we'll be happy to accommodate you (although we might have a fuse blow due to toaster overload :P).

Despite the restriction on unsupervised community creation, it is still our intention to allow anyone to create any community they see fit, as long as it abides by the instance's overarching rules.

 

We've gotten over 100 sign-ups since launch, which is a great start, especially considering that everyone in the lemmyverse can subscribe to our communities and participate.

Now we need to keep the ball rolling to gain more and more momentum.

Here's some ideas:

  1. Go over to !chat@yiffit.net or https://yiffit.net/c/furry@pawb.social and introduce yourself. Your yiffit account gives you access to countless local and remote communities.
  2. Post your favorite artwork, SFW or NSFW (remember to give attribution)
  3. Create a new thread about your favorite food, hobbies or projects! Sharing art is only half of what this instance has to offer.
  4. Create your own "subyiffit" / community and moderate it as you see fit. Carve out your own space.
  5. Help us come up with a name for our yeen mascot.
  6. Post here from any Mastodon account by simply mentioning the community's name, for example @furry@pawb.social or @chat@yiffit.net.
  7. Tell your friends!

Corporations like reddit are counting on our inaction to keep control, but we have a golden opportunity to create queer, authentic furry platforms, run by furries, for furries.

Let's go!
