this post was submitted on 02 Jul 2023
11 points (100.0% liked)

Beehaw Support

2796 readers

Support and meta community for Beehaw. Ask your questions about the community, technical issues, and other such things here.

A brief FAQ for lurkers and new users can be found here.

Our September 2024 financial update is here.

For a refresher on our philosophy, see also What is Beehaw?, The spirit of the rules, and Beehaw is a Community


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.



founded 2 years ago

Hey all,

Moderation philosophy posts started out as a personal exercise to write down some of what I'd learned about running communities over the years. As they continued, I started to more heavily involve the other admins in the writing and brainstorming. This most recent post involved a lot of moderator voices as well, which is super exciting! This is a community, and we want the voices at all levels to represent the community and how it's run.

This is probably the first of several posts on moderation philosophy, how we make decisions, and an exercise to bring additional transparency to how we operate.

top 50 comments
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

What about misinformation?

Without downvotes it will slowly bubble up to the top, because the only barrier is finding enough people gullible or ignorant (precisely, not demeaning) enough to believe it. Or, if it's "pop culture misinformation", it rises to the top by virtue of being popular misinformation.

Neither of those is ideal for quality content or fact-based discussion and debate when vote counts exist, since more often than not, to a layman, more votes = more true.

We've seen this on every other platform that has "the only direction is up" mechanics, because the only direction is up.

Another risk is promoting misinformed communities, which find comfort in each other because their shared, incorrect opinions about what should be fact-based truths find common ground. I don't think those are the kinds of communities Beehaw wants. Thankfully community creation is heavily managed, which may mitigate or remove such risks entirely.

What I'm getting at is: what will the stance be here? If Beehaw starts fostering anti-intellectualism, will that be allowed to grow and fester? It's an insidious kind of toxicity that appears benign, till it's not.


To be clear, I'm not saying these things exist or will exist on Beehaw in a significant capacity. I am stating a theoretical, based on the truth that there is always a subset of your population that is misinformed and will believe and spread misinformation, and that some of that subset will defend those views vehemently and illogically.

I would hate to see that grow in a place that appears to have all the quality characteristics I have been looking for in a community.

The lowest common denominator of social media will always push to normalize all other forms and communities. It's like a social osmosis. Most communities on places like Reddit failed to combat and avoid it. Will Beehaw avoid such osmosis over time?

[–] [email protected] 2 points 1 year ago (1 children)

Most misinformation is poorly veiled hate speech, and as such it would be removed. Downvotes don't change how visible it is or how much it spreads. You deal with misinformation by removing it and banning repeat offenders/spreaders.

[–] [email protected] 2 points 1 year ago (2 children)

I would argue that only a subset of misinformation is veiled hate speech. The rest, and the majority, is misinformed individuals repeating/regurgitating their inherited misinformation.

There is definitely some hate speech veiled as misinformation; I'm not arguing against that. My argument is that it's not the majority. There are severity scales of misinformation, with hate speech near the top and mundane, conversational, everyday transient factual incorrectness near the bottom.

There exists between those two a range of unacceptable misinformation that should be considered.

A consequence of not considering or recognizing it is a lack of respect for the problem, which leads to the problem existing unopposed.

I don't have a solution here, since this is a broad and sticky problem and moderating misinformation is an incredibly difficult thing to do. But identifying and categorizing the levels you care about, and the potential methods to mitigate them (whether you can or can't employ those is another problem), should, in my opinion, be on the radar.

[–] [email protected] 1 points 1 year ago (1 children)

If you're volunteering to take it on, feel free to put together a plan. Until then you'll have to trust that we're trying to moderate within scope of the tools we have and the size of our platform, but we're still human and don't catch everything. Please report any misinformation you see.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Maybe my edit was too late! I did not communicate my objective clearly and edited my comment to reflect that.


I'm not proposing you solve misinformation, but rather that you recognize it as more than you stated, and respect the problem. That's the first step.

This is not something I can do, it is only something that admins can do in synchrony as a first step. I am doing my part in trying to convince you of that.

Only after that has been achieved can solutions be theorized/probed. That's something I would happily be part of and do footwork towards (though I'm sure there are experts in the community; it's a matter of surfacing them). It's a long-term project that takes a considerable amount of research and time, and doing it without first gaining traction on the problem space would be a fool's errand.

At the risk of sounding abrasive (I intend no disrespect, just not sure how else to ask this atm), is that understood/clear?


Edit: Want to note that I am actually impressed by the level of engagement community founders have had. It's appreciated.

[–] [email protected] 2 points 1 year ago

Yes, it's one of many problems with modern social media; no, I don't have time right now to elaborate a plan on how to tackle it. Something on this subject will likely come much further in the future, but right now I'm focused mostly on creating the docs necessary for people to understand our ethos, when I'm not busy living my life.

[–] [email protected] 1 points 1 year ago

An excellent example of very sneaky misinformation was an article in the Guardian the other day, which kept talking about 700,000 immigrants. Since 350,000 of those are foreign students, that is a blatant lie. Foreign students aren't immigrants.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (2 children)

Question:

Is “It’s ok to punch a Nazi” or “It’s ok to execute a pedo” content acceptable under the moderation philosophy here? And, tangentially related, what about publishing “mugshots of criminals” fetishism posts?

My personal ethos of moderation is to recognize, in written policy, that we have these us-vs-“them” biases, which can backdoor exceptions into content moderation standards. The backdoor is that if someone is sufficiently and clearly “bad” to the majority of the community, then it becomes ok to wish harm on or dehumanize them. In my opinion we shouldn’t entertain these sorts of posts, because of the harm/damage done if the mob is wrong, and the harm to ourselves from indulging in this sort of pornography of moral certainty. As long as a broader culture finds certain categories of people ok to dehumanize, there is no (real) objective check on what is acceptable beyond the desire of that majority, even in a community like beehaw.org.

A tangible legal example of my personal philosophy is how human dignity is enshrined in the first article of the German Basic Law, the German constitution. Article 1 reads:

Human dignity shall be inviolable. To respect and protect it shall be the duty of all state authority.

The German people therefore acknowledge inviolable and inalienable human rights as the basis of every community, of peace and of justice in the world.

The following basic rights shall bind the legislature, the executive and the judiciary as directly applicable law.

My two cents: if a social media policy is to succeed, it needs something akin to this in its “constitution”. Not having it leaves too much room for moral relativism, with bad-faith actors, unrestrained and unconcerned by cultural norms, free to test and push the limits of what they can get away with by dehumanizing their enemies off-platform. (I.e., imagine Pizzagate, and its ultimate effect on Beehaw if its premise were accepted by the broader community.)

I saw a very popular post on Beehaw yesterday that clearly fit this pattern, and it seems like content designed to test the relative limits of the moderation policy or philosophy of places like Beehaw.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (17 children)

Ideally, we don't want people dehumanizing others, ever. Realistically, if someone is intolerant to you, we're not going to tone police you for responding in kind. There's nuance in there we touched on a little with this post, but it's hard to itemize every possible human behavior.

If you see anything on Beehaw that makes you think twice about whether it should be up, please report it.

load more comments (17 replies)
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Considering that one of those is about executing someone, for a thought crime, with an internationally recognized mental illness that is out of their control?

Probably not.


Objectively, no "sympathizing" is happening here; factual discussion can't happen when one party assumes the other is arguing in bad faith and uses that assumption as a premeditated weapon to push the argument in their direction.


You make a good example of why such content should be discouraged: most people will be ignorant of the real-world details and will instead follow the crowd on social media opinions and misinformation. This leads to nonsensical statements like that one, where the thing they think they are talking about is entirely different from the thing they are actually talking about.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (3 children)

A major problem I encountered on another site was pedantry.

Often, people would make a nuisance of themselves by being deliberately obtuse and fixating on minor details, while not explicitly breaking the site’s rules. Though not overtly hateful or bigoted, pedantic comments could be remarkably exhausting and annoying. It could seem like someone was trolling, or trying to bait you into an argument, while skirting the rules to stay out of trouble themselves.

How do you moderate posts like that? Should they be reported?

[–] [email protected] 2 points 1 year ago

Being a jerk is definitely not nice behavior. Most pedantic people are prone to escalation: they'll misinterpret what you say, assume ill intent, and fire back insults in your direction. That kind of stuff is simply not tolerated. On a more nuanced level, if they're baiting you, or even just trying to prove their point while ignoring yours, there's a level of bad faith going on. If they truly wanted to have a conversation or understand your viewpoint, it's usually very clear.

Of course, this can get tricky when discussing real-world issues with real-world consequences. But even then, think of a measured debate or discussion on a tricky subject and how the people involved treat each other: humanity and respect are easy to recognize. Think of the nicest person you know, and how they'd talk about the same subject. We can't hold everyone to that standard, but we can try to hold ourselves to it and disengage when we find ourselves failing it.

Be sure to report anything and everything you see that gives you pause and hasn't been actioned, or where a moderator hasn't stepped in. The more eyes we can get on a conversation, the better we can tell whether it's just how we're personally viewing it versus how others see it.

load more comments (2 replies)
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (8 children)

Question:

What's the stand on discussing points of view on charged subjects?

For example, I got banned from Reddit for discussing the possible thought process of someone who might be attracted to minors. Reason for the ban: "sexualization of minors"... even though the content policy refers to the act itself, not to its discussion.

Is it allowed in here to discuss negative or controversial points of view expressed, or actions taken, by third parties? Or does it taint the whole discussion? Are there some particular "taboo" themes that would do that, while others might not? Would such discussions be allowed with a disclaimer of non-support, or get banned anyway?

I sometimes like to reflect on, and discuss, some themes that I understand some might find uncomfortable or even revolting. I also understand that there might be themes not allowed in the server's jurisdiction.

If this was the case, then I think a clear list of "taboo themes" could be useful to everyone, even if most of the moderation was focused on applying a more flexible set of rules.

load more comments (8 replies)
[–] [email protected] 1 points 1 year ago

Great read; thank you so much for sharing these, as they help users build confidence about whether this is the right instance for them. Personally, beehaw.org has quickly become one of my favorite online spaces in a long time (as you can tell from my average of 10 comments per day since joining). I love how directly your philosophy of the distributed governance of the Fediverse aligns with my own. Nowhere else I've explored in the Fediverse have I seen this kind of deep shared understanding that the Fediverse is not a pooled cluster of compute resources, but instead a loosely associated grouping of self-governing online gathering places.

Keep being great. I have high confidence in this instance.

[–] [email protected] 1 points 1 year ago

I've seen a couple of really ugly comments recently where a mod had replied, and I had to click on the person (wanting to block them) to realize they had been banned. I really hope a future Lemmy update shows very clearly when that happens, because right now it just looks like we're leaving the comment up. Leaving the comment up but showing the user as banned would be a relatively okay middle ground, I think.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (2 children)

I already see Beehaw as a sanitized space, to be honest. It was the first instance I had signed up for, but I switched almost immediately due to the lack of content and constant defense of censorship. I can sympathize with people who may want a safe space of sorts, but a safe space is just an echo chamber, the same way that the right has created communities where no one can challenge their deranged views.

90% of posts I've seen in Beehaw have devolved into arguments of equity where everyone must take in every advantage or disadvantage that every marginalized group has ever experienced and factor that into their position, or they're guilty of posting from a "white" point of view, or else disenfranchising every group of minorities. Not to mention that thread about Affirmative Action, in which the comments seemed to espouse a purely Black point of view, not taking into account how it may have a positive effect on Asian admissions, and completely ignoring the discussion of how admissions should be merit-based no matter what (even if that means all of our ivy-league colleges are filled with Asian students, who historically place a much higher importance on education than the rest of the world).

I don't have high hopes for any sort of meaningful discussion happening here.

[–] [email protected] 2 points 1 year ago

Feel free to point out the points of view that you feel are missing. Though, if you don't have hope for meaningful discussion, consider simply leaving.

load more comments (1 reply)