this post was submitted on 28 Aug 2023
1743 points (98.0% liked)

Lemmy.World Announcements


Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do: since we changed our registration policy, they just post from another instance.

We keep working on a solution; we have a few things in the works, but those won't help us right now.

Thank you for your understanding, and apologies to our users, moderators, and the admins of other instances who had to deal with this.

Edit: @[email protected], the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators, and if it hadn't been his community, it would have been another one. It is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what's next very soon.

Edit 2: removed that bit about the moderator tools. That came out a bit harsher than we meant it. It's been a long day, and having to deal with this kind of stuff got some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn't the first time we've felt helpless. Anyway, I hope we can announce something more positive soon.

[โ€“] [email protected] 36 points 1 year ago* (last edited 1 year ago) (2 children)

This is one of the few things Reddit handles better: as a centralized entity, it has a dedicated workforce filtering out this content. It's a shame it has to be this way, but I understand why it has to be done.

[โ€“] [email protected] 19 points 1 year ago (5 children)

So, Mastodon has this same problem?

[โ€“] [email protected] 21 points 1 year ago (1 children)

Pretty much. I recently had my Mastodon feed spammed with racist, homophobic, and gore-filled posts just because they were posted with a list of unrelated hashtags. You could keep blocking the poster or the instance, but they would pop back up from another instance or with another account. It eventually stopped, but I'm sure it'll happen again. You're apparently able to filter out certain offensive terms, but I think you have to enter the terms manually yourself.

[โ€“] [email protected] 13 points 1 year ago

Twitter had that problem in the beginning; people forget that. I've seen some shitty stuff on Reddit as well and reported it. It's a problem everywhere.

[โ€“] [email protected] 12 points 1 year ago

There have been issues on the larger instances with slow or unresponsive moderation, leading to occasional bursts of bot activity.

[โ€“] [email protected] 8 points 1 year ago

Pretty sure it does, actually

[โ€“] [email protected] 2 points 1 year ago

I don't use it, so I can't answer that.

[โ€“] [email protected] 1 points 1 year ago

Yep. It's why I curate my feed very carefully and am very quick with the "block" button.

[โ€“] [email protected] 18 points 1 year ago (2 children)

Someone has never heard of /r/jailbait

[โ€“] [email protected] 6 points 1 year ago* (last edited 1 year ago)

That's because Reddit chose to leave it up until the media reported on it, though.

That said, it's really hard to protect against a dedicated, targeted attack. E.g., things like captchas can make it harder to create accounts, but think about how fast you could create accounts manually if you wanted to. You don't need thousands of accounts to cause mayhem; even a few dozen can cause serious problems. I think a lot of the internet depends on the general goodwill of most users, plus the threat of legal action if attackers get caught (but that basically requires depending on police, and we know police aren't dependable).

One thing Reddit had that I'm not sure Lemmy does (I've never heard it mentioned) is the option to require all posts and comments to be approved by a mod before they're visible. That might even have just been an automod thing combined with how Reddit let mods hide and unhide comments. But even if they were to use that, it's not fair for volunteer mods to have to deal with it. It's also so much work. You can't just approve posts, because attackers will use comments. And you have to approve edits, or attackers will post something innocent and then edit it to be malicious. And even without an edit, they can link to an image and then swap the file itself for a different one (checksums could prevent that, but it's more work and a constant battle against malice).
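The checksum idea mentioned above can be sketched briefly. This is a hypothetical illustration, not any existing Lemmy or Reddit feature: record a digest of the linked file at approval time, then re-fetch and compare later, so a silently swapped file is detected. The function names here are made up for the example.

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()


def still_matches(current: bytes, recorded: str) -> bool:
    """Check whether a re-fetched file still matches the approved digest."""
    return sha256_digest(current) == recorded


# At approval time, a mod tool could store the digest of the linked image.
approved = sha256_digest(b"original image bytes")

# Later, the file behind the link is fetched again and compared;
# a mismatch means it was swapped after approval.
print(still_matches(b"original image bytes", approved))  # True
print(still_matches(b"swapped image bytes", approved))   # False
```

The weakness is exactly what the comment notes: every re-check costs a fetch and a hash, so it only raises the attacker's cost rather than eliminating the attack.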

[โ€“] [email protected] 4 points 1 year ago

I mean, that's reddit prehistory at this point.