this post was submitted on 25 Jul 2023
121 points (84.2% liked)

Not the best news in this report. We need to find ways to do more.

top 50 comments
[–] [email protected] 49 points 1 year ago* (last edited 1 year ago) (1 children)

Why do they just mention absolute numbers, instead of comparing them to similar platforms? All they said was that there is CSAM on the Fediverse, but that's also true for centralized services and the internet as a whole. The important question is whether there is more or less CSAM on the Fediverse, no?

This makes it look very unscientific to me. The Fediverse might have a CSAM problem, but you wouldn't know it from this study.

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago) (1 children)

The Fediverse also makes it potentially easier to scan for this stuff. You can just connect a new server to the network, and if the material is on federated servers, you can (probably) find it. If it's some private forum or even the dark web, I assume it's a lot more difficult.
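
As a rough sketch of what "connect a server and scan" could look like in practice, the snippet below pulls an instance's public timeline via Mastodon's standard API and checks attached media against a list of known hashes. The instance URL and hash set are placeholders, and real detection pipelines use perceptual hashes (e.g. PhotoDNA via NCMEC) rather than plain SHA-256, which only catches exact byte-for-byte copies.

```python
import hashlib
import requests

INSTANCE = "https://mastodon.example"  # hypothetical instance to audit
KNOWN_HASHES = {"<sha256 of a known-bad file>"}  # placeholder hash database

def scan_public_timeline(instance: str, limit: int = 40) -> list[str]:
    """Return URLs of media attachments whose SHA-256 is in KNOWN_HASHES."""
    statuses = requests.get(
        f"{instance}/api/v1/timelines/public",
        params={"local": "true", "limit": limit},
        timeout=30,
    ).json()

    matches = []
    for status in statuses:
        for media in status.get("media_attachments", []):
            blob = requests.get(media["url"], timeout=30).content
            if hashlib.sha256(blob).hexdigest() in KNOWN_HASHES:
                matches.append(media["url"])
    return matches

if __name__ == "__main__":
    print(scan_public_timeline(INSTANCE))
```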

The other thing is, most regular servers already defederate from suspicious stuff. Pretty much nobody federates with that one shota instance, and they only serve drawn stuff (AFAIK). So I don't know if you can even say servers like that are part of the Fediverse in the first place.

[–] [email protected] 3 points 1 year ago

That's what I thought as well. If the authors of this "study" were able to simply scan for it on the Fediverse, then what's stopping law enforcement units from doing the same? They can literally get a message every time someone posts something on a suspicious instance.

[–] [email protected] 32 points 1 year ago (6 children)

Why would someone downvote this post? We have a problem and it's in our best interest to fix that.

[–] [email protected] 54 points 1 year ago (3 children)

The report (if you can still find a working link) said that the vast majority of material that they found was drawn and animated, and hosted on one Mastodon instance out of Japan, where that shit is still legal.

Every time that little bit of truth comes up, someone reposts the broken link to the study, screaming about how the entire Fediverse is riddled with child porn.

[–] [email protected] 18 points 1 year ago (2 children)

So basically we had a bad apple that was probably already defederated by everyone else.

[–] [email protected] 17 points 1 year ago

More an apple with controversial but not strictly CSAM material, based in a country where its content is legal. Actually, not even an apple; Lemmy and the fediverse aren't an entity. It's an open standard for anyone to use; you don't see the Modern Language Association being blamed for plagiarized essays written in MLA format, or the WHATWG being blamed because illegal sites are written in HTML. So it's not a fair comparison to say that Lemmy/the fediverse are responsible for what people do with their open standard either.

[–] [email protected] 7 points 1 year ago (1 children)

It's Pawoo, formerly Pixiv's own instance, which is infamous for this kind of content, and those are still "just drawings" (unless some artists are using illegal real-life references).

[–] [email protected] 5 points 1 year ago (1 children)

They're using generative AI to create photorealistic renditions now, and it's causing everyone who finds out about it to have a moral crisis.

[–] [email protected] 6 points 1 year ago (1 children)

Well, that's a very different and way more concerning thing...

[–] [email protected] 3 points 1 year ago (1 children)

... I mean ... idk ... If the argument is that the drawn version doesn't harm kids and gives pedos an outlet, is an AI-generated version any different?

[–] [email protected] 7 points 1 year ago (1 children)

imo, the dicey part of the matter is "what amount of the AI's dataset is made up of actual images of children"

[–] [email protected] 2 points 1 year ago

Shit that is a good point.

[–] [email protected] 6 points 1 year ago (1 children)

Here's a link to the report: https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf
It is from 2023-07-24, so there's a considerable chance it is not the one you were thinking about?

[–] [email protected] 14 points 1 year ago (1 children)

Since the release of Stable Diffusion 1.5, there has been a steady increase in the prevalence of Computer-Generated CSAM (CG-CSAM) in online forums, with increasing levels of realism. [17] This content is highly prevalent on the Fediverse, primarily on servers within Japanese jurisdiction. [18] While CSAM is illegal in Japan, its laws exclude computer-generated content as well as manga and anime.

Nope, seems to be the one. They lump the entire Fediverse together, even though most of the shit they found was in Japan.

The report notes 112 non-Japanese items found, which is a problem, but not a world-shaking one. There may also be issues with federation and deletion orders, again a real concern, but not a massive one.

Really, what the report is actually about is the fact that moderation is hard. Bad actors will work around any moderation you put in place, so it's a constant game of whack-a-mole. The report doesn't acknowledge this basic fact, pretends that no one is doing any moderation, and then lumps Japan in on top of it.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (1 children)

I can't seem to find the source for the report about it right now, but there's literal child porn being posted to Instagram. We don't see this kind of alarmist report about it because it is not something new, foreign, and flashy for the general public. All internet platforms are susceptible to this kind of misuse. The question is what moderation tools and strategies are in place to deal with it. Then there's stuff like Tor, where CSAM was used as a basis to discredit the whole technology, and then it turned out that the biggest repository was an FBI honeypot operation.

[–] [email protected] 10 points 1 year ago

Buried in this very report, they note that Instagram and Twitter have vastly more (self-generated) child porn than the Fediverse. But that's deep into section 4, which is on page 8. No one is going to read that far into the report; they might get through the intro, which is all doom and gloom about decentralized content.

[–] [email protected] 3 points 1 year ago

It's 4Chan's "9000 Penises" all over again

[–] [email protected] 22 points 1 year ago* (last edited 1 year ago) (3 children)

The study doesn't compare its findings to any other platform, so we can't really tell whether those numbers are good or bad. They just state the absolute numbers, without going into too much detail about their search process. So no, you can't draw the conclusion that the Fediverse has a CSAM problem, at least not from this study.

Of course that makes you wonder why they bothered to publish such a lackluster and alarmist study.

[–] [email protected] 13 points 1 year ago (3 children)

Because it's another "WON'T SOMEONE THINK OF THE CHILDREN" hysteria bait post.

They found 112 images of cp in the whole Fediverse. That's a very small number. We're doing pretty good.

[–] [email protected] 5 points 1 year ago (1 children)

It is not "in the whole fediverse"; it is out of approximately 325,000 posts analyzed over a two-day period. And that is just for known images that matched a hash.

Quoting the entire paragraph:

Out of approximately 325,000 posts analyzed over a two day period, we detected 112 instances of known CSAM, as well as 554 instances of content identified as sexually explicit with highest confidence by Google SafeSearch in posts that also matched hashtags or keywords commonly used by child exploitation communities. We also found 713 uses of the top 20 CSAM-related hashtags on the Fediverse on posts containing media, as well as 1,217 posts containing no media (the text content of which primarily related to off-site CSAM trading or grooming of minors). From post metadata, we observed the presence of emerging content categories including Computer-Generated CSAM (CG-CSAM) as well as Self-Generated CSAM (SG-CSAM).
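
To make the quoted methodology concrete, here is a rough sketch of how those buckets could be computed. The hash lookup, classifier, and term list are placeholders standing in for a PhotoDNA-style known-hash database, Google SafeSearch, and the report's unpublished hashtag/keyword list; the threshold is illustrative, not taken from the report.

```python
from dataclasses import dataclass, field

# Placeholder watchlist; the report does not publish its hashtag/keyword list.
SUSPECT_TERMS = {"<watchlisted hashtag>", "<watchlisted keyword>"}

@dataclass
class Post:
    text: str
    hashtags: set[str]
    media_hashes: set[str] = field(default_factory=set)

def is_known_csam(media_hash: str) -> bool:
    """Placeholder for a lookup against a PhotoDNA-style known-hash database."""
    return False

def explicitness_score(media_hash: str) -> float:
    """Placeholder for a SafeSearch-style 'sexually explicit' confidence score."""
    return 0.0

def classify(post: Post) -> str:
    """Bucket a post roughly the way the quoted paragraph breaks down its counts."""
    if any(is_known_csam(h) for h in post.media_hashes):
        return "known CSAM (hash match)"
    flagged = (post.hashtags | set(post.text.lower().split())) & SUSPECT_TERMS
    if post.media_hashes and flagged:
        if any(explicitness_score(h) > 0.9 for h in post.media_hashes):
            return "explicit media + suspect terms"
        return "media with suspect terms"
    if flagged:
        return "text-only with suspect terms"
    return "no flags"
```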

[–] [email protected] 6 points 1 year ago

How are the authors distinguishing between posts made by actual pedophiles and posts by law enforcement agencies known to be operating honeypots?

[–] [email protected] 4 points 1 year ago

Because it's literally nothing a normie would read through. And some thought Lemmy had a bad UI.

[–] [email protected] 4 points 1 year ago (1 children)

Because it's talking about a report without linking to any report. That's shouting into the void at best, clickbait at worst.

[–] [email protected] 4 points 1 year ago

I was able to click and access the report fine

[–] [email protected] 4 points 1 year ago

Given new commercial entrants into the Fediverse such as WordPress, Tumblr and Threads, we suggest collaboration among these parties to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem.

Because the solution sucks?

[–] [email protected] 26 points 1 year ago* (last edited 1 year ago) (2 children)

Another "SCREAM" for "BAN FEDIVERSE ITS DANGEROUS!!!!!!!". And then of course its tagged "CSAM" lol. You want a (small) website removed just accuse them of csam. And boom hoster and admins raided.

[–] [email protected] 26 points 1 year ago (1 children)

Better ban school too. I hear that's how kids find drug dealers.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

And don't forget to ban sugar. Because Hitler loved sugar!

[–] [email protected] 4 points 1 year ago (1 children)

And air. I heard Stalin loved air.

[–] [email protected] 3 points 1 year ago

Gods damnit. Everything needs to be banned now!

[–] [email protected] 9 points 1 year ago (2 children)

Not at all. Sensible suggestions.

[–] [email protected] 18 points 1 year ago

This isn't science - it's propaganda.

[–] [email protected] 9 points 1 year ago (1 children)

Is this the same report that came up before, where it turned out Twitter has the exact same issue?

[–] [email protected] 11 points 1 year ago

Or that Reddit has had this issue in spades.

Frankly, that the ability for the Fediverse to cut off problem servers is listed as a drawback and not an advantage is, in my opinion, wrong.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (7 children)

What report?

Edit: https://purl.stanford.edu/vb515nd6874

Somehow the link in the post is broken. Trying to share it copies the link. Weird.

[–] [email protected] 6 points 1 year ago (7 children)

Basically we don't know what they found, because they just looked up hashtags and then didn't look at the results, for ethics reasons. They don't even say which hashtags they looked through.

[–] [email protected] 5 points 1 year ago (3 children)

We do know they only found, what, 112 actual images of CP? That's a very small number. I'd say that paints us in a pretty good light, relatively.

[–] [email protected] 6 points 1 year ago

112 images out of 325,000 scanned over two days is about 0.03%, so we are doing pretty well. With more moderation tools we could keep knocking that number down.
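
For reference, a quick check of that figure using only the two counts quoted from the report:

```python
# Sanity check of the percentage above, using the report's own numbers.
matches, posts = 112, 325_000
print(f"{matches / posts:.4%}")  # 0.0345%, i.e. roughly 0.03%
```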

[–] [email protected] 3 points 1 year ago

yeahhh the free internet is a bit like a wild west. maybe ain't for children. maybe gotta keep em behind more secure digital walls while they're growing before letting em loose. but still gotta teach em media literacy and safe internet practices and such.

but lbr this isn't gonna stop the more persistent kids

[–] [email protected] 3 points 1 year ago

Abstract:

The Fediverse, a decentralized social network with interconnected spaces that are each independently managed with unique rules and cultural norms, has seen a surge in popularity. Decentralization has many potential advantages for users seeking greater choice and control over their data and social preferences, but it also poses significant challenges for online trust and safety.

In this report, Stanford Internet Observatory researchers examine issues with combating child sexual exploitation on decentralized social media with new findings and recommendations to address the prevalence of child safety issues on the Fediverse.
