this post was submitted on 05 Jul 2023
43 points (68.7% liked)


The fediverse is discussing whether we should defederate from Meta's new Threads app. Here's why I probably won't (for now).

(Federation between Plume and my Lemmy instance doesn't work correctly at the moment; otherwise I would have made this a proper crosspost.)

[–] [email protected] 9 points 1 year ago (2 children)

Defederating means that people who want to connect with someone on the platform are forced to install it. Fuck that. Not defederating gives people an alternative and shows them that using the fediverse means they don't miss out on anything, regardless of platform.

If I want to access Threads content and can do it with my existing fediverse account, without installing their app and giving them access to my heart rate, microphone, and bowel motion stats, then frankly that's a win for us.

[–] [email protected] 9 points 1 year ago (1 child)

Here's my problem/concern: have you read their privacy policy? I want no part of that. Would being federated with them mean that they get to siphon up all of my data too? If so, I don't think defederating goes far enough...

[–] [email protected] -1 points 1 year ago

They can siphon your data no matter what you do. As I've said in other comments, everything on the internet has been crawled and scraped for literal decades. This post is already indexed by a bunch of different search engines, and most likely by other scrapers that harvest our data for AI or ad profiles. And you can do nothing about it without hurting your legitimate audience. Nothing at all.

There's robots.txt as a mechanism to tell a crawler what it should or shouldn't index, but that's just asking nicely (it mostly exists to keep search engines from indexing pages that don't contain actual content). You could in theory block certain IP ranges or user agents, but those change faster than you can identify them. This dilemma is the whole reason Twitter implemented rate limiting: they wanted to protect their content from scrapers. See where it got them.

Most important rule of the internet: if you don't want something archived forever, don't post it!
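To illustrate the "asking nicely" point, here is a minimal Python sketch using the standard library's urllib.robotparser (the rules, bot names, and instance URL are invented for the example): a crawler only respects robots.txt if it chooses to check it.

```python
# Minimal sketch: robots.txt is advisory, not enforced.
# The rules and URLs below are made up for illustration.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite crawler asks before fetching:
print(rp.can_fetch("GPTBot", "https://example.social/post/123"))     # False
print(rp.can_fetch("RandomBot", "https://example.social/post/123"))  # True

# An impolite one simply never calls can_fetch(); the page is public either way.
```

Nothing in the protocol enforces the answer: a scraper that skips the check, or sends a different User-Agent header, gets the same public page as everyone else.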

[–] [email protected] 3 points 1 year ago

Maybe a separate fediverse instance that federates with Meta, while the rest of the fediverse stays defederated from Facebook, would be a better way to go about this, if we really must connect to them. Cut them off from the main fediverse, but still make it possible to interact with them without being on their platform.