this post was submitted on 11 Apr 2024
58 points (98.3% liked)

/0


Meta community. Discuss this Lemmy instance or Lemmy in general.


founded 1 year ago

/c/[email protected] is the second-biggest community on Lemmy.World, and yet on /0 there is nothing newer than two days old.

/c/[email protected] has two posts from today, but based on the vote counts, I think it's only showing votes from this instance.

[–] [email protected] 21 points 7 months ago (1 children)

I think the real problem is that lemmy.world is getting too big. They are by far the largest instance by users, content, and post volume. The federation bandwidth requirements (and front-end serving requirements) have got to be insane with 7,200 people actively posting and thousands more federating in. It exposes all the cracks that Lemmy inevitably has in its underlying data handling.

[–] [email protected] 24 points 7 months ago* (last edited 7 months ago) (4 children)

It's more to do with how Lemmy itself, as a platform, handles federation than with bandwidth. Basically, there is only one channel between any two instances, it's serial, and each step requires multiple handshakes to complete. Add in geographic distance making those handshakes take a significant fraction of a second to complete, and you end up with a single channel that gets flooded.

Blahaj.zone is 1.3 million activities behind lemmy.world, for example, but we're not behind on any other instance, because those channels don't hit capacity. Now, if we could use multiple channels at once to talk to lemmy.world, we wouldn't have a problem, but Lemmy isn't built for that at the moment.
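To make the bottleneck concrete, here's a minimal Python sketch of the arithmetic, assuming a hypothetical ~300 ms round trip per delivered activity (the thread doesn't give the real per-delivery cost):

```python
# Back-of-envelope model of a serial federation channel. The latency and
# traffic figures are illustrative assumptions, not measurements.

def max_activities_per_day(round_trip_seconds: float) -> int:
    """A serial channel delivers one activity per round trip."""
    return int(86_400 / round_trip_seconds)

def backlog_after(days: int, generated_per_day: int, round_trip_seconds: float) -> int:
    """How far a receiving instance falls behind when traffic exceeds the ceiling."""
    deliverable = max_activities_per_day(round_trip_seconds)
    return max(0, (generated_per_day - deliverable) * days)

print(max_activities_per_day(0.3))     # 288000 activities/day ceiling
print(backlog_after(7, 400_000, 0.3))  # 784000 activities behind after a week
```

Once the sending instance generates more activities per day than that ceiling, the queue only ever grows, which is how a million-plus activity backlog accumulates against one busy instance while quieter peers stay current.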

[–] [email protected] 7 points 7 months ago (1 children)

Are you in the Matrix general chat?

[–] [email protected] 6 points 7 months ago (1 children)

I'm in the lemmy.world admin back channel, but that's it as far as lemmy.world goes. @ada:chat.blahaj.zone

[–] [email protected] 5 points 7 months ago (1 children)

I left that channel when one of the Lemmy devs decided to take a swipe at our instance for the way we handled transphobia on another instance.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

Lemmy.world is really too big. We just moved [email protected] to [email protected] to try to help with those issues.

Hopefully other communities will do the same, but the issue doesn't seem to be very well known.

By the way, thank you for your post about centralization of communities a while back, I used it in the post to explain to the community why we were moving: https://lemmy.blahaj.zone/post/10810804?scrollToComments=true

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

> and you end up with a single channel that gets flooded.

The idea behind federation was that you don't get single instances so large that the single sync channel gets flooded. You're meant to have more, smaller instances spread out that then federate with one another like a mesh: only one copy of the data needs to be sent to each instance to serve additional thousands of users without further load.
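That "one copy per instance" fan-out can be sketched in a few lines of Python (the instance sizes are made up for illustration):

```python
# Mesh federation fan-out: one federated copy per peer instance serves all
# of that instance's users. Instance sizes below are hypothetical examples.

def sends_required(instance_user_counts: list[int]) -> int:
    # The sender delivers one copy per peer instance, not one per user.
    return len(instance_user_counts)

def users_served(instance_user_counts: list[int]) -> int:
    return sum(instance_user_counts)

sizes = [5000, 1200, 300, 300]
print(sends_required(sizes))  # 4 deliveries...
print(users_served(sizes))    # ...reach 6800 users
```

The load on the sender scales with the number of peer instances, not the number of readers, which is why many mid-sized instances spread the work better than one giant one.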

That said, realistically the fediverse needs single instances with 10-30k users to really thrive, and L.W hasn't even hit 10k actives... so there do still need to be some improvements in the backend.

Having the ability to "merge content" from multiple instance communities would help a lot too. People congregate on L.W, both from inside and outside the instance, because it's big and has the most content. But if you have multiple communities (e.g., search "memes" and see how many different memes communities there are), the ones with less content get ignored. Being able to merge those from a user's viewing perspective, so you would just have one "memes" group that sees federated content from all the memes communities, could reduce the need for L.W to be all-encompassing.
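As a rough sketch of what that could look like on the client side (the `Post` shape and the community names here are hypothetical, not Lemmy's actual API):

```python
# Sketch of a client-side "merged community": one feed combining posts
# from several federated communities on the same topic, newest first.

from dataclasses import dataclass
from operator import attrgetter

@dataclass
class Post:
    community: str   # e.g. "memes@lemmy.world" (hypothetical names)
    title: str
    published: int   # unix timestamp

def merged_feed(feeds: dict[str, list[Post]]) -> list[Post]:
    """Flatten per-community feeds into one list, sorted newest first."""
    merged = [post for posts in feeds.values() for post in posts]
    return sorted(merged, key=attrgetter("published"), reverse=True)

feeds = {
    "memes@lemmy.world": [Post("memes@lemmy.world", "A", 300)],
    "memes@lemmy.blahaj.zone": [Post("memes@lemmy.blahaj.zone", "B", 500)],
}
print([p.title for p in merged_feed(feeds)])  # ['B', 'A']
```

From the reader's perspective the small communities stop being invisible, since their posts interleave with the big one's in a single view.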

[–] [email protected] 0 points 7 months ago (1 children)

Couldn't they batch them up? I'm not a technical person, but this seems solvable.

[–] [email protected] 7 points 7 months ago

Yep. Batching and/or multiple parallel channels per remote instance would solve it.
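A quick model of why that helps, again assuming a hypothetical ~300 ms per federation request:

```python
# Simplified throughput model: round-trip latency is paid once per request,
# so sending N activities per batch over C parallel channels multiplies a
# serial channel's throughput by roughly N * C. (Ignores processing cost.)

def throughput_per_day(round_trip_seconds: float, batch_size: int, channels: int) -> int:
    requests_per_day = 86_400 / round_trip_seconds
    return int(requests_per_day * batch_size * channels)

print(throughput_per_day(0.3, 1, 1))   # serial baseline: 288000 activities/day
print(throughput_per_day(0.3, 50, 1))  # batches of 50: 14400000
print(throughput_per_day(0.3, 1, 8))   # 8 parallel channels: 2304000
```

Either approach alone lifts the ceiling well past what one busy instance generates; combined, latency stops being the limiting factor.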