I was personally thinking more along the lines of having a load balancer whose sole job is to route users to one of a set of possible instances (which could all be administered by the same person, so that you're still joining the instance "group" that you want). The load balancer would route someone the first time they land on the page and also handle logins. That's it.
I'm assuming that the servers we're talking about are single servers, because that's how things sound. I'm personally only used to developing servers that take the "many servers behind a load balancer" approach. While distributed databases can certainly make that easier, in the absence of support for them you could always run the backends as entirely separate servers, with the load balancer just serving to pick the backend. So you'd have a lemmy1.world and so on.
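As a rough sketch of that idea (not Lemmy's actual architecture, and with made-up instance names), the entry point could do nothing but redirect a first-time visitor to one of several fully independent backends:

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical pool of independent backends; each would be a separate server
# with its own database. Names are invented for illustration only.
BACKENDS = ["https://lemmy1.world", "https://lemmy2.world", "https://lemmy3.world"]

class EntryPoint(BaseHTTPRequestHandler):
    """Tiny "load balancer" that only redirects first-time visitors to a backend."""

    def do_GET(self):
        # Pick any backend and send the browser there; after this, the entry
        # point is out of the picture entirely.
        target = random.choice(BACKENDS)
        self.send_response(302)
        self.send_header("Location", target + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), EntryPoint).serve_forever()
```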
Of course, for all I know, maybe this is already the architecture of a Lemmy instance. I've never checked. Even with a good architecture, scaling can be difficult. An unnoticeable performance issue in a dev environment can be a massive bottleneck when you have tens of thousands of concurrent users.
There was a post about it. They're running a number of instances of the frontend and backend containers behind nginx, which they're using as a load balancer.
Do you (or another commenter) have a link to that post?
https://lemmy.world/post/920294
Talked about in the solutions section
I'm not sure I'm following.
Wouldn't this load balancer be swapping the user's current instance? [email protected] may suddenly become [email protected]?
Or more like multiple servers within the same umbrella instance? [email protected], [email protected], [email protected], [email protected].
Apologies, while I think of myself as fairly tech savvy, development and networking are still a bit out of reach.
This is what I was originally picturing, so that logged-in users would be browsing on pretty much entirely separate instances (so no single server has to carry as much of the load).
I hadn't really decided how I best liked the idea of handling logins, since there are so many possible options. It could be that users would just have to know their server (so you'd have to sign in as [email protected]). Or the load balancer could maintain a simple store mapping users/emails to instances to avoid that. Or, at the cost of extra complexity (yay), you could replicate the user across all the instances but only make a single instance active for that user at a time (that's a pretty common technique with the systems I work on, where servers are strongly coupled to some range of resources to maximize efficiency).
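For the middle option, the store could be as simple as a lookup from account name to home instance, consulted at login time. A minimal sketch (accounts, instances, and function names are all made up, not Lemmy's actual API):

```python
# Hypothetical mapping kept by the load balancer: account -> home instance.
USER_HOME = {
    "alice": "https://lemmy1.world",
    "bob": "https://lemmy2.world",
}

DEFAULT_INSTANCE = "https://lemmy1.world"

def login_target(username: str) -> str:
    """Return the instance that should handle this user's login.

    Unknown users fall back to a default here; in practice you'd assign them
    to the least-loaded backend and record the choice.
    """
    return USER_HOME.get(username, DEFAULT_INSTANCE)

if __name__ == "__main__":
    print(login_target("alice"))  # known user: routed to their home instance
    print(login_target("carol"))  # unknown user: falls back to the default
```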
I noticed you talked about the load balancer being a person. Sounds like it'd be better if it were a bot: it just sees which pool is currently the emptiest and puts them there, right?
Although you seem to be suggesting live instance swapping, which might be possible in the future. Right now it appears to be tied to registration.
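If the balancer were a bot, "put them in the emptiest pool" could just mean picking whichever backend currently has the fewest users at registration time. A toy sketch with invented instance names and numbers:

```python
# Hypothetical user counts per backend at the moment someone registers.
POOL_SIZES = {
    "https://lemmy1.world": 41_000,
    "https://lemmy2.world": 38_500,
    "https://lemmy3.world": 47_200,
}

def emptiest_pool(pools: dict[str, int]) -> str:
    """Pick the backend with the fewest users for a new registration."""
    return min(pools, key=pools.get)

if __name__ == "__main__":
    print("New signup goes to", emptiest_pool(POOL_SIZES))
```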