this post was submitted on 02 Jul 2023
61 points (93.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.



I hope this post is not too off topic. I thought it would be nice to collect the addresses of all the small self-hosted Lemmy instances (1–5 users).

[–] [email protected] 1 points 1 year ago (1 children)

How do you handle the sled state for pictrs with 2 nodes? I've been having some trouble with it.

[–] lemmy 2 points 1 year ago (1 children)

I have only one pictrs container running (with no scaling) and am using Longhorn for storage, so if the pictrs container switches nodes, Longhorn handles moving the volume for me.
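A minimal sketch of what that storage setup might look like: a PersistentVolumeClaim backed by a Longhorn StorageClass, so the volume follows the single pictrs pod to whichever node it lands on. The claim name, class name, and size here are illustrative, not taken from the poster's manifests:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pictrs-data        # name illustrative
spec:
  accessModes:
    - ReadWriteOnce        # one pod at a time; Longhorn detaches/reattaches on node change
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi        # size illustrative
```

`ReadWriteOnce` is the important bit: sled is an embedded database, so only one pictrs instance can safely hold the volume at a time.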

[–] [email protected] 1 points 1 year ago (1 children)

I see, thanks. What volume(s) are you persisting that way exactly? I mean the internal path that pictrs is using.

[–] lemmy 2 points 1 year ago (1 children)

The internal path I'm persisting is /mnt, but I'm also running an older version of pictrs (0.3.1). I think the newer version uses a different path.

I also needed to add the following for the pictrs container to work correctly.

  securityContext:
    runAsUser: 991
    runAsGroup: 991
    fsGroup: 991
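For context, here's roughly where that securityContext and the /mnt mount would sit in a Kubernetes Deployment. This is a sketch, not the poster's actual manifest; the image tag and the `pictrs-data` claim name are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pictrs
spec:
  replicas: 1              # single replica: sled's embedded state can't be shared
  strategy:
    type: Recreate         # avoid two pods holding the sled DB during a rollout
  selector:
    matchLabels:
      app: pictrs
  template:
    metadata:
      labels:
        app: pictrs
    spec:
      securityContext:     # pictrs runs as uid/gid 991 inside the image
        runAsUser: 991
        runAsGroup: 991
        fsGroup: 991       # makes the mounted volume group-writable for 991
      containers:
        - name: pictrs
          image: asonix/pictrs:0.3.1   # image/tag illustrative
          volumeMounts:
            - name: data
              mountPath: /mnt          # pictrs 0.3.x keeps sled data and files under /mnt
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pictrs-data     # hypothetical PVC backed by Longhorn
```

`strategy: Recreate` matters here: the default RollingUpdate would briefly run old and new pods side by side, and two pods cannot open the same sled database.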
[–] [email protected] 2 points 1 year ago