Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
I'm a bit torn, because architecturally/conceptually the split that LVM makes is the correct way: have a generic layer that can bundle multiple block devices to look like one, and let any old filesystem work on top of that. It's neat, it's clean, it's unix-y.
But then I see what ZFS (and btrfs, though I don't use that personally) does while "breaking" that neat separation, and it's truly impressive. Sometimes tight integration between layers has serious advantages, and the neat abstraction boundaries don't work quite as well.
Care to elaborate on these ZFS features?
ZFS combines the features of something like LVM (i.e. spanning multiple devices, caching, redundancy, ...) with the functions of a traditional filesystem (think ext4 or similar).
Due to that combination it can tightly integrate the two systems and not treat the "block level" as an opaque layer. For example, each data block in ZFS is stored with a checksum, so data corruption can be detected. If a block is stored on multiple devices (due to a mirroring setup or RAID-Z), then when the filesystem layer detects such corruption it will read the other copies and rewrite the "correct" version to repair the damage.
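To make the self-healing idea concrete, here's a minimal toy simulation in Python. The "devices" are just dicts mapping block numbers to records, and the checksum is stored alongside the data (ZFS actually keeps it in the parent block pointer, but the effect is the same). All names here are made up for illustration; this is not real ZFS code.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def write_block(mirror, blkno, data):
    # Mirrored write: every device gets its own copy of data + checksum.
    record = {"data": data, "sum": checksum(data)}
    for device in mirror:
        device[blkno] = dict(record)

def read_block(mirror, blkno):
    for device in mirror:
        rec = device[blkno]
        if checksum(rec["data"]) == rec["sum"]:
            # Good copy found: repair any sibling copies that fail verification.
            for other in mirror:
                if checksum(other[blkno]["data"]) != other[blkno]["sum"]:
                    other[blkno] = dict(rec)   # "self-heal" from the good copy
            return rec["data"]
    raise IOError(f"block {blkno}: all mirror copies corrupt")

# Demo: silently corrupt one side of the mirror, then read through the "filesystem".
mirror = [{}, {}]
write_block(mirror, 0, b"important data")
mirror[0][0]["data"] = b"IMPORTANT data"          # bit rot on device 0

assert read_block(mirror, 0) == b"important data"  # served from the good copy
assert mirror[0][0]["data"] == b"important data"   # ...and device 0 was repaired
```

The key point is that the read path can verify each copy independently, because the checksum travels with the block rather than living only in some separate layer.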
First off, most filesystems (unfortunately, and almost surprisingly) don't checksum their data at all: when the HDD returns garbage, they tend not to detect the corruption (unless the corruption is in their metadata, in which case they often fail badly, e.g. with a crash).
Second: if the duplication were handled by something like LVM, it couldn't automatically repair errors in a mirror setup, because LVM would have no idea which of the copies (if any) is uncorrupted.
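The same toy model shows why. A checksum-less mirror could at best notice that its two copies disagree, but it has no ground truth to decide which one is right (and real LVM doesn't even compare copies on a normal read; it just returns whichever leg it happens to read from). A hypothetical sketch, with made-up names:

```python
def read_checksumless_mirror(mirror, blkno):
    # A mirror without checksums: even if we go out of our way to compare
    # all copies, a mismatch is undecidable -- either copy could be the
    # corrupt one.
    copies = [device[blkno] for device in mirror]
    if len(set(copies)) > 1:
        raise RuntimeError(f"block {blkno}: mirror copies differ, cannot pick")
    return copies[0]

mirror = [{0: b"important data"}, {0: b"important data"}]
mirror[0][0] = b"IMPORTANT data"     # silent corruption on one device

try:
    read_checksumless_mirror(mirror, 0)
except RuntimeError as e:
    print(e)   # block 0: mirror copies differ, cannot pick
```

With per-block checksums (as in the ZFS case) the tie is broken: the copy whose checksum verifies wins, and the other gets rewritten.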
ZFS has many other useful (and some arcane) features, but that's the most important one related to its block-layer "LVM replacement".
Very interesting, thanks for the message. I might use it in my next NAS, but my workstation is staying on regular LVM; too much hassle to change, probably...
ZFS is nifty and I really like it on my homelab server/NAS. But it is definitely a "sysadmin's filesystem". I probably wouldn't suggest it to anyone just for their workstation, as the learning curve is significant (and you can lock yourself into some bad decisions).