Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Thank you for the tip. I will look into Xeon-D integrated motherboards. I won't be running very heavy loads; the one exception that might be heavy is a Suricata instance as an IDS/traffic analyser, and I'd love suggestions for something lighter on compute. The idea of training ML models was just a remote possibility.
My apologies, I kept saying iLO/iDRAC when I meant IPMI.
Why do you suggest having separate devices for storage/compute?
My idea was to run FreeBSD on a ZFS mirror of NVMe drives as the base, and run VMs/jails on a pool of SATA SSDs. These would exist alongside the HDDs but would otherwise not affect their functioning. In this scenario, how does having two machines make my infrastructure more reliable, other than in the case where FreeBSD doesn't run as intended?
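For reference, here's a rough sketch of the layout I had in mind, written as a script that only prints the zpool/zfs commands rather than running them; the pool names, dataset names, and device nodes (nda0/nda1 for the NVMe mirror, ada0/ada1 for the SATA SSDs) are placeholders, not my actual hardware:

```python
#!/usr/bin/env python3
"""Sketch of the intended two-pool ZFS layout on FreeBSD.

Only prints the commands instead of executing them; all device nodes,
pool names, and dataset names are placeholders.
"""

# System pool: mirrored NVMe drives that FreeBSD itself runs from.
system_pool = ["zpool", "create", "system",
               "mirror", "/dev/nda0", "/dev/nda1"]

# VM/jail pool: mirrored SATA SSDs, separate from the HDD data pool.
jail_pool = ["zpool", "create", "fast",
             "mirror", "/dev/ada0", "/dev/ada1"]

# One dataset per jail/VM so each can be snapshotted independently
# (-p creates the parent "fast/jails" dataset as needed).
datasets = [["zfs", "create", "-p", f"fast/jails/{name}"]
            for name in ("dns", "ids", "git")]

for cmd in [system_pool, jail_pool, *datasets]:
    print(" ".join(cmd))
```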
Have you had instances of memory corruption because you didn't use ECC? I was under the impression from r/selfhosted that this problem was blown out of proportion.
The reason I mentioned the E-key slot is that, this way, I don't have to use a PCIe slot for the adapter and can keep it free for something else. I have no need for 10GbE.
Thanks!
Because storage never needs to be upgraded beyond drive capacities, unless you need a bunch of NVMe storage, which requires more PCIe lanes. The only reason you should have to change a board or CPU in a storage server is if it dies. If you need a new piece of hardware for its new features, it's much easier to upgrade a different system than to take your storage offline to do it. Whatever GPU you put in there now is going to be dated in a couple of years when you may want to upgrade.
No, because I don't run big storage pools on desktop hardware. You may be able to run non-ECC memory for a long time and not get any data corruption, but that doesn't mean you won't. Also, it's not always obvious when there's corruption, especially in older data.
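To make "not obvious" concrete: unless something periodically re-reads old files and checks them against known-good checksums (which is what a ZFS scrub does for you), silent corruption just sits there until you next open the file. Here's a rough sketch of the manual equivalent, where the manifest filename and target directory are placeholders:

```python
#!/usr/bin/env python3
"""Record SHA-256 hashes for a directory tree, then re-verify them later.

A manual stand-in for what ZFS scrubs do automatically; the manifest
filename and target directory are placeholders.
"""
import hashlib
import json
import sys
from pathlib import Path

MANIFEST = Path("manifest.json")

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root: Path) -> None:
    # Hash every file once and store the result.
    hashes = {str(p): sha256(p) for p in root.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(hashes, indent=2))

def verify() -> None:
    # Re-hash later; any mismatch is either a deliberate edit or silent corruption.
    old = json.loads(MANIFEST.read_text())
    for name, digest in old.items():
        p = Path(name)
        if not p.is_file():
            print(f"MISSING  {name}")
        elif sha256(p) != digest:
            print(f"CHANGED  {name}")

if __name__ == "__main__":
    # usage: script.py snapshot /data   or   script.py verify
    if sys.argv[1] == "snapshot":
        snapshot(Path(sys.argv[2]))
    else:
        verify()
```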
What are you going to do with those two x1 slots? They're really not good for anything other than USB and 1Gbit (maybe 2.5Gbit?) networking, or maybe a sound card. Those M.2 adapters are better suited to mini PCs that don't have any other PCIe expansion options. Not saying you can't or shouldn't do it, but why, especially when 10Gbit options are much cheaper if you buy used?
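For the bandwidth side of that, here's a rough back-of-the-envelope sketch (line-encoding overhead included, protocol overhead ignored, so real NIC throughput lands a bit lower):

```python
#!/usr/bin/env python3
"""Rough usable bandwidth of a single PCIe lane per generation.

Raw transfer rate times line-encoding efficiency; protocol overhead is
ignored, so real-world NIC throughput will be somewhat lower.
"""

# (slot, raw GT/s per lane, encoding efficiency)
LANES = [
    ("PCIe 2.0 x1", 5.0, 8 / 10),     # 8b/10b encoding
    ("PCIe 3.0 x1", 8.0, 128 / 130),  # 128b/130b encoding
    ("PCIe 4.0 x1", 16.0, 128 / 130),
]

for name, gts, eff in LANES:
    print(f"{name}: ~{gts * eff:.1f} Gbit/s usable")

# Prints roughly:
#   PCIe 2.0 x1: ~4.0 Gbit/s usable
#   PCIe 3.0 x1: ~7.9 Gbit/s usable
#   PCIe 4.0 x1: ~15.8 Gbit/s usable
```

So 1Gbit and 2.5Gbit NICs fit comfortably in any x1 slot, while a 10Gbit card can't reach line rate on a Gen2 or Gen3 x1 link.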
Thanks for your comment. I'll keep this in mind :)