
I figured most of you could relate to this.

I was updating my Proxmox servers from 7.4 to 8. The first one went without problems. That second one, though... yeah, not so much. I THINK it's GRUB, but I'm not sure yet.

Now my Nextcloud, NAS, main reverse proxy, and half my DNS are down, and there's no time to fix any of it before work. Lovely 🤕 Well, now I know what I'll be doing when I get home.

Out of morbid curiosity, what are some of y'all's self-hosting horror stories?

[–] [email protected] 7 points 1 year ago

I've been carrying an OMV VM since Proxmox 5. During one of the major version upgrades, usrmerge made a mess and forced me to reinstall the boot disk and re-hook everything up; not ideal, but it works. I updated again recently, and my disks started to fall into read-only mode. I tried the usual: rebooting into single-user mode, fsck'ing the volume, remounting, etc., and "hey look, it came back online!"... only for it to go back into read-only mode again. Since it was a virtual disk on a RAID6 array, and nothing else was breaking, it was really boggling my mind. It kept doing that despite still having a couple TB of free space available... or at least so I thought.
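
(If you ever need to check the same thing, this is roughly what I was doing by hand, sketched in Python instead of shell one-liners; it just scans /proc/mounts for anything that has flipped to read-only:)

```python
def read_only_mounts(mounts_file="/proc/mounts"):
    """Return (device, mountpoint) pairs whose mount options include 'ro'."""
    hits = []
    with open(mounts_file) as f:
        for line in f:
            device, mountpoint, fstype, options = line.split()[:4]
            if "ro" in options.split(","):
                hits.append((device, mountpoint))
    return hits

for device, mountpoint in read_only_mounts():
    print(f"{device} is mounted read-only at {mountpoint}")
```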

Turns out:

I had the virtual disk allocated 19TB of my 24TB of available space. The qcow file is lazily allocated, so even though ls showed it as 19TB on disk, it only consumed as much space as the VM had actually written. Usage grew to 16TB, the qcow file tried to write more data, and 16TB is the ext4 file size limit on my system. Oops.
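
If you want to see that gap for yourself, here's a rough Python sketch comparing an image's apparent size with the blocks it actually occupies on disk (the path is just an example, not my actual image):

```python
import os

def image_sizes(path):
    """Compare what ls -l reports vs. what the file actually occupies on disk."""
    st = os.stat(path)
    apparent_bytes = st.st_size        # the size ls -l shows
    actual_bytes = st.st_blocks * 512  # st_blocks is in 512-byte units (POSIX)
    return apparent_bytes, actual_bytes

# Example path only -- substitute your own image.
apparent, actual = image_sizes("/var/lib/vz/images/100/vm-100-disk-0.qcow2")
print(f"apparent: {apparent / 1024**4:.2f} TiB, actually used: {actual / 1024**4:.2f} TiB")
```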

I ended up ordering 3 more drives, expanding to 8x8TB on RAID6 with roughly 48TB of workable space, copied the data out into separate volumes with none of them exceeding 15TB, then finally deleted the old "19TB" volume. Now I have over 25TB of room to grow, and a newfound appreciation for the 16TB limit :)
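
For anyone checking my math, the "roughly 48TB" is just the usual RAID6 formula (two drives' worth of capacity go to parity), e.g.:

```python
def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID6 keeps two drives' worth of capacity for parity."""
    if num_drives < 4:
        raise ValueError("RAID6 needs at least four drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(8, 8))  # 48 TB raw, before filesystem overhead
```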