I'd check for high I/O wait, especially if all of your VMs are on HDDs.
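For reference, a quick way to confirm whether I/O wait is the culprit, assuming a Debian-based Proxmox host with the sysstat package installed:

```
vmstat 1      # "wa" column = % of CPU time stalled waiting on disk I/O
iostat -x 2   # per-device stats; high "await" and "%util" point at a slow disk
top           # "wa" in the %Cpu(s) line gives the same signal at a glance
```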
One of the solutions I found for this issue was to run multiple DNS servers. I solved it by buying a Raspberry Pi Zero W and running a second, small Pi-hole instance on it. I made sure the Pi Zero W is plugged into a separate circuit in my home.
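For anyone copying this setup, the simplest way to get clients to use both Pi-holes is to advertise both via DHCP. A minimal dnsmasq sketch, with hypothetical LAN addresses:

```
# /etc/dnsmasq.d/redundant-dns.conf  (addresses are hypothetical)
# Hand out both Pi-hole instances so DNS survives either box going down.
dhcp-option=option:dns-server,192.168.1.10,192.168.1.11
```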
Good point. I just checked, and streaming something to my TV causes I/O delay to spike to around 70%. I'm also wondering if routing my Jellyfin traffic (and some other things) through NGINX (also hosted on Proxmox) has something to do with it. Maybe I need to allocate more resources to NGINX?
The system running Proxmox has a couple of Samsung Evo 980s in it, so I don't think they would be the issue.
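For context, the NGINX hop being discussed is roughly this shape; a minimal sketch with a hypothetical hostname and backend address (8096 is Jellyfin's default HTTP port):

```
server {
    listen 80;
    server_name jellyfin.example.com;          # hypothetical hostname

    location / {
        proxy_pass http://192.168.1.20:8096;   # hypothetical Jellyfin backend
        proxy_set_header Host $host;
        # Jellyfin uses websockets for playback/session signalling
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

NGINX itself is rarely the bottleneck for proxied streams, but an extra hop through another guest on the same disks adds I/O and context-switch overhead worth ruling out.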
Lemme know if you need some remote troubleshooting; if schedules permit, we can do a screen share.
Very nice of you to offer. I made a few changes: routing my problem Jellyfin client directly to the Jellyfin server to cut out the NGINX hop, and limiting that client's bandwidth in case the line is getting saturated.
I'll try to report back if there are any updates.
Hey, yeah, no stress!
Just lemme know if you want someone to brainstorm with.
I had this issue when I used Kubernetes; SATA SSDs can't keep up. I'm not sure what the Evo 980 is or what it's rated for, but I'd suggest shutting down all container I/O and running a benchmark with fio.
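If it helps, a rough fio invocation for that kind of baseline (the test path is hypothetical; run it with the containers stopped so nothing else is hitting the disk):

```
# 4k random read/write with direct I/O for 60s -- a worst-case baseline
fio --name=baseline --filename=/tmp/fio.test --size=4G \
    --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
rm /tmp/fio.test   # remove the test file afterwards
```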
My current setup is Proxmox, spinning rust (HDDs) configured in RAID 5 on a NAS, and a Jellyfin container.
All Jellyfin container transcoding and cache is dumped on a WD SN750 NVMe, while all media are stored on the NAS (max bandwidth is 150 MB/s).
You can monitor the I/O using iostat once you've done a benchmark.
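For anyone wanting to replicate that split, a hedged sketch of the same layout assuming a Docker-based Jellyfin container (all paths are hypothetical):

```
# Config and transcode cache on local NVMe; media read-only from the NAS mount
docker run -d --name jellyfin \
  -v /mnt/nvme/jellyfin/config:/config \
  -v /mnt/nvme/jellyfin/cache:/cache \
  -v /mnt/nas/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
```

The point of the layout is that transcode writes (the heaviest I/O) never touch the NAS or the VM boot disks.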