[–] [email protected] 41 points 3 months ago (16 children)

The WAF (wife acceptance factor) on my household tech is pretty high. That includes Plex.

I have in-house dual/redundant DNS, and my Plex runs at nearly 100% uptime, 24/7/365, on old server hardware. Our living space is far enough away from the servers that the noise isn't really a problem, and I can break most of what I have installed or set up and the internet keeps working, because the DNS is independent and redundant. All of my homelab domains are just stub zones in my main DNS, so everything keeps working if something dies or stops responding.
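
For the curious, the stub zone bit looks roughly like this; a minimal sketch assuming BIND, with a made-up lab domain and made-up addresses for the redundant lab DNS servers:

```
// named.conf on the main DNS server (sketch; names and IPs are placeholders)
zone "lab.example" {
    type stub;                            // keep only the NS records for the zone
    masters { 10.0.0.53; 10.0.0.54; };    // the two redundant homelab DNS servers
    file "stub.lab.example";              // local copy of the stub data
};
```

The nice part is the main server only tracks who is authoritative for the lab domains, so a dead lab box doesn't take normal internet resolution down with it.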

[–] [email protected] 6 points 3 months ago (3 children)

I kinda feel like old server hardware is key here. I have pretty much my whole lab running on an old R730 that I put a bunch of ECC RAM, disks, and a transcode GPU into, and it's been essentially flawless for like 2 years. Plus it has IPMI, which I don't think I could live without now. It replaced a setup that always gave me issues: a bunch of OptiPlexes and white boxes. I still hack on Pis cuz it's fun, but all the core stuff is surplus enterprise.

[–] [email protected] 1 points 3 months ago (2 children)

I recently upgraded my lab; it used to be an R710 and a pair of nodes from a C6100. Because that stuff was so old, I managed to cram all the VMs I was running onto a single FC630 node in a shiny, new-to-me Dell FX2s.

I really want to get a transcoding GPU, but passing one through to a VM has historically been infeasible, and even now it's complicated at the very least... at least for Nvidia GPUs. I've been looking at Intel's discrete GPU lines for the task recently. I'd sure like to grab a Flex 140, but looking at the prices right now, ha, that's not happening anytime soon. With the FX2s I can only install single-slot, half-height cards, so options are limited. Front runners right now are the Nvidia P4 and T4, and the Intel Arc A380 with a modded cooler so it's single slot. My only other option is to find some way to use the existing PCIe interfaces to attach an external GPU, but eGPU enclosures are pretty expensive too, and most don't even come with a GPU.
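
(For anyone who hasn't fought this fight: on a Linux/KVM host the usual first step is binding the card to vfio-pci so the host never claims it. A rough sketch, assuming an Intel host and an Arc A380, whose vendor:device ID should be 8086:56a5; check your own card's ID first:)

```
# find the GPU's vendor:device ID
lspci -nn | grep -i -e vga -e display

# 1. enable the IOMMU on the kernel command line, then reboot:
#    intel_iommu=on iommu=pt

# 2. have vfio-pci claim the GPU before any graphics driver does
echo "options vfio-pci ids=8086:56a5" | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u    # Debian/Ubuntu; use dracut -f on RHEL-likes

# 3. after another reboot, confirm "Kernel driver in use: vfio-pci"
lspci -nnk -d 8086:56a5
```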

I'm trying to stay away from Thunderbolt, so if I go external, I'll probably look at OCuLink or something similar. TB is just way too expensive IMO. I looked into it, and the whole setup (a TB PCIe card, a TB eGPU enclosure, and a GPU) is something like 40-50% more expensive than using a different solution. I'd prefer everything just fit in the server chassis, but then I'm banging my head off of Nvidia or modding Intel Arc cards. None of these options are very appealing.

So CPU transcoding for now. I store all my media as 720p AVC/AAC in an MP4 container, so most streams are direct play, and I did that very much on purpose.
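
If anyone wants to standardize a library the same way, the one-file version is roughly this; just a sketch, and the exact quality settings are a matter of taste:

```
# sketch: convert one file to 720p H.264/AAC in MP4 for direct play
# scale=-2:720 keeps the aspect ratio with an even width; +faststart moves
# the index to the front of the file so streaming can start immediately
ffmpeg -i input.mkv -vf "scale=-2:720" \
    -c:v libx264 -crf 21 -preset slow \
    -c:a aac -b:a 160k \
    -movflags +faststart output.mp4
```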

[–] [email protected] 1 points 3 months ago (1 children)

Nice! That seems like a sweet little server. Direct play is for sure ideal, and if 720p is good enough quality for you, I'm sure it saves a bunch on disk space too.

My setup is an A380 passed through to an Ubuntu 24.04 VM on a TrueNAS CORE host. It was really simple to set up PCIe passthrough; TrueNAS lets you do everything you need through the web GUI, and H.264 and HEVC transcoding worked right out of the box in Jellyfin with the Jellyfin-flavored FFmpeg, if I recall. It also supports AV1 encoding, but I haven't tried that out. It handles like a dozen 4K transcodes at once; they're capable little cards. I think ASRock makes a slot-powered, low-profile, single-slot version.
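
(The sanity check I'd suggest inside the VM before even opening Jellyfin, assuming the Intel media driver and libva-utils are installed:)

```
# confirm the card made it into the guest and VAAPI is alive
ls /dev/dri        # expect card0 and renderD128
vainfo             # should list H.264/HEVC/AV1 profiles for the A380
intel_gpu_top      # from intel-gpu-tools; watch utilization during a transcode
```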

[–] [email protected] 1 points 3 months ago

I'm familiar with the Sparkle Arc cards, not so much ASRock. I'll check it out.

My main motivation for 720p is a combination of not caring about 1080p/4K, space, and bandwidth. I only really get 10 Mbps of upload where I am, and it's basically impossible to get anything faster, so if one person tries to stream 4K, not only are they going to have a bad time, but nobody else is going to be watching anything either.
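
To put rough numbers on it (typical streaming bitrates, not measurements from my setup):

```
upload budget:            10 Mbps
one 4K stream:           ~15-25 Mbps  -> doesn't fit at all
one 1080p stream:         ~5-8 Mbps   -> one viewer, almost no headroom
720p AVC at ~2.5 Mbps:    10 / 2.5 ≈ 4 concurrent direct streams
```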

If I had 4K/1080p content, the server would need to transcode it for most people most of the time anyways, which I'd pay for via my electricity bill, and I'd be footing the bill for more disk storage to keep it around. On top of that, live transcoding is generally not as good as a 2-pass VBR encode run through HandBrake or something.
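
A sketch of that offline 2-pass encode with HandBrakeCLI; the bitrates are just illustrative, and newer HandBrake versions call the flag --multi-pass:

```
# offline 2-pass VBR encode to 720p H.264/AAC (illustrative settings)
HandBrakeCLI -i input.mkv -o output.mp4 \
    -e x264 --vb 2500 --two-pass --turbo \
    -w 1280 -l 720 \
    -E av_aac -B 160 \
    --optimize    # web-optimized MP4, same idea as ffmpeg's +faststart
```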

There's obviously more to it overall, but I'll leave it at that.

Plex has support for hardware transcoding, but the CPU in the server where my VM lives doesn't have a built-in GPU, so I have to add one in. It's part of the reason I moved from the C6100 to an FX2s. The "s" variant of the FX2 has PCIe slots in the back that connect to the hosts. The C6100 had space for a PCIe card, but only one, and given the onboard 2x 1GbE, I'd sooner use it for additional networking. The FX2s has 2x 10GbE, so it's less of a concern to use the PCIe slots for graphics... Also, there are two slots per half-width blade, which is what I have, so I could add two GPUs per host.

I also want to experiment with 3D-accelerated VDI and cluster-hosted gaming (similar to Stadia), in house... For that I need a decent graphics card. The only ones with a good amount of VRAM are the Intel Flex 140 and the Nvidia T4. The Arc A380 is decent, but 6 GB of memory is limiting. The Flex 140 has 12 GB IIRC, and the T4 has 16 GB. It seems like a lot until you split the GPU among a couple of VMs... On the T4 you get either 2x 8 GB VRAM systems or 3x ~5.33 GB VRAM systems... I'd rather have 6 GB per VM as a minimum standard. That means that to have two GPU-enabled systems with the A380, you'd basically need one card per VM. Even though they're pretty cheap cards, having 3 hosts (as is the plan) gets expensive pretty fast.
