thumdinger

joined 2 years ago
[–] [email protected] 2 points 2 days ago* (last edited 2 days ago)

Pulling around 200W on average.

  • 100W for the server: Xeon E3-1231v3 with 8 spinning disks + HBA, and a couple of SATA SSDs
  • ~80W for the UniFi Switch Pro 48 PoE. Most of this is PoE power for half a dozen cameras, downstream switches and APs, and a couple of Raspberry Pis
  • ~20W for the Protectli Vault running OPNsense
  • Total usage measured via Eaton UPS
  • Subsidised during the day with solar power (Enphase)
  • Tracked in home assistant
[–] [email protected] 2 points 2 days ago

Looks like Home Assistant.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

For storage redundancy, RAID 5 is not recommended, particularly as you get to high-capacity drives (think >8TB). The rating to consider is the URE rate (unrecoverable read error, usually 1 in 10^14 bits read).

Once a drive inevitably fails, you are forced to resilver the array to avoid data loss. During the resilver the healthy disks run at 100%, reading every bit of data they hold to complete the parity calculation and reconstruct the missing data. At high capacities the chance of encountering a URE on another drive is a near certainty, because the total number of bits read exceeds the URE rating. The resilver would then fail and the array would be lost.
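A rough sketch of why (assumed numbers: 8x 10TB drives in RAID 5, taking the 1-in-10^14 spec at face value and treating bit errors as independent):

    # Odds of hitting at least one URE while resilvering
    ure_rate = 1e-14           # assumed: 1 error per 1e14 bits read
    bits_read = 7 * 10e12 * 8  # 7 surviving 10TB drives, 8 bits per byte

    p_fail = 1 - (1 - ure_rate) ** bits_read
    print(f"chance of >=1 URE during resilver: {p_fail:.1%}")  # ~99.6%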

RAID 6 (two-drive redundancy) should be the minimum, although a popular option now (and the layout I use) is mirrored vdevs.

Edit: Consider TrueNAS for NAS software. I have been using it for 10 years and it is absolutely rock solid. 25TB usable storage across 4x mirrored vdevs. I run it as a VM inside Proxmox, with 4 logical cores on a 10-year-old Xeon and 16GB RAM for the VM (I run ECC as was recommended at the time, but whether it’s still considered necessary I’m not certain).

I would also recommend getting an LSI HBA (host bus adapter) like the 9207-8i, flashed to IT mode (it must not be in RAID mode; let TrueNAS manage the disks directly). This simplifies passing all of the disks through to the VM.

[–] [email protected] 1 point 1 week ago (1 children)

The options I’m looking at have PCIe 4.0 and seem to be Gen 2? EPYC 7282 or 7302.

[–] [email protected] 2 points 1 week ago (3 children)

I think this is where I'm headed. Is there anything to consider with Threadripper vs EPYC? I'm seeing lots of CPU/mobo/RAM combos on eBay for 2nd-gen EPYCs. Many posts on Reddit confirming the legitimacy of particular sellers, plus PayPal buyer protection, have me tempted.

[–] [email protected] 3 points 1 week ago

Thanks, I'll need to have a look at how the chipset link works, and how the southbridge funnels incoming PCIe lanes down from the 24 in my example to the 4 available. Even so, considering these devices are typically PCIe 3.0, operating at maximum spec they could swamp the link with 3x the data it has bandwidth for (24 lanes of PCIe 3.0 is 23.64GB/s, vs 4 lanes of PCIe 4.0 at 7.88GB/s).
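The back-of-envelope math behind those figures (per-lane rates are approximate, post-encoding):

    # Approximate usable bandwidth per lane (GB/s), after encoding overhead
    GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

    downstream = 24 * GBPS_PER_LANE["3.0"]  # devices hanging off the chipset
    uplink = 4 * GBPS_PER_LANE["4.0"]       # chipset-to-CPU link

    print(f"downstream peak: {downstream:.2f} GB/s")        # ~23.64 GB/s
    print(f"uplink capacity: {uplink:.2f} GB/s")            # ~7.88 GB/s
    print(f"oversubscription: {downstream / uplink:.1f}x")  # ~3.0x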

[–] [email protected] 6 points 1 week ago (1 children)

This is what I do as well. I have a public DNS record for my internal reverse proxy IP (no need to expose my public IP and associate it with my domain). NPM reaches out to the DNS provider to complete the verification challenge using an account token, so it can get a valid cert from Let’s Encrypt with nothing exposed. All inbound traffic on 80/443 remains blocked as normal.
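The nice part of the DNS-01 challenge is that proof of ownership lives in a TXT record rather than on an open port. If you ever want to sanity-check that the provider published the record, a quick sketch with dnspython (the domain is a placeholder, and the record only exists while a challenge is in flight):

    # Check that the ACME DNS-01 TXT record is publicly visible
    import dns.resolver

    for rr in dns.resolver.resolve("_acme-challenge.example.com", "TXT"):
        print(rr)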

[–] [email protected] 5 points 1 week ago

The icing on the cake for me is the empty “Neat Patch” above the switches

[–] [email protected] 2 points 1 week ago

Thanks. This is a pretty compelling option. I hadn’t looked at the entry-level Arc cards, but when it comes to encode/decode it seems all the tiers are similar. 30W is okay; it’s not a hard limit or anything, it’s just nice to keep bills down!

[–] [email protected] 3 points 1 week ago (3 children)

I hadn’t considered AMD, really only due to the high praise I’m seeing around the web for QuickSync, and AMD falling behind both Intel and Nvidia in hardware acceleration. I’ll certainly consider it if there’s no viable option with QuickSync anyway.

And you’re right, the southbridge provides additional PCIe connectivity (AMD and Intel alike), but bandwidth has to be considered. Connecting an HBA (x8), 2x M.2 SSDs (x8), and a 10Gb NIC (x8) over the same x4 link for something like a TrueNAS VM (ignoring other VM I/O requirements), you’re going to be hitting the NIC and the HBA and/or SSDs (think ZFS cache/logging) at max simultaneously, saturating the link and creating a significant bottleneck, no?
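For a rough sense of scale (every device ceiling here is an assumption, not a measurement):

    # Peak device throughput vs a x4 PCIe 4.0 chipset uplink (GB/s)
    devices = {
        "10GbE NIC": 1.25,            # line rate
        "HBA with 8 HDDs": 8 * 0.25,  # ~250 MB/s per spinning disk
        "2x NVMe (3.0 x4)": 2 * 3.5,  # ~3.5 GB/s each
    }
    uplink = 4 * 1.969  # ~7.88 GB/s

    total = sum(devices.values())
    print(f"device peak sum: {total:.2f} GB/s vs uplink: {uplink:.2f} GB/s")
    # ~10.25 GB/s vs ~7.88 GB/s: saturated if everything peaks at once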

[–] [email protected] 3 points 1 week ago (2 children)

Thanks. I'll be the first to admit a lack of knowledge with respect to CPU architecture - very interesting. I think you've answered my question - I can't have QuickSync AND lanes.

Given I can't have both, I suppose the question pivots to a comparison of performance-per-watt and the number of simultaneous streams for an iGPU with QuickSync vs. a discrete GPU (likely either Nvidia or Intel Arc), considering a dGPU will increase power usage by 200W+ under load (27c/kWh here). Strong chance I am mistaken though, and have misunderstood QuickSync's impressive capabilities. I will keep reading.
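A rough running-cost sketch for that extra draw (200W and 27c/kWh from above; the duty cycles are pure guesses):

    # Annual cost of an extra 200 W at $0.27/kWh for various duty cycles
    extra_kw = 0.200
    rate = 0.27  # $/kWh

    for hours_per_day in (2, 8, 24):
        annual_kwh = extra_kw * hours_per_day * 365
        print(f"{hours_per_day:>2} h/day -> ~${annual_kwh * rate:.0f}/yr")
    # 2 h/day ~ $39/yr, 8 h/day ~ $158/yr, 24 h/day ~ $473/yr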

I think the additional lanes are of greater value for future-proofing. I can just lean on the CPU without hardware acceleration. Thanks again!

 

I'm currently running a Xeon E3-1231v3. It's getting long in the tooth: it supports only 32GB RAM and has only 16 PCIe lanes. I've been running this system for ~10 years, and after butting up against the platform's limitations for the last couple of them, I'm ready to upgrade.

I'm hoping to future-proof the next system to also last 8-10 years (where reasonable, allowing for advancements in tech and improvements in efficiency), but I'm hitting a wall finding CPU candidates.

In a perfect world, I'd like an Intel CPU with an iGPU for QuickSync (hardware acceleration for Frigate/Immich/Jellyfin), AND the 40+ PCIe lanes that the Intel Xeon Scalable CPUs offer.

With only my minimum required PCIe devices I've already surpassed the 20 lanes available on desktop CPUs with an iGPU (see the lane tally sketched after these lists):

  • Dual M.2 for Proxmox ZFS mirror (guest storage), in addition to the boot drive (8 lanes)
  • LSI HBA (8 lanes)
  • Dual SFP+ NIC (8 lanes)

Future proofing:

High priority

  • Dedicated GPU (16 lanes)

Low priority

  • Additional dual m.2 expansion (8 lanes)
  • USB expansion cards for simplified device passthrough (Coral TPU, Zigbee/Z-Wave for Home Assistant, etc.) (4 lanes per card) - this assumes the motherboard comes with at least 4 ports
  • Coral TPU PCIe (4 lanes?)
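Tallying it up (lane counts as listed above; desktop iGPU parts top out around 20 CPU lanes):

    # Lane budget: minimum devices vs minimum + future wishlist
    minimum = {"dual M.2 mirror": 8, "LSI HBA": 8, "dual SFP+ NIC": 8}
    future = {"dGPU": 16, "extra dual M.2": 8, "USB cards": 4, "Coral TPU": 4}

    print(sum(minimum.values()))                         # 24: already over 20
    print(sum(minimum.values()) + sum(future.values()))  # 56: Xeon Scalable territory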

Is there anything that fulfills both requirements? Am I being unreasonable or overthinking it? Is there a solution that adds GPU hardware acceleration to the Xeon Silver line without significantly increasing power draw?

Thanks!

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago)

I use the AmpliPi from MicroNova for whole-home audio and I love it. It’s local, open source, and has a Home Assistant integration.

The main unit has 6 zones, but expansion units can be added. I think it supports up to 4 simultaneous streams. We use 2x AirPlay streams and a turntable connected via RCA, but many other sources are supported. They detail it all on their website and GitHub repo.

 

Hey all, not a lot of activity here yet, but hopefully someone can offer some advice!

I'm having major regrets following my migration from Core to Scale. The migration via update file went smoothly, but it was immediately apparent on starting up Scale that all of my SMB shares were broken. Both Linux and Windows clients are unable to connect.

I have since tried creating new users/groups (I noticed that my normal user and group with IDs of 1000 had been changed to 1001 in Scale), stripping and recreating ACLs, and deleting and recreating the shares. Every attempt failed to connect.

Is there something I'm missing about the migration that prevents SMB from working? I'm not well versed in the inner workings, or the shell commands available for troubleshooting, so most of this has been attempted through the GUI.

Also, I realise I have shot myself in the foot. Like an idiot, I saw the feature-flag update message when Scale first started, and I clicked through and upgraded the pool without even thinking. I realised what I had done when I restored my Core VM from the previous day: the pool was offline, with zpool import showing an unsupported-feature message. So any path back to Core is off the cards, I think.

Any help is appreciated...

EDIT:

I think I have found the culprit. I downloaded the debug info and had a look at the SMB config (specifically net_config.txt). For some bizarre reason, the SMB service had bound to an old IP address (from two or three home network revisions ago, not used in many years).

    [GLOBAL]
        interfaces = 127.0.0.1 192.168.0.200

By selecting the system’s current IP address in the optional "Bind IP Addresses" field in the global SMB service settings (under Advanced), I've been able to rebind it to the correct address, and I have access (tested on Linux only so far)! Phew...
