this post was submitted on 30 Nov 2023
46 points (97.9% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I am building a NAS in RAID 1 (mirror) mode. Should I buy 2 of the same drive from the same manufacturer? Or does it not matter so much?

top 50 comments
[–] [email protected] 47 points 11 months ago (2 children)

Quite the opposite. Use drives from as many different manufacturers as you can, especially when buying them at the same time. You want to avoid similar lifecycles and similar potential fabrication defects as much as possible, because those things increase the likelihood that they will fail close to each other - particularly under the stress of rebuilding after the first one fails.
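As a rough back-of-envelope, with made-up failure probabilities (not measured rates), here is why correlated same-batch failures hurt a mirror so much: the danger window is the rebuild, and a shared defect raises the survivor's chance of dying inside that window.

```python
# Rough illustration with hypothetical, made-up numbers -- not measured data.
# A 2-disk mirror loses data when one drive fails AND the survivor dies
# before the rebuild onto a replacement completes.

def mirror_loss_probability(p_first_fails, p_survivor_dies_in_rebuild):
    """P(data loss) = P(first failure) * P(survivor dies during rebuild)."""
    return p_first_fails * p_survivor_dies_in_rebuild

# Assumed: 5% chance of a first failure in a year; survivor death during the
# rebuild 1% for unrelated drives vs 5% for same-batch drives under stress.
independent = mirror_loss_probability(0.05, 0.01)
same_batch = mirror_loss_probability(0.05, 0.05)
print(round(same_batch / independent, 1))  # -> 5.0 (5x riskier here)
```

The exact numbers are invented; the point is only that the ratio scales directly with how correlated the two drives' failure modes are.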

[–] [email protected] 23 points 11 months ago* (last edited 11 months ago) (3 children)

To the best of my knowledge, this "drives from the same batch fail at around the same time" folk wisdom has never been demonstrated in statistical studies. But, I mean, mixing drive models is certainly not going to do any harm.

[–] [email protected] 11 points 11 months ago (1 children)

mixing drive models is certainly not going to do any harm

It may, performance-wise, but usually not enough to matter for a small self-hosting server.

[–] [email protected] 6 points 11 months ago

I wouldn't mix 5400 rpm drives with 7200 rpm drives, but if the rpm & sizes are the same, there won't be any measurable performance loss.

[–] [email protected] 7 points 11 months ago

If everything went fine during production you're probably right. But there have definitely been batches of hard disks with production flaws which caused all drives from that batch to fail in a similar way.

[–] [email protected] 3 points 11 months ago

I know it's only what I've experienced, but I've just been through two weeks of hell from EMC drives failing at the same time because Dell didn't mix up the serials. Had 20 RAID drives all start failing within a few days of each other, and all of them had consecutive serial numbers.

[–] [email protected] 11 points 11 months ago (2 children)

If I had a dollar for every time rebuilding a RAID array after one failed drive caused a second drive failure in the array in less than 24 hours.... I'd probably buy groceries for a week.

[–] [email protected] 5 points 11 months ago (2 children)

When using drives from the same model and batch?

[–] [email protected] 3 points 11 months ago

Yup. Same age, same design, same failures... and array rebuilds are super intense workloads that often force a lot of random reads and run the drive at 100% load for many hours.

[–] [email protected] 2 points 11 months ago (1 children)

I've heard just in general. The resilvering process is hard on all the remaining drives for an extended period of time.

[–] [email protected] 1 points 11 months ago (1 children)

So you're saying I should be running RAIDz2 instead of RAIDz1? You're probably right. 😂

[–] [email protected] 2 points 11 months ago (1 children)

I made that switch a few years ago for that reason.

That said, as the saying goes, RAID is not a backup, it should never be the thing that stands between you having and losing all your data. RAID is effectively just one really dependable hard drive, but it's still a single point of failure.
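The RAIDz1 vs RAIDz2 tradeoff above can be put in rough numbers. A back-of-envelope sketch with made-up per-drive probabilities (not measured rates), assuming failures during a resilver are independent:

```python
# Hypothetical numbers: a 6-drive pool has lost one drive, and each of the
# 5 survivors independently has probability q of dying during the resilver.
from math import comb

def p_array_loss_during_resilver(n_survivors, q, tolerable):
    """P(more than `tolerable` survivors die during the resilver),
    i.e. the probability the pool is lost before redundancy is restored."""
    return sum(comb(n_survivors, k) * q**k * (1 - q)**(n_survivors - k)
               for k in range(tolerable + 1, n_survivors + 1))

raidz1 = p_array_loss_during_resilver(5, 0.02, 0)  # no further loss tolerated
raidz2 = p_array_loss_during_resilver(5, 0.02, 1)  # one more loss tolerated
print(round(raidz1, 4), round(raidz2, 4))  # -> 0.0961 0.0038
```

With these assumed numbers RAIDz2 is roughly 25x less likely to lose the pool during a resilver, which is the intuition behind the switch.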

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

So you're saying I should be running JBOD with backups instead of RAIDz1? You're probably right. 🤭

[–] [email protected] 1 points 11 months ago

As long as you're ok with it being way less dependable, and having to rebuild it from scratch more often 😉.

[–] [email protected] 1 points 11 months ago

I don't know if you're talking about the sample of cases you've personally witnessed, or the population of all NASes in the world. If the former, that sounds significant. If the latter, it sounds like it's probably not something to worry about.

[–] [email protected] 19 points 11 months ago

You can use different manufacturers, just make sure they are the SAME size and speed. You can also get the same ones from the same vendor, just from different online shops to try and offset getting a bad batch.

[–] [email protected] 10 points 11 months ago (2 children)

I always thought you were supposed to buy similar drives so the performance is better for some reason (I guess the same logic as when picking RAM?), but this thread is changing my mind. I guess it doesn't matter after all 👀

[–] [email protected] 8 points 11 months ago (1 children)

I heard the reverse, so they don't fail at the same time.

[–] [email protected] 2 points 11 months ago (1 children)

That's also what we did in the early 2000s when building servers. Today I don't think it really matters. I haven't had a failed drive in about 10 years and have only needed to swap them out because of capacity...

[–] [email protected] 2 points 11 months ago

I actually thought about that quite a bit. Back in the day, hard drives were made of sugar-glass. Remember the Deskstar? Hrm, the "Deathstar". Do anything? It breaks. Do nothing? 15% failure rate anyway (or so I remember).

Today I have 3TB + 2TB drives (one mostly backs up the other) in my NAS (WD Black, maybe) and I think they are 10+ years old... I'm not using it as a real backup, but I still think I should switch one out. But then again, the Synology is so old too...

I've heard about that newer Linux file system, "M" or "L" something, where you just add drives and it sorts stuff out itself; maybe I should check that out...

[–] [email protected] 6 points 11 months ago (1 children)

RAM matters because the CPU will use the worst speed and worst timings of all the sticks; drive reads and writes are buffered, so it doesn't really matter.

[–] [email protected] 1 points 11 months ago

Just make sure your RAM has the same timings. It's not a big deal if you have two sticks of each brand.

[–] [email protected] 9 points 11 months ago* (last edited 11 months ago) (1 children)

You absolutely can. Of course you'll only be able to use as much capacity as the smallest disk. Some time ago I was running a secondary mirror with one 8TB disk and 3 disks pretending to be the other 8TB disk. They were 4TB, 3TB and 1TB - trivial with LVM. Worked without a hitch for a few years till I replaced the three gnomes in a trench coat with another 8TB disk. Obviously that's suboptimal, but it works fine under certain loads.
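The capacity arithmetic behind the trench coat is simple: a linear (concatenated) LVM volume adds its members' sizes, while a mirror is limited by its smaller side. A tiny sketch, using the sizes mentioned above:

```python
# Capacity math for "3 disks pretending to be one", sizes in TB.

def concat_capacity(sizes):
    # A linear LVM logical volume concatenates members: sizes add up.
    return sum(sizes)

def mirror_capacity(side_a, side_b):
    # RAID 1 usable capacity is limited by the smaller side.
    return min(side_a, side_b)

gnomes = concat_capacity([4, 3, 1])  # 4TB + 3TB + 1TB posing as one 8TB disk
print(mirror_capacity(8, gnomes))    # -> 8
```

The catch, of course, is that the concatenated side now fails if *any* of its three members fails, so its reliability is worse than a real 8TB disk.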

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (2 children)

Got a treat for ya:

The three gnomes in a trench coat are the three from the left.

[–] [email protected] 3 points 11 months ago (1 children)

pic from a google datacenter?

[–] [email protected] 1 points 11 months ago

Same principle.

[–] [email protected] 2 points 11 months ago (1 children)

Bonus:

This is how you fix intermittent disconnects under heavy load.

[–] [email protected] 3 points 11 months ago

Speed holes!

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago) (1 children)

Hardware or software (Btrfs, ZFS, etc.) RAID?

[–] [email protected] 7 points 11 months ago (1 children)

It probably doesn't matter in most cases, especially with software RAID. I've had proprietary storage system vendors recommend being very careful about using identical disks, but that could just be salesman crap.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (1 children)

^This. But I'll go even further: do you @[email protected] really need RAID? How much data are you planning to write every day?

In some cases, like a typical home user with a few writes per day or even per week, simply having a second disk that is updated every day with rsync may be a better choice. Consider that if you have two mechanical disks spinning 24/7, they'll most likely fail at the same time (or during a RAID rebuild) and you'll end up losing all your data. Simply having one active disk (shared on the network and spinning) and the other spun down and only turned on once a day by a cron rsync job means your second disk will last a LOT longer and you'll be safer.
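A minimal sketch of that "second disk updated once a day" idea, using Python's standard library as a stand-in for the rsync cron job (the paths here are temporary directories, purely illustrative; real rsync would skip unchanged files, which this does not):

```python
# Toy model of a daily one-way sync from the active disk to the backup disk.
import shutil
import tempfile
from pathlib import Path

def daily_sync(active: Path, backup: Path) -> None:
    """Copy everything from the active disk to the backup disk.
    Unlike rsync this recopies unchanged files; fine for small data sets."""
    shutil.copytree(active, backup, dirs_exist_ok=True)

# Two temp dirs standing in for the two disks:
src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp())
(src / "notes.txt").write_text("important")
daily_sync(src, dst)
print((dst / "notes.txt").read_text())  # -> important
```

The design point being made in the comment: unlike a mirror, the backup copy lags by up to a day, but it also survives mistakes (accidental deletes propagate to a mirror instantly, while the rsync copy still has yesterday's file until the next run).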

[–] [email protected] 1 points 11 months ago (1 children)

Right up until that job to turn on the other drive and run the backup stops working, and you don't realize it until 17 months later.

Either way, RAID ain’t a backup, but it makes losing a drive easier.

[–] [email protected] 4 points 11 months ago

As long as they're mostly the same. For example, many controllers won't let you mix SSDs with HDDs.

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (1 children)

If you haven't looked into it, and if you already have disks of varying capacity, check out JBOD. You will have to configure a system for backups, however, as you won't have parity like RAID 1.

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago) (2 children)

I'm aware, but RAID 1 is mirroring, which is redundancy; a JBOD offers no redundancy, so a backup would be even more crucial to protecting against data loss. Also, I never said RAID is a backup.

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (2 children)

JBOD via mergerfs and snapraid on top for parity is a possible solution.
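For the curious, the parity idea SnapRAID uses can be sketched with plain XOR. This toy (not SnapRAID's actual on-disk format, which is block-based and more sophisticated) shows why one parity drive lets you rebuild any single lost data drive:

```python
# XOR parity across equal-length blocks: p = d0 ^ d1 ^ d2, so any one
# missing block equals the XOR of all the others plus the parity.
from functools import reduce

def parity(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"disk1---", b"disk2---", b"disk3---"]  # one block per data drive
p = parity(data)

# Lose "disk 2", then rebuild it from the survivors plus the parity block:
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # -> True
```

This is also why the pairing with mergerfs works: mergerfs just pools the namespaces, while the parity layer underneath provides the ability to recover one failed member.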

[–] [email protected] 2 points 11 months ago

That's a solution I never knew existed; that's cool as hell.

[–] [email protected] 2 points 11 months ago (1 children)

Any performance hit for that config? I've never heard of that setup before.

[–] [email protected] 0 points 11 months ago

Don't know myself, as I have no use case for that setup, but it has been a well-known setup for several years. If the performance was bad, it wouldn't be recommended as an alternative as often.

[–] [email protected] 1 points 11 months ago

Can't you just format a JBOD with ZFS or some other RAID solution? I'm sure it depends on hardware, but it shouldn't be rocket science.

[–] [email protected] 2 points 11 months ago (1 children)

I usually find the cheapest drives and buy multiple of those, but you should be able to assemble a RAID out of different disks, though you'll be limited to the space of the smallest one in the mirror set.

Also make sure that your RAID systems supports this.

[–] [email protected] 1 points 11 months ago (1 children)

Okay. Where do you buy the disks?

[–] [email protected] 1 points 11 months ago (1 children)

Ebay. If you're outside the US, you'll probably be better off with a more local site.

[–] [email protected] 1 points 11 months ago

eBay is very international, and is also by far the greatest site for second-hand stuff in most European countries. I normally buy my used drives there.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

NAS: Network-Attached Storage
RAID: Redundant Array of Independent Disks (mass storage)
SSD: Solid State Drive (mass storage)

3 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.

[Thread #328 for this sub, first seen 2nd Dec 2023, 20:35] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 1 points 11 months ago

I would strongly recommend that you get the same drive. It doesn't make any sense not to.