this post was submitted on 28 Jul 2023
15 points (100.0% liked)

datahoarder


Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread


I want to build a TrueNAS server with the cheapest CPU I can find that supports ECC RAM: a Celeron G4900T + 64 GB ECC RAM + 4x18 TB SAS drives.

I don't have experience with ZFS or with TrueNAS (CORE or SCALE). How important is the CPU?

Use case: hundreds of thousands of small files, mostly under 1 MB, but just 2-3 concurrent users.

top 10 comments
[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

The reason why I'd pair a C246 motherboard and other high-end, expensive components with such a cheap CPU: someone will sell me a used Supermicro mobo + ECC RAM + SAS HDDs for 500 euro, but no CPU. A Celeron G4900T costs like 15 euro, so I was wondering if it could suffice, because this build is already overkill for me; at the moment I have no idea how to use the 36 TB of storage (2x redundancy).

[–] [email protected] 3 points 1 year ago (1 children)

You don't mention your performance requirements, and I'm unfamiliar with that CPU. Are you trying to saturate your (presumably 1G) NIC? Reads or writes?

[–] [email protected] 1 points 1 year ago (2 children)

No, just thousands of small files. Windows takes around a minute to enumerate all the files in the main share via SMB.

[–] [email protected] 2 points 1 year ago

@Moonrise2473
That looks more like an ARC problem: the ARC can hold a large index of the filesystem if you give it enough room in RAM, avoiding the need to seek out thousands of files on a spinning disk, which takes time. HDDs are fine for sequential operations; random I/O, which is your use case, is their biggest weakness.
@eleitl
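
For anyone who wants to check how much of their RAM the ARC is actually using, here's a minimal sketch that reads the counters OpenZFS exposes on Linux (e.g. TrueNAS SCALE) under /proc/spl/kstat/zfs/arcstats; on FreeBSD-based TrueNAS CORE the same counters live under the kstat.zfs.misc.arcstats sysctls instead. The metadata field names vary between OpenZFS versions, so the script just prints whichever ones exist.

```python
# Minimal sketch: inspect ZFS ARC usage on Linux, where OpenZFS
# exposes counters in /proc/spl/kstat/zfs/arcstats.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path: str = ARCSTATS) -> dict[str, int]:
    """Parse the kstat file into {name: value}; skip the two header lines."""
    stats = {}
    with open(path) as f:
        for line in list(f)[2:]:
            parts = line.split()
            if len(parts) == 3:
                name, _type, value = parts
                stats[name] = int(value)
    return stats

if __name__ == "__main__":
    stats = read_arcstats()
    gib = 1024 ** 3
    print(f"ARC size:   {stats['size'] / gib:.1f} GiB")
    print(f"ARC target: {stats['c'] / gib:.1f} GiB")
    # Field names for the metadata/data split vary between OpenZFS
    # versions (e.g. arc_meta_used vs metadata_size), so just show
    # whichever ones this system has.
    for key in ("arc_meta_used", "metadata_size", "data_size"):
        if key in stats:
            print(f"{key}: {stats[key] / gib:.1f} GiB")
```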

[–] [email protected] 1 points 1 year ago

You should be good, then. You probably don't need SSDs for ZIL or L2ARC either. Don't forget to schedule a weekly scrub to catch bit rot; essential for large drives.
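
As an aside, TrueNAS ships a scrub scheduler in its UI (Tasks → Scrub Tasks), which is the usual way to set this up; the sketch below just wraps the equivalent `zpool scrub` command so it could be run from a weekly cron job. The pool name `tank` is a placeholder.

```python
# Minimal sketch: kick off a scrub from a weekly cron job.
import subprocess
import sys

POOL = "tank"  # assumption: replace with your pool name

def start_scrub(pool: str) -> int:
    # `zpool scrub` returns immediately; the scrub runs in the background.
    result = subprocess.run(["zpool", "scrub", pool])
    return result.returncode

def show_status(pool: str) -> None:
    # `zpool status` reports scrub progress and any checksum errors found.
    subprocess.run(["zpool", "status", pool])

if __name__ == "__main__":
    rc = start_scrub(POOL)
    show_status(POOL)
    sys.exit(rc)
```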

[–] [email protected] 1 points 1 year ago

It’s ok until you start using jails or dedup.
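
To put a rough number on the dedup point: a commonly cited rule of thumb is on the order of 320 bytes of RAM per unique block for the dedup table (DDT); the exact figure varies by ZFS version and pool layout. A back-of-the-envelope sketch for the OP's ~36 TB of usable space:

```python
# Rough arithmetic sketch of why dedup is RAM-hungry.
DDT_BYTES_PER_BLOCK = 320          # rule-of-thumb figure, not exact
recordsize = 128 * 1024            # default 128K records
usable_bytes = 36 * 10**12         # OP's ~36 TB of usable space

unique_blocks = usable_bytes / recordsize
ddt_ram = unique_blocks * DDT_BYTES_PER_BLOCK
print(f"~{ddt_ram / 1024**3:.0f} GiB of RAM just for the DDT "
      f"if the pool fills with unique 128K blocks")
# Small files mean smaller average blocks, so the per-TB cost goes up.
```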

[–] [email protected] -1 points 1 year ago (1 children)

ZFS tends to be RAM intensive, so make certain you have, at a bare minimum, around 16 GB. But I would push for more.

[–] [email protected] -1 points 1 year ago

@housepanther
As Lawrence said: "It's not RAM intensive, it's RAM efficient."
It doesn't let RAM sit there unused. So you only really need 1 GB of RAM per 1 TB of storage in general, outside some very rare cases. The more RAM you throw at it, the snappier it becomes, but there are diminishing returns: for example, 128 GB of RAM on a 20 TB array won't be fully utilized most of the time.
L2ARC raises RAM requirements, because you also need to store its index there.
@Moonrise2473
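
A back-of-the-envelope sketch of the two sizing rules in this comment, with hedged constants: the ~1 GB of RAM per 1 TB of storage rule of thumb, and the in-RAM index cost of L2ARC (figures around 70-100 bytes of header per cached block are commonly cited and vary by version). The 500 GB L2ARC device is a hypothetical example.

```python
# Back-of-the-envelope sizing sketch for the rules of thumb above.
storage_tb = 36                     # OP's usable capacity
ram_rule_gb = storage_tb * 1        # ~1 GB RAM per 1 TB rule of thumb
print(f"Rule of thumb: ~{ram_rule_gb} GB RAM for {storage_tb} TB usable")

l2arc_bytes = 500 * 10**9           # hypothetical 500 GB L2ARC SSD
avg_block = 128 * 1024              # assume default 128K records
header_bytes = 80                   # hedged mid-range estimate per block
index_ram = l2arc_bytes / avg_block * header_bytes
print(f"~{index_ram / 1024**2:.0f} MiB of ARC consumed indexing "
      f"a 500 GB L2ARC at 128K blocks")
# Smaller blocks (like the OP's sub-1 MB files) inflate this cost.
```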