It's A Digital Disease!


This is a sub that aims at bringing data hoarders together to share their passion with like minded people.

founded 2 years ago
651
1
HDD Noise (zerobytes.monster)
submitted 3 weeks ago by [email protected] to c/[email protected]
 
 
The original post: /r/datahoarder by /u/Neptune1987 on 2024-12-18 07:57:42.

How do you deal with HDD noise in a home lab? Is there some kind of noise reduction box you could buy?

I just bought 2x WD Elements 18TB USB HDDs and the noise is terrible, mostly because of its high frequency.

My whole home lab sits under my work desk, and none of the other components has ever given me this trouble (4 HP mini PCs, small portable USB disks, zero problems).

Any idea ?

652
 
 
The original post: /r/datahoarder by /u/MartinPaulEve on 2024-12-18 07:34:35.

I have about 100TB of data currently on a set of Synology NAS boxes in SHR configuration.

What's the best way to create a backup of this data? Tape drive? Amazon S3 Glacier Deep Archive (very pricey recovery)?

653
 
 
The original post: /r/datahoarder by /u/fixeditgood on 2024-12-18 06:55:18.

Hi Everyone,

Hoping I might be able to get some insight/advice from fellow datahoarders... First and foremost *cough*... I'll admit I jumped into the idea of an LTO offline backup solution without really doing my homework, and assumed I could use something like Veeam to run my autoloader, until I discovered the restrictions requiring a license. I am exploring Amanda, YATM, and Bacula, but at this stage I would rather just use something small and compact that isn't paid/special/involved to set up. All I really want to do is back up all files to N tapes and put them in storage. I'm not looking to update the set later, or to need a complicated software/DB setup to restore it. The use case: start at tape 1 and go until there are no more tapes, restoring as many of the files as possible... if there are losses, then so be it, continue on with whatever we can get/rescue.

I am planning to use a script to run:

  • tar --multi-volume (-M)
  • mbuffer -A to run a script, mtx to change tapes, and stop after N for magazine reload... (otherwise we might eventually start back on tape 1 and overwrite it, thinking it's just the next tape)
  • Ideally use maminfo or lto-cm to read the serial number of each tape and log it for the records (haven't got it working with LTO-6 yet)
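For the mtx step, a change-volume script wired into tar's -F option might look like the sketch below. It is entirely hypothetical: the device path, slot count, and state file are illustrative, and mtx is stubbed out so the sketch runs without an autoloader (delete the stub for real hardware).

```shell
#!/bin/sh
# Sketch of a change-volume script for `tar -c -M -F tarvol.sh ...`.
# tar runs the -F script at each end-of-volume; a non-zero exit makes tar
# abort, which is exactly what prevents looping back onto tape 1.
CHANGER=/dev/sch0               # assumed changer device (illustrative)
SLOTS=8                         # magazine size (illustrative)
STATE=/tmp/span.slot            # remembers which slot is currently loaded

mtx() { echo "stub: mtx $*"; }  # stub so this sketch runs; remove on real hardware

slot=0
[ -f "$STATE" ] && slot=$(cat "$STATE")

mtx -f "$CHANGER" unload        # return the finished tape to its slot
slot=$((slot + 1))
if [ "$slot" -gt "$SLOTS" ]; then
    echo "magazine exhausted; reload and rerun" >&2
    exit 1                      # abort instead of reusing tape 1
fi
mtx -f "$CHANGER" load "$slot"  # bring in the next tape
printf '%s\n' "$slot" > "$STATE"
```

The key design point is the non-zero exit after N volumes: tar treats a failed -F script as fatal, so the run stops at magazine exhaustion rather than silently overwriting the first tape.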

I have been experimenting and reading, but from digging here a bit, I see an issue which I want to confirm can be worked around, accepting some loss of data due to a bad/lost tape span volume, depending on the situation.

Looking for clarification on tar experiences with LTO tape (if it matters) from

https://www.reddit.com/r/DataHoarder/comments/16d5up3/tape_archiving_for_the_masses_new_app_i_need_your/

Regarding "I've had to deal with these (LTO4 multitape tarballs) and it only takes one bad read to lose the entire dataset": I don't have all the context here, and forgive me, I am not arguing, just trying to avoid falling into the same pitfall encountered there. Sure, I'd expect to lose everything in the damaged/missing span, but the operation should be able to continue on without those files?

Sorry, not the best at explaining things... Anyways, with that long-winded yammering out of the way, my question is about this: the GNU tar manual ("9.6 Using Multiple Tapes") states "Each volume is itself a valid GNU tar archive, so it can be read without any special options."

So let's say, in a hypothetical scenario, I lose a tape, just one, and I accept the loss of:

  • the data that spanned onto that tape (truncated)
  • the data on the missing tape itself
  • the data which spanned onto the following tape

I did some testing and created a tar archive of a bunch of files just big enough to span onto the next volume. With the tape/span missing, can one expect to continue successfully by hand, as in my test below, from tape?

Note: in the test below, I deleted spantest.tar-4, which should result in some damaged/missing files during extraction...

When tar bombs out, we kick off tar again, specifying that it should start at spantest.tar-5, and try to continue...

e.g.:
$ tar -xvf spantest.tar -F tarvol.sh
testfilea
testfileb
Preparing volume 2 of spantest.tar
testfilec
Preparing volume 3 of spantest.tar-2
testfiled
Preparing volume 4 of spantest.tar-3
tar: ‘tarvol.sh’ command failed
tar: Error is not recoverable: exiting now
$ ls
spantest.tar-2  spantest.tar-5  spantest.tar-7  spantest.tar-9  testfilea  testfilec
spantest.tar  spantest.tar-3  spantest.tar-6  spantest.tar-8  tarvol.sh*      testfileb  testfiled

( spantest.tar-4 is no longer of this place )

$ tar -xvf spantest.tar-5 -M
./GNUFileParts/testfilee.5
testfilef
Prepare volume #2 for ‘spantest.tar-5’ and hit return: spantest.tar-6
Invalid input. Type ? for help.
Prepare volume #2 for ‘spantest.tar-5’ and hit return: n spantest.tar-6
testfileg
Prepare volume #3 for ‘spantest.tar-6’ and hit return: n spantest.tar-7
testfileh
Prepare volume #4 for ‘spantest.tar-7’ and hit return: n spantest.tar-8
testfilei
Prepare volume #5 for ‘spantest.tar-8’ and hit return: n spantest.tar-9
$
$ ls -al testfi*
-rw-rw-rw- 1 root root 1045976716 Dec 16 01:08 testfilea
-rw-rw-rw- 1 root root 1045455488 Dec 16 01:08 testfileb
-rw-rw-rw- 1 root root 1046259215 Dec 16 01:08 testfilec
-rw------- 1 root root   83533824 Dec 16 01:11 testfiled   <- truncated/damaged
                                                           <- e is missing 
-rw-rw-rw- 1 root root 1045657783 Dec 16 01:08 testfilef
-rw-rw-rw- 1 root root 1045832725 Dec 16 01:08 testfileg
-rw-rw-rw- 1 root root 1045694961 Dec 16 01:08 testfileh
-rw-rw-rw- 1 root root 1045856641 Dec 16 01:08 testfilei

So this seems to work? Would things behave differently in some situation when reading from an actual tape, where this would not work out?

At the moment, in theory it seems possible one could painfully continue manually by skipping the missing, and supplying the next archive/tape.
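For anyone who wants to repeat the experiment without a tape drive, here is a minimal file-based sketch of the same multivolume round trip (GNU tar assumed; file names, sizes, and volume length are illustrative):

```shell
#!/bin/sh
# File-based stand-in for the tape test: build a multivolume tar set, then
# round-trip it. Supplying enough -f names up front keeps tar from prompting.
set -e
work=$(mktemp -d)
cd "$work"

# three 1 MiB input files
for f in a b c; do
    dd if=/dev/urandom of="file$f" bs=1024 count=1024 2>/dev/null
done

# -L is in 1 KiB units; 1000 KiB per volume spans the data across ~4 volumes
tar -c -M -L 1000 \
    -f vol1 -f vol2 -f vol3 -f vol4 -f vol5 -f vol6 \
    filea fileb filec

# extract into a fresh directory and verify the round trip
mkdir restore
cd restore
tar -x -M \
    -f ../vol1 -f ../vol2 -f ../vol3 -f ../vol4 -f ../vol5 -f ../vol6
cmp filea ../filea
cmp fileb ../fileb
cmp filec ../filec
echo "round trip OK in $work"
```

Deleting one of the middle vol* files before the extraction step reproduces the lost-tape scenario from the transcript above: tar aborts at the gap, and extraction can be resumed by hand from the volume after the missing one.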

654
 
 
The original post: /r/datahoarder by /u/pugglewugglez on 2024-12-18 03:12:52.

I have called the main Samsung 1-800 number and the number for Samsung Semiconductor, and tried every option, and there is literally no human to talk to about any enterprise/datacenter SSD products. The main 1-800 number just says they only service consumer SSDs and refers you to Samsung Semiconductor, which has a menu that sends you back to the consumer line as the only option. Furthermore, there is no information on Samsung Semiconductor's website about how to check warranty status or submit a warranty claim on enterprise/datacenter SSDs... only consumer SSDs. Am I missing something?

655
 
 
The original post: /r/datahoarder by /u/FindKetamine on 2024-12-18 02:24:21.

Used only hotmail and yahoo email in the 90s-2000s. Then went on to gmail.

I’ve tried accessing my old email accounts, but it never works. Hotmail has instructions for reentry, but I keep getting rejected.

I thought I had the right password. But either way, if you’ve had luck getting emails from 20 years ago on hotmail, please share your secret!

656
 
 
The original post: /r/datahoarder by /u/PigsCanFly2day on 2024-12-18 01:53:53.

Hi, there's an application we use called Connecteam. The chat feature is available within the web browser or through the mobile app. When I load the chat & right click the page & save as .mhtml file, the resulting file only shows the chat names, but not the chat itself.

I can scroll through & highlight the text & copy it, but when I paste it into Word, it only pastes a few (less than 10) messages, even though way more than that was copied. There are thousands of messages, so it's not feasible to just go a few at a time.

Any advice? Something that will allow me to scroll through to load it all & then have everything saved? It'd be very much appreciated.

Thank you in advance.

657
 
 
The original post: /r/datahoarder by /u/No_Cloud7556 on 2024-12-17 23:04:32.

Does anyone know how to download from this site? I'm trying to find a way. Thanks in advance.

658
 
 
The original post: /r/datahoarder by /u/anachronismMining on 2024-12-17 18:01:16.

TLDR: Not asking anything; a warning story to avoid Sabrent. I have an old drive dock, performed a firmware upgrade, couldn't read data written with the old firmware, Sabrent refused to provide the original firmware, I found a used duplicate dock from a similar manufacturing period, read and backed up the data, and Sabrent refused reimbursement.

I have an old EC-DFFN purchased in 2022 that I'd been using to read and write backups and archives. I purchased an EC-HD2B recently and couldn't read the data written on the backup: 500GB of data on a 14TB drive. I was confused, but too busy moving and dealing with family illness to think much about it. Months later I realized the drive was readable, but only in the old EC-DFFN dock. I started looking things up and saw there was an update because of issues with large drive support (https://downloads.sabrent.com/product/ec-dflt-firmware-update/), plus other firmware on their website and off. In a sleep-deprived state I started the update; 2 seconds in, I realized that probably meant I wouldn't be able to read the old data. "Oh no. Tivo pieces." And I went to bed. Lo and behold: GPT protective partition and largely unreadable data from either Linux or Windows. It might have been possible to use a recovery utility (I could kind of see some of the files with one, but it was slow as hell), but I figured a firmware downgrade was the sensible option.

I call and email Sabrent. I'm mildly ornery, brusque, have poor bedside manner, and am obviously irritated because I can't find the original firmware on their site.

My name is ********* Thank you for contacting Sabrent Tech support.I'm happy to assist you, in this case, unfortunately the original version of the Firmware is not available for sharing, we highly recommend that before applying any firmware update you contact us to in order to determine if the Firmware update is necessary. To have a better perspective of the current situation with the docking please install a drive and if you are con windows go to disk management take a screenshot of how the system recognize the drive and share it with me.

Next:

Thank you for your response, as mentioned before, we highly recommend that, is just a recommendation, a good practice in order to provide an accurate diagnosis if the docking station will need or not a Firmware update, I understand your position, I just want to clarify that what I'm doing is provide a recommendation and a good practice for the Sabrent devices, I totally understand if you decide to apply the updates and contact us later that is no issue we will happy to assist you.That been said, the second Firmware update that you download and apply is the currently the only Firmware update that we have available for the Jmicron chipset that address storage capacity issues, I will double check with development team if there is a different Firmware update that we can try, as soon as I have a response I will let you know.just for you information the First Firmware update that you download and apply just address the issue when the docking station is not recognize by Acronis true image for Sabrent.

Next

I just received a response from my colleagues on development team, they informed me that unfortunately there is no alternative Firmware update that we can try with the docking station and I even asked if an exception is possible for the Original Firmware update unfortunately, the request was denied.the best shot would be with the second docking station that you order, please keep me posted if that unit is able to read the 14TB drive.

I tried two recent firmware versions from Sabrent, and others of dubious origin, after their refusal. No dice.

They refused to provide the old firmware. So I go to chotchkies r'us, where I find a delightful vintage EC-DFFN infused with period-accurate cheap cigarette smoke, for more than what I paid new. But it's of the right vintage, 2021. Who am I to complain? I read the backup and back up the backup data. I message them letting them know that the old firmware does in fact work for my purposes of restoring data, and to poke the obviously tepid waters by asking for reimbursement, expecting nothing.

I’m glad to hear that the docking station you purchased is able to read your drive. In this case, our recommendation is to first back up your data. Once completed, move the drive to the docking station where you applied the firmware update, and format the drive there. This should allow the drive to become readable on that docking station.Regarding your reimbursement request, we carefully evaluated it. Unfortunately, we are unable to approve the request, as the issue arose from the unnecessary application of the firmware update. While firmware updates are available on our website, applying them without verifying their intended purpose is not recommended.If you need any additional assistance, please don’t hesitate to get in touch with us—we’ll be happy to help.

I'll be looking elsewhere, except so many products use the same base design and chips. I've never been in the position of being refused old firmware to deal with a situation like this. Bizarre and antagonistic. I've never encountered a drive controller that corrupted data yet maintained readability as long as you kept drive-controller monogamy.

Thank you for reading. I'm going to go for a hike.

659
 
 
The original post: /r/datahoarder by /u/Ok_Carrot_5948 on 2024-12-17 22:26:09.

Hello, I was wondering how to recover an old video from an old site and how to do this manipulation on the phone.

660
1
Drive recovery (zerobytes.monster)
submitted 3 weeks ago by [email protected] to c/[email protected]
 
 
The original post: /r/datahoarder by /u/RichTea235 on 2024-12-17 22:25:54.

Hi all,

I'm looking for some advice on drive recovery. I had two old WD Red 6TB drives fail on me a month apart, before I had gotten a replacement drive. Doom! They were running as a ZFS mirror.

The drives stopped reading and started clicking on power-up. Is there any hope of recovery? Can replacement boards be purchased, and would they likely be of any use?

I'm in the UK; last time I looked, sending them off for recovery was very expensive.

661
 
 
The original post: /r/datahoarder by /u/Thedoc1337 on 2024-12-17 22:20:17.

Hello Everyone!

I am starting to get to the point of being limited by the available SATA power cables. I purchased a "SATA to 4x SATA" cable many moons ago and I'd like to ask what the consensus is right now for this kind of cable as I might need it soon.

What other alternatives are there? Even with a modular PSU, it's not like you can plug in THAT many SATA cables.

662
 
 
The original post: /r/datahoarder by /u/macropus on 2024-12-17 22:10:34.

All of the Pluralsight decryptors have been taken down, and the ones I could find no longer work. After decrypting the .psv file, the mp4 is unplayable.

663
 
 
The original post: /r/datahoarder by /u/lie2w on 2024-12-17 22:04:19.

Hi,

I'm looking at moving away from unRAID, to OMV + mergerfs + SnapRAID. I want to use Pydio Cells with its own file structure, and I want to use a cache SSD too. I was wondering if I can make this happen with mergerfs and Pydio.
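Not a Pydio answer, but for the mergerfs half, a common starting point is a single fstab entry pooling the branches. The paths and option choices below are purely illustrative; check the mergerfs docs for the create policy that best fits an SSD-cache setup:

```
# /etc/fstab -- pool two data disks under /mnt/pool (illustrative)
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  cache.files=partial,category.create=mfs,minfreespace=50G,fsname=pool  0 0
```

Here category.create=mfs places new files on the branch with the most free space; a dedicated SSD cache tier is usually done with a separate pool plus a mover script rather than a single policy.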

664
 
 
The original post: /r/datahoarder by /u/Super-Advance6743 on 2024-12-17 21:57:11.

I'm currently building a fileserver/NAS and want help making sure my backup solutions make sense. For my NAS I have two 6TB hard drives in RAID 0, for speed and storage capacity. Most data on it (mainly movies) is pretty trivial to re-download if lost, so I'm not planning on backing it up. The data I do care about, my Immich photos for example, will be backed up to a separate machine in my homelab with four 1TB drives in RAID 6. This data will be less than a terabyte, and will also be backed up to a cloud provider; which one exactly, I haven't decided.
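For a quick sanity check, the usable capacities those layouts give (a rough sketch, ignoring filesystem overhead):

```shell
# usable capacity with equal-size drives:
#   RAID 0 stripes everything (no redundancy):  n * size
#   RAID 6 spends two drives' worth on parity:  (n - 2) * size
raid0=$(( 2 * 6 ))        # NAS pool: 2 x 6 TB
raid6=$(( (4 - 2) * 1 ))  # backup box: 4 x 1 TB
echo "RAID 0 pool: ${raid0} TB usable (one drive failure loses everything)"
echo "RAID 6 backup: ${raid6} TB usable (survives two drive failures)"
```

So the sub-terabyte photo set fits comfortably in the 2 TB RAID 6 target, while the RAID 0 pool is the expected single point of failure for the replaceable media.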

I feel like this covers the bases pretty well, but I'm pretty new in the IT world and would like to double check my work.

TYIA!

665
 
 
The original post: /r/datahoarder by /u/better_life_please on 2024-12-17 21:19:04.

I recently came across the Toshiba Canvio Flex external HDD for storing my important files.

Previously I was planning to get a WD My Passport, but the Toshiba model has caught my eye.

Does anyone have any experience with this model or generally any Toshiba HDD?

666
 
 
The original post: /r/datahoarder by /u/dardack on 2024-12-17 21:11:38.

I've tried removing the 3.3V wire, and using the same old power supply from when it worked. Tried Molex-to-SATA, and used the same connectors from the prior mobo. Nothing will power up these drives and get them recognized in BIOS/Windows. If I try an old SSD on the same wires, it works fine. Mobo is an Asus X570-Plus WiFi. I don't get it. Edit: oh man, even the old mobo won't power them. I don't understand.

667
 
 
The original post: /r/datahoarder by /u/Obvious-Tourist8433 on 2024-12-17 20:49:08.

I have 3 external HDD.

3TB, 5 years old, has sector errors. Formatting doesn't help.

6TB, 4 years old, dead, doesn't turn on.

8TB, 2.5 years old, works.

All of them are encrypted. Why do they die so fast? I turn them on maybe once a month and use them for 2 hours.

Is it the encryption? 8-10 hours of encrypting, heat... is that what's causing the errors?

I have a new 10TB HDD, but I'm not sure if I should encrypt it.

I make sure of: room temperature, no vibrations, no falling/shaking... how do they die so fast?

668
 
 
The original post: /r/datahoarder by /u/executor-of-judgment on 2024-12-17 20:39:54.

Alright, I searched "oldest file" on this subreddit and this question has been asked a couple of times, but the most recent post was made by /u/Far_Marsupial6303 in this post 2 years ago.

So again, I'd like to ask, what's the oldest file you guys have stored and how has it survived to this day?

I have a Dell Optiplex GX260 PC in storage that's around 20 years old and STILL kicking. However, I bought it second hand in 2008, so it was already 5 years old when I bought it. That PC has almost every Linux ISO that came out in 2008 with a rating over 7 on IMDB, but with shitty bitrate in .avi format. Honestly... I've never backed up that HD because there's nothing important on it (except nostalgia) and it's a miracle it's still booting up Windows XP.

669
 
 
The original post: /r/datahoarder by /u/BraveofHeart on 2024-12-17 20:33:08.

I've had great luck with myfaveTT pulling all my favorites and likes from my TT account. I really want to pull all the videos from a specific chat that I've had going for years, but I'm having trouble. I've tried yt-dlp with no luck. I've been able to get all the URLs to the videos, but getting the actual files is proving difficult.

670
 
 
The original post: /r/datahoarder by /u/giratina143 on 2024-12-17 20:23:23.
671
 
 
The original post: /r/datahoarder by /u/luix333 on 2024-12-17 20:16:40.

Hi, noob datahoarder here looking for some help (less than a month into building my first TrueNAS server).

Hardware:

Motherboard: SABERTOOTH Z97 MARK 2/USB 3.1

PSU: Corsair CX600 (600W)

CPU: i5-4670

RAM: 32GB

Drive: 20TB EXOS X20 ST20000NM007D (SATA Port)

Other drive in pool: 20TB SkyHawk ST20000VE002 (SATA Port)

Software/Firmware:

BIOS: Version 3503 (latest)

Drive Firmware: SN06 (latest)

OS: TrueNAS Scale ElectricEel-24.10.0.2

Let me give you the timeline:

  1. I bought a used 20TB EXOS drive (still under manufacturer’s warranty)
  2. When I hooked it up, I was getting an I/O error when trying to quick wipe it. I would also get a Read SMART Data failed: Connection timed out when attempting a SMART test
  3. I thought the drive was dead (used drive) and since it was still under warranty I sent it in for an RMA
  4. Seagate sent me a new EXOS drive
  5. Hooked the new one and tried to wipe it and SMART test it- same errors
  6. After a couple of afternoons of googling I saw some people recommended disabling EPC and power balance features, so I did it with OpenSeaChest commands (and verified these features remained disabled after a power cycle)
  7. I was then able to wipe the EXOS drive and run a long SMART test with no errors
  8. I then created a 20TB Mirror Pool with this drive (everything seemed to be going fine)
  9. I’ve been able to copy files to the pool (8TB worth of data. And by the way, I have another 20TB WD external drive on a different server as backup)
  10. The issue is that sometimes while copying data to or from the pool (seemingly random), the drive gets removed from the pool (degrading it)
  11. After this happens, if I try to do a short SMART test (or wipe) I get the same errors I was getting initially
  12. Checking the drive with SeaChest shows the EPC feature as enabled but won’t let me disable it (when attempted it reports EPC Feature set might not be supported. Or EPC Feature might already be in the desired state.)
  13. Trying to disable Power Balance throws a Failed to set the Seagate Power Balance feature!
  14. A power cycle makes the EPC feature show up as disabled again, and makes the pool go back to ONLINE with the EXOS drive back into it
  15. I can copy data to and from the pool again, but after a while it's back to step 10

Has anyone experienced something similar? Any help would be well received, as I'm a total noob with TrueNAS.

Forgot to mention: I've tried swapping power and SATA cables/ports with the same results. At some point I also tried upgrading the firmware on the drive to the latest available.

672
 
 
The original post: /r/datahoarder by /u/FrodoSynthesis05 on 2024-12-17 19:57:06.

About a year ago, I bought this mini PC to use as my home media server (Jellyfin, etc.), and it's been working great so far. The PC has one M.2 NVMe slot and a 2.5" SATA bay. Currently, I'm running a 500GB NVMe SSD and a 1TB HDD in a single mergerfs pool.

While I know a proper NAS would be ideal, I'm looking for an efficient and budget-friendly way to add more bulk, mechanical storage to my current setup. NVMe drives with several terabytes of capacity exist but are nowhere near as cost-effective as traditional HDDs.

Drive failure isn’t a major concern since all the media stored is easily replaceable—I’m just looking for straightforward ways to expand my storage given this setup.

Any ideas? Would appreciate any recommendations!

673
 
 
The original post: /r/datahoarder by /u/ChocodiIe on 2024-12-17 19:30:57.
674
 
 
The original post: /r/datahoarder by /u/Mysterious_Jelly_943 on 2024-12-17 19:29:58.

So for the last year I've been collecting things off the internet, mostly because I don't have internet or electricity for large portions of the day. I'm off-grid, pretty much living off a generator running a couple of times a day. But I have a 1 Gbit internet connection when the electricity is on.

Basically, I need to figure out the most cost-efficient way to get the biggest external HDD I can. I don't need a NAS, I don't need something that's going to be on all the time, and I don't need it to connect to anything over WiFi.

I need something that connects through USB 3 and basically works like an external drive. The data isn't so important that I couldn't get it again, for the most part, so I'm not super worried about a bunch of backup failsafes; I just need a good way to store data.

So I'm thinking: is my next step to just buy a 20TB WD My Book? Or is there a better way to house internal terabytes, and what would be more cost-effective in the long run? Or is there some other option I'm missing?

And I don't want to put internal HDDs into my computer, because I want to access the data on my laptop through USB.

Also, should I worry if it spends most of its time powered off? Should HDDs be on all the time?

675
 
 
The original post: /r/datahoarder by /u/Thin-Try5917 on 2024-12-17 18:56:05.

Hey there, I'm currently planning on building a new NAS. I already have a case (Intertech 30240, got it for free from work), but I'm not sure what motherboard I should use for this.

I need something that has at least 2 full-size PCIe slots, preferably at least 8 RAM slots, and ECC support, while being ATX format. Any recommendations?
