datahoarder
Who are we?
We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.
We are one. We are legion. And we're trying really hard not to forget.
-- 5-4-3-2-1-bang from this thread
What's the best way to make an offsite backup of 42 TB at this point with 20 Mbps of upload bandwidth? It would take over 6 months to upload while maxing out my connection.
Maybe I could sneakernet an initial backup then incrementally replicate?
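The "over 6 months" figure checks out. A quick back-of-envelope calculation (assuming decimal terabytes and an ideal, fully saturated 20 Mbps upstream with no protocol overhead):

```python
# Back-of-envelope transfer-time check: 42 TB over a 20 Mbps uplink.
tb = 42        # terabytes to upload (decimal TB)
mbps = 20      # upstream megabits per second

total_bits = tb * 1e12 * 8
seconds = total_bits / (mbps * 1e6)
days = seconds / 86400
print(f"{days:.0f} days (~{days / 30:.1f} months)")
```

That works out to roughly 194 days, about six and a half months, and real-world throughput would only be worse.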
Outside my depth, but I'll give it a stab. Identify which data is important (is the full 42 TB needed?). Can the data be split into easier-to-handle chunks?
If it can, then I'd personally do an initial sneakernet to get the first set of data over, then mirror differences on a regular basis.
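The seed-once-then-mirror-differences idea can be sketched as: on each sync pass, copy a file only when the destination copy is missing or older. This is a toy illustration only (real setups would use something like rsync or zfs send/receive); the directories and file names below are made-up demo values.

```python
# Toy "incremental mirror": copy only files that are missing or stale at the
# destination, so after the initial sneakernet seed each pass transfers only changes.
import os
import shutil
import tempfile

def mirror(src: str, dst: str) -> list[str]:
    """Copy files from src to dst when missing or older; return relative paths copied."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        out_dir = os.path.join(dst, rel)
        os.makedirs(out_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out_dir, name)
            if not os.path.exists(d) or os.path.getmtime(d) < os.path.getmtime(s):
                shutil.copy2(s, d)  # copy2 preserves mtime, so unchanged files skip next pass
                copied.append(os.path.join(rel, name))
    return copied

# Demo with throwaway directories standing in for the NAS and the offsite copy.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "a.raw"), "w") as f:
    f.write("seed data")

first_pass = mirror(src, dst)   # initial seed: everything copies
second_pass = mirror(src, dst)  # nothing changed: nothing copies
print(first_pass, second_pass)
```

With the initial seed hand-carried on disk, only the second kind of pass ever crosses the slow link.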