Honestly, I don't. The vast majority of my data is just stuff like Linux ISOs that I could download again. Important documents and stuff like that take up so little space that I just keep them in Google Drive. Most of my personal project work is on GitHub. And while neither of those are technically backups, it's not a tragic loss if I accidentally delete everything.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Yeah it's weird, 10+ years ago or so I feel like I had SO MUCH DATA and it was always an issue. Now I really don't have anything. A few gigs of photos I guess, some various files, but that's it. I guess I used to have a lot more media like movies and porn, which I don't really need anymore.
I have a Borg server in the office that takes backups of all my servers. Each server stores its application backups, which get pulled into the repo. On top of that, the Borg server pushes the backups to rsync.net.
All of this is monitored by my Zabbix server.
Define which data is of value. I've got 68 TB of data, but realistically only 3 TB is valuable enough that I maintain several copies (Raspberry Pi + SSD) plus an online backup. The rest is stored on a cheap server built at a family member's place and synchronized twice a year. Make sure your systems and drives are all encrypted, and test your backups and redeployment strategy.
Restic to Wasabi.
I used to use Backblaze B2, until I did the maths on how much a restore would cost me. B2 storage is cheap, yes, but the egress is so fucking expensive. It would have cost me hundreds.
Wasabi storage is equally cheap, and restoring won't cost me an arm and a leg.
I use the following scripts for Restic: https://gitlab.com/finewolf-projects/restic-wrapper-scripts
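For anyone curious what pointing restic at Wasabi actually looks like: it's just restic's generic S3 backend. The endpoint below is Wasabi's us-east-1; the bucket name and credentials are placeholders.

```shell
# Hypothetical Wasabi setup for restic via its S3-compatible backend.
# Fill in real keys from the Wasabi console; bucket name is made up.
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export RESTIC_PASSWORD="..."
export RESTIC_REPOSITORY="s3:https://s3.wasabisys.com/my-backup-bucket"
# then, once: restic init
# and on a schedule: restic backup /home
```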
Wasabi is cheaper than B2 unless...
- you store less than 1 TB (they bill for a minimum of 1 TB even if you store almost nothing)
- you delete data within 90 days of uploading it (anything you upload is billed for a minimum of 90 days, so if you upload 500 GB and delete it straight away, you still pay for the full period)
- you download more in a month than you store (egress beyond that allowance incurs charges)
Those three points are how they can afford not to charge egress for the majority of people.
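A back-of-the-envelope way to see where the crossover sits. The list prices here are assumptions (roughly $6/TB/month for B2 and $6.99/TB/month for Wasabi) and may have changed, so check the current rate cards:

```shell
# Rough monthly storage-cost sketch; prices are assumed, not authoritative.
stored_gb=500                                     # what you actually keep
billable_gb=$stored_gb
[ "$billable_gb" -lt 1024 ] && billable_gb=1024   # Wasabi's 1 TB minimum
wasabi=$(awk -v gb="$billable_gb" 'BEGIN { printf "%.2f", gb/1024*6.99 }')
b2=$(awk -v gb="$stored_gb" 'BEGIN { printf "%.2f", gb/1024*6.00 }')
echo "Wasabi: \$$wasabi  B2: \$$b2"
```

Below the 1 TB floor, B2 wins on storage price; above it, Wasabi's flat rate plus no egress charges (within its download allowance) tends to win.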
I use restic/borg (depending on servers) and push to a bunch of S3 buckets on Backblaze. This applies to my desktop, my NAS and in general my non-Kubernetes data.
For Kubernetes I wrote a small tool that...well does the same for PVCs. Packs up the data with restic (soon I hope to migrate to rustic, once the library gets polished) and pushes to Backblaze.
To give an idea of the pricing, for 730GB, with daily backups or more, I pay approximately $5 a month.
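That figure is consistent with B2's storage list price (assumed here at about $6/TB/month; API-call charges come on top):

```shell
# Storage-only estimate for 730 GB; assumes ~$6/TB/month, ignores API fees.
cost=$(awk 'BEGIN { printf "%.2f", 730/1024*6.00 }')
echo "approx \$$cost/month for 730 GB"
```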
Restic is fantastic. It's just one binary, has support for various cloud services (including Backblaze which I use as well), snapshots which can be mounted with FUSE. It's really quite useful. Borg I believe is similar?
Either way, I feel like today there is no reason to use awkward rsync solutions when better tools are out there that have proven themselves.
Ah yes, automated backups: on my to-do list, which I'll hopefully get to before a failure (famous last words). People keep talking about Backblaze B2. I just looked; why not use the Personal plan? The one computer would just be the NAS, if using it for cold storage/redundancy?
To copy a comment from reddit:
HTWingNut:
Backblaze Personal only works with Windows PCs and Macs, and with drives that are physically connected to the computer. No VMs, no network drives/hardlinks/symlinks, etc. You have to use their software to back up, too. As someone else noted, for recovery you can grab files in 500 GB chunks as a zip, or have an 8 TB drive mailed to you (free of charge, up to 5 per year). Data needs to be retained on your local drives, otherwise it will be deleted from their servers after 30 days unless you upgrade to their 1-year retention plan.
I have a Windows PC that is on 24/7 for a number of things; I just put a hard drive in it, back up my most important NAS files to that drive, and it gets backed up to Backblaze Personal.
Backblaze Personal is cheap and I see the appeal, but you have to understand and live with those caveats for "unlimited" backup.
I use B2 with rclone and just back up the "important" stuff on my NAS with cron jobs. I guess you could have rclone move the "important" stuff from the NAS to a "burner" PC that uses Backblaze Personal.
I don't have enough data to warrant all that, so I use B2 for now. I have around 50 GB of data, so the price is cheap.
I use Duplicati to back up configs and data for Docker containers to two cloud services. My 8 TB server is almost maxed; I need funds to buy a backup for that and expand.
I know Synology (and others, probably) have an app where you can back up your data to your friend's NAS and vice versa, but that takes up their storage too, and the cost of HDDs/SSDs may be prohibitive.
Do you have any family or friends who are willing to let a small NAS sit around somewhere? Or host a friend's backup and in return they host yours? For me, this approach works well and is probably as cheap as it can get. To just back up some data over the internet, any cheap old NAS will do. I have an old NAS sitting at my parents' and just manually turn it on when I'm visiting. A small startup script runs rsync without further interaction and shuts the box down when finished.
2 spare drives and a safe deposit box ($10/yr). Swap the bank box once a month or so. My upstream bandwidth isn't enough to make cloud backups practical, and if anything happens, retrieving the drive is faster than shipping a replacement, never mind restoring from the cloud.
Of course, my system is a few TB, not a few dozen.
My home "offsite" backup is a second NAS at my parents' house. I plan on getting two identical NASes with identical storage setups and letting them replicate automatically, but no money for that now.
I don't do 3 2 1, I do 3 1 1
I have a 2 x 8 TB RAID1 NAS at a family member's house, and I also have an OVH dedicated server with 2 x 480 GB in RAID1 and 2 x 8 TB in RAID1. I use rclone for my backups and keep deleted files for 30 days on the NAS and 120 days on the OVH dedicated server. Both the NAS and the server connect back to my home network using WireGuard.
The OVH dedicated server also runs numerous virtual machines that host websites, as well as backups of the NetBox and MediaWiki instances I run at home (they sync nightly).
Backblaze B2 sync from my NAS. All my client computers use Syncthing or Nextcloud to sync to the NAS.
Like many others here, I back up all important data from my Truenas to B2. Have a couple hundred gigs. It's like 2 bucks a month.
Backblaze, using QNAP's backup software.
I use Syncthing to synchronise my collection of important stuff between my laptop, local server and VPS. My laptop then gets backed up to a USB SSD using Time Machine. Granted, it’s not a proper backup, but it’s better than nothing.
For my photo collection, I burned it to Blu-ray (M-DISC) and asked my SO to store it at work.
- Backblaze B2
- External hard drives at a friend's house
- M-Discs, copies at home and a friend's house
Hetzner Storage Box, and just rsync. It takes care of snapshotting via automatic snapshots. It costs like $20 for 1 TB, I think, but there are cheaper options too.
I’ve got two synology NASes. My current backup strategy is to backup everything between the two NASes so I have two copies of everything locally. Then I back up documents, photos, pretty much everything except TV shows and movies to Backblaze.
I have a local backup only drive for pictures and critical laptop backups and use rsync nightly. I also do rsync nightly to Backblaze for pictures. Figure if I can grab the drive I will have it stored offsite.
Duplicati to Hetzner storage. Working on replacing Duplicati with Borg, because Mono.
Backblaze: move everything you want to an externally attached HDD, then back that up with the Backblaze client.
I use Borg + borgmatic (although I may be a little biased there...) and backup to BorgBase and rsync.net. When figuring out where your "cheapish" off-site backup solution should be, you need to take into account: How much data you want to store, how much you expect it to be deduplicated, how much you expect it to grow, and your needs for retrieval and egress. See some of the other comments here on some of the pros/cons of various providers.
Also, it should be said that Borg doesn't directly support non-SSH cloud storage providers, although you could always backup with Borg locally and then rclone that to a cloud provider. Restic does support non-SSH cloud storage directly, but then no borgmatic. So, 🤷.
Locally I have a mix of SnapRAID and mirroring across 2 servers. Then I use restic to backup select directories/files into Backblaze B2 cloud storage.
I've never considered off-site storage. You got me thinking
I run a Synology NAS and use their backup solution Synology C2. It's e2e encrypted, pretty affordable and well integrated into the system, so it was basically a one-click setup. Also, they keep old versions for 30 days, but only the most recent versions count towards your quota, which makes the space usage very predictable.
I use S3 sync via the cli and use lifecycle policies to manage number of snapshots and deletion.
Some cool options for moving files to different tiers like cold and glacier but I don't know enough about it or the retrieval costs to use it just yet
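For reference, a lifecycle rule that caps how long old snapshot versions stick around might look like this on a versioned bucket (the prefix is hypothetical); it's applied with `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "expire-old-backup-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30,
        "NewerNoncurrentVersions": 5
      }
    }
  ]
}
```

This keeps the 5 most recent noncurrent versions and expires anything older than 30 days; Glacier-tier transitions would be a separate `Transitions` rule, with retrieval costs worth checking first.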
I won't go into my solution here, but the one tip I'll give you is: don't use cloud-based storage. Restoring is slow if we're talking terabytes, it's expensive compared to buying a disk, and your data is never truly safe.
Buy another drive, backup to it and have it on a rotation schedule. I keep my "offsite" backup in the boot of my car. If I'm not at home I'm usually away with my car.
Storing a hard disk in a car or any other moving vehicle is strongly discouraged. The vibration will kill your drive. There are stories of companies moving drives on trolleys across their car park only to find the data has been lost.
They're in shockproof and waterproof containers.
B2 from my NAS with Duplicacy. Set it up with healthchecks.io to let me know if it stops, and it works without a flaw.
restic hourly backup to external SSD + idrive e2
Raspberry Pi - USB HDD - borg backup - parents home :)
CrashPlan can't tell the difference between local folders and NFS mounts, and they have an unlimited-size backup plan per device for like $10/month. I have one device with NFS mounts from many desktops and my NAS. About 9 TB.
Systems backup to NAS via restic
The NAS restic repo is stored online on a dedicated internal drive, which is mirrored to an external drive (normally kept offline in a safe when not being synced), and offsite is a third copy to Backblaze B2 using rclone.
Urbackup for workstations, and Proxmox Backup Server for my 2 Proxmox hosts.
Both configured with borg backups to rsync.net.
I haven't configured it yet, but I'm planning on using rsync.net for my Synology as well (which is mostly archive storage).