I'm always backing up with Syncthing in real time, but every week I also do an off-site tarball backup that sits outside the Syncthing setup.
I run Borg nightly, backing up the majority of the data on my boot disk, including Docker volumes and config, plus a few extra folders.
Each individual archive is around 550 GB, but because of the deduplication and compression it's only ~800 MB of new data each day, and the backup takes around 3 minutes to complete.
Borg's deduplication is honestly incredible. I keep 7 daily backups, 3 weekly, 11 monthly, then one for each year beyond that. The 21 historical backups I have right now would be 10.98 TB of data raw. After deduplication and compression they take up only 407.98 GB on disk.
With that kind of space savings, I see no reason not to keep such frequent backups. Hell, the whole archive takes up less space than one copy of the original data.
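For anyone curious, here's a minimal sketch of what a nightly job with that retention scheme can look like. The repo path and source folders are placeholders for my setup, and --keep-yearly just needs a number large enough to be effectively unlimited:

```bash
#!/bin/sh
# Nightly Borg backup: deduplicated, compressed archive.
# Repo path and source paths below are placeholders.
export BORG_REPO=/mnt/backup/borg

borg create --stats --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d}' \
    /etc /home /var/lib/docker/volumes

# Retention matching the scheme above: 7 daily, 3 weekly, 11 monthly,
# one per year beyond that (99 here standing in for "unlimited").
borg prune --keep-daily 7 --keep-weekly 3 --keep-monthly 11 --keep-yearly 99
```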
+1 for borg
               Original size    Compressed size    Deduplicated size
This archive:      602.47 GB          569.64 GB             15.68 MB
All archives:       16.33 TB           15.45 TB            607.71 GB

               Unique chunks    Total chunks
Chunk index:         2703719        18695670
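(That summary is what `borg info` prints for a single archive; the repo path and archive name here are placeholders:)

```bash
borg info /mnt/backup/borg::myhost-2024-01-15
```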
Thanks for sharing the details on this, very interesting!
Once every 24 hours.
Yep. Even if the data I'm backing up doesn't really change that often. Perhaps I should start backing up files from my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything is a bit of a chore.
Backups???
RAID is a backup.
That is what the B in RAID stands for.
Just like the “s” in IoT stands for “security”
🤣
What's the second B stand for?
Beets.
Or bears.
Or buttsex.
It’s context dependent, like “cool”.
cool
If RAID is a backup, then Unraid is...?
My Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set that goes off-site and is rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies and infinite annuals.
Periodic test restores of all backups at various granularities at least monthly or whenever I'm bored or fuck something up.
Yes, former sysadmin.
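For the daily differential replication piece, a minimal sketch of an incremental ZFS send; dataset, snapshot, and remote names here are all made up:

```bash
# Send only the blocks that changed between yesterday's and today's
# snapshots to the off-site pool (names are placeholders).
zfs snapshot tank/guests@2024-01-16
zfs send -i tank/guests@2024-01-15 tank/guests@2024-01-16 \
    | ssh user@offsite zfs recv -F backup/guests
```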
This is very similar to how I run mine, except that I use Ceph instead of ZFS. Nightly backups of the CephFS data with Duplicati, followed by staggered nightly backups of all VMs and containers to a PBS VM on the NAS. File backups from Unraid get sent up to CrashPlan.
Slightly fewer retention points to cut down on overall storage, and a similar test pattern.
Yes, current sysadmin.
I would like to play with Ceph, but I don't have a lot of spare equipment anymore, and I understand ZFS pretty well and trust it. Maybe on the next cluster upgrade, if I ever do another one.
And I have an almost unhealthy paranoia after seeing so many shitshows in my career, so having a pile of copies just helps me sleep at night. The day I have to delve into the last layer is the day I build another layer, but that hasn't happened recently. PBS dedup is pretty damn good, so it's not much extra to keep a lot of copies.
I do not as I cannot afford the extra storage required to do so.
Daily backups here. Storage is cheap. Losing data is not.
I use Duplicati for my backups, and have backup retention set up like this:
Save one backup each day for the past week, then save one each week for the past month, then save one each month for the past year.
That way I have granular backups for anything recent, and the further back in the past you go, the less frequent the backups are, to save space.
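In Duplicati that maps onto a custom retention string, something like the sketch below (syntax per Duplicati's retention-policy option; the duplicati-cli wrapper, target URL, and source path are assumptions for illustration):

```bash
# For 1 week keep one backup per day, for 1 month one per week,
# for 1 year one per month.
duplicati-cli backup file:///mnt/backups /data \
    --retention-policy="1W:1D,1M:1W,1Y:1M"
```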
rsync from ZFS to an off-site Unraid every 24 hours, five days a week. On the sixth day it does a checksum-based rsync, which obviously means more stress, so that only runs once a week. The seventh day is reserved for ZFS scrubbing, every two weeks.
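A crontab sketch of that rotation, with paths and host made up:

```bash
# Mon-Fri: normal rsync (mtime/size comparison); Sat: full checksum pass;
# Sun: left free for the fortnightly ZFS scrub.
0 3 * * 1-5  rsync -a /tank/data/ backup@offsite:/mnt/user/data/
0 3 * * 6    rsync -ac /tank/data/ backup@offsite:/mnt/user/data/
```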
Right now, I have a cron job set to run on Monday and Friday nights. Is this too frequent?
Only you can answer this. How many days of data are you prepared to lose? What are the downsides of running your backup scripts more frequently?
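For reference, that Monday/Friday night schedule as a crontab entry (the script path is hypothetical):

```bash
# 23:00 on Monday (1) and Friday (5)
0 23 * * 1,5  /usr/local/bin/backup.sh
```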
I back up all of my Proxmox LXCs/VMs to a Proxmox Backup Server every night, and sync those backups to another PBS in another town. A second Proxmox backup runs every day at noon to my NAS. (I know, the 3-2-1 rule isn't quite met...)
Every hour, automatically
Never on my laptop, because I'm too lazy to create a mechanism that detects when it's possible.
I just tell it to back up my laptops every hour anyway. If it’s not on, it just doesn’t happen, but it’s generally on enough to capture what I need.
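A plain hourly cron entry behaves exactly like that: missed runs are simply skipped while the machine is off (the script path is hypothetical):

```bash
# Top of every hour; if the laptop is asleep or off, the run just
# doesn't happen.
0 * * * *  /usr/local/bin/laptop-backup.sh
```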
I have:
- Unraid backing up its USB drive
- Unraid apps getting backed up weekly by a Community Applications plugin (CA Backup), and I use rclone to copy that to an old Box account (100 GB for life..). I did have it encrypted, but it seems I need to fix that..
- A parity drive on my Unraid (8 TB)
- I am trying to understand how to use rclone to back up my photos to Proton Drive, so that's next.

Music and media aren't too important yet, but I would love some insight.
Nextcloud data daily, same for the docker configs. Less important/rarely changing data once per week. Automatic sync to NAS and online storage. Irregular and manual sync to an external disk.
7 daily backups, 4 weekly backups, "infinite" monthly backups retained (until I clean them up by hand).
Boils down to how much you are willing to lose. Personally, I do weekly.
I classify the data according to its importance (gold, silver, bronze, ephemeral). The regularity of the zfs snapshots (15 minutes to several hours) and their retention time (days to years) on the server depends on this. I then send the more important data that I cannot restore or can only restore with great effort (gold and silver) to another server once a day. For bronze, the zfs snapshots and a few days of storage time on the server are enough for me, as it is usually data that I can restore (build artifacts or similar) or is simply not that important. Ephemeral is for unimportant data such as caches or pipelines.
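As a rough illustration of the tiering (dataset names and schedules here are invented, pruning of old snapshots isn't shown, and a tool like sanoid can handle both ends):

```bash
# Gold: every 15 minutes; silver: hourly; bronze: daily at 03:00.
# In crontab, % must be escaped as \%.
*/15 * * * *  zfs snapshot tank/gold@auto-$(date +\%Y\%m\%d-\%H\%M)
0 * * * *     zfs snapshot tank/silver@auto-$(date +\%Y\%m\%d-\%H\%M)
0 3 * * *     zfs snapshot tank/bronze@auto-$(date +\%Y\%m\%d-\%H\%M)
```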
If you haven't tested your backups, you ain't got a backup.
Local zfs snap every 5 mins.
Borg backs up everything hourly to 3 different locations.
I've blown away docker folders of config files a few times by accident. So far I've only had to dip into the zfs snaps to bring them back.
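Pulling a clobbered config back out of a local snapshot is just a copy from the hidden .zfs directory; the paths and snapshot name here are placeholders:

```bash
cp -a /tank/docker/.zfs/snapshot/auto-20240115-1200/myapp/config.yml \
      /tank/docker/myapp/config.yml
```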
No backup for my media. Only redundancy.
For my Nextcloud data, any time I make major changes.
Assuming it is on: Daily
Weekly full backup, nightly incremental
> I have a cron job set to run on Monday and Friday nights, is this too frequent?
Only you can answer that - what is your risk tolerance for data loss?
Daily, to all three of my locations:
- local on the server
- in-house but on a different device
- offsite
But not all three destinations back up the same amount of data, due to storage limitations.
Most backup software allows you to configure backup retention. I think I went with the pretty standard once per day for a week. After that they get deleted, and it keeps just one per week of the older ones, for one or two months. And after that it's down to monthly snapshots. I think that aligns well with what I need. Sometimes I find out something broke the day before yesterday. But I don't think I ever needed a backup from exactly the 12th of December or something like that. So I'm fine if they get more sparse over time. And I don't need full backups more often than necessary; an incremental backup will do unless there's some technical reason to do full ones.
But it entirely depends on the use case. Maybe for a server, or stuff you work on, you don't want to lose more than a day. While it can be perfectly alright to back up a laptop once a week, especially if you save your documents in the cloud anyway. Or you're busy during the week and just mess with your server configuration on weekends; in that case you might be alright with taking a snapshot on Fridays. Idk.
(And there are incremental backups, full backups, filesystem snapshots. On a desktop you could just use something like Time Machine... You can do different filesystems at different intervals...)
Timeshift creates a btrfs snapshot on each boot for me. And my server gets nightly borg backups.
I continuously back up important files/configurations to my NAS. That's about it.
IMO people who mirror or back up their media are insane... It's such an incredible waste of space. Having a robust media library is nice, but there's no reason you can't just start over if you have data corruption or something. I have TBs and TBs of media that I could redownload in a weekend if something happened (if I even wanted to). No reason to waste backup space, IMO.
Maybe for common stuff, but some don't want 720p YTS or YIFY releases.
There are also some releases that don't follow TVDB aired order (which Sonarr requires), and matching 500 episodes with deviating names manually isn't exactly what I call 'fun time'.
And there are also rare releases that just aren't seeded anymore in that specific quality, or present on Usenet.
So yes: backing up some media files may be important.
Data hoarding random bullshit will never make sense to me. You're literally paying to keep media you didn't pay for, because you need the 4K version of Guardians of the Galaxy 3 even though it was a shit movie...
Grab the YIFY; if it's good, then get the 2160p version... No reason to datahoard like that. It's frankly just stupid, considering you're paying to store this media.
This may work for you, and please continue doing that.
But I'll get the 1080p, moderate-bitrate version of whatever I actually want in the first place, not grab whatever I can to fill up my disk.
And as I mentioned: matching 500 episodes (e.g. Looney Tunes and Disney shorts) manually isn't fun.
Much less if you also want the exact release of a certain piece of media (music, for example) and need to play detective on MusicBrainz.
> Matching 500 episodes (e.g. Looney Tunes and Disney shorts) manually isn't fun.
With tools like TinyMediaManager, why in the absolute fuck would you do it manually?
At this point, it sounds like you're just bad at media management more than anything. 1080p H.265 video is at most 1.5-2 GB per file. That means that with even a modest connection (let's say 500 Mbps) you can realistically download 5 TB of data over 24 hours... You could redownload your entire media library in 4-5 days if you wanted to.
So why spend ~$700 on two 20 TB drives, one used purely for redundancy, when you can simply redownload everything you previously had (if you wanted to) for free? It'll just take a little bit of time.
Complete waste of money.
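(Sanity-checking that math: 500 Mbps is roughly 62.5 MB/s, and 62.5 MB/s × 86,400 seconds is about 5.4 TB per day, so ~5 TB in 24 hours holds up, assuming you can sustain full line rate.)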
I prefer Sonarr for management.
The problem is the auto-matching.
It just doesn't always work.
Practical example: Looney.Tunes.and.Merrie.Melodies.HQ.Project.v2022
Some episodes are either not in the correct order, or their names deviate from how TVDB sorts them.
Your best regex/auto-matching can do nothing about it when Looney.Tunes.Shorts.S11.E59.The.Hare.In.Trouble.mkv
should actually be named Looney.Tunes.Shorts.S1959.E11.The.Hare.In.A.Pickle.mkv
to be imported automatically.
At some point fixing multiple hits becomes so tedious it's easier to just clear all auto-matches and restart fresh.
It becomes a whole different thing when you yourself are a creator of any kind. Sure you can retorrent TBs of movies. But you can't retake that video from 3 years ago. I have about 2 TB of photos I took. I classify that as media.
> It becomes a whole different thing when you yourself are a creator of any kind.
Clearly this isn't the type of media I was referencing....
Longest interval is every 24 hours, with some more frequent, like every 6 hours or so for my game servers.
I have multiple backups (3-2-1 rule): one is just the important stuff as a file backup, the other is a full bootable system image of everything.
With proper backup software, incremental backups don't use any more space unless files change, so there's no real downside to more frequent backups.