Use Borg Backup. It has built-in deduplication: it works with chunks, not files, so it recognizes identical chunks and avoids storing them multiple times. It will deduplicate your files and find duplicated chunks even in files you didn't know had duplicates. You can keep your files duplicated or clean them out; either way, the Borg backups will be optimized.
Here are the stats from a backup of one server with approx. 600 GB:
                Original size    Compressed size    Deduplicated size
This archive:       592.44 GB        553.58 GB             13.79 MB
All archives:        14.81 TB         13.94 TB            599.58 GB

                Unique chunks    Total chunks
Chunk index:          2760965        19590945
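For context, producing an archive with stats like these looks roughly like this (Borg 1.x syntax; the repository path, compression choice, and archive name are placeholders, not what was used above):

# Create a repository once, then back up into it; --stats prints the table above.
borg init --encryption=repokey /backup/borg-repo
borg create --stats --compression zstd /backup/borg-repo::{hostname}-{now} /home /etc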
13meg.... nice
jdupes is my go-to solution for file deduplication. It should be able to remove duplicate files. I don't know how much control it gives you over which duplicate to remove though.
It is so so fast
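For reference, typical jdupes invocations look something like this (the directory is a placeholder):

jdupes -r ~/merged-home     # list duplicate sets, recursively
jdupes -rd ~/merged-home    # interactively choose which copy to delete in each set
jdupes -rdN ~/merged-home   # non-interactive: keep the first file in each set, delete the rest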
Be aware that halfway decent backup solutions dedupe. Which is not to say you shouldn't clean your shit up. I vote https://github.com/qarmin/czkawka.
Make sure to make the first backup before you run any deduplication, just in case it goes sideways.
Restic
I believe ZFS has deduplication built in if you want a separate backup partition. Not sure about its reliability though. Personally I just have a script that keeps a backup and an oldbackup, and they are both fairly small. I keep a file in my home dir called excluded for things like Linux ISOs that don't need to be backed up.
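A minimal sketch of that kind of backup/oldbackup rotation, assuming an exclude list at ~/excluded and a mounted backup drive (all paths are placeholders, not the commenter's actual setup):

#!/bin/sh
# Rotate the previous backup, then sync home into a fresh one.
rm -rf /mnt/backup/oldbackup
mv /mnt/backup/backup /mnt/backup/oldbackup
rsync -a --exclude-from="$HOME/excluded" "$HOME/" /mnt/backup/backup/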
BTRFS also supports deduplication, but not automatically. duperemove will do it, and you can set it up as a cron task if you want.
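A hedged sketch of running duperemove by hand or from cron (the paths are placeholders):

# -d actually submits dedupe requests to the kernel, -r recurses;
# --hashfile lets repeat runs skip files that haven't changed.
duperemove -dr --hashfile=/var/cache/duperemove.hash /mnt/data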
btrbk
As said previously, Borg is a fully deduplicating incremental archiver, complete with compression. You can use relative paths temporarily to build up your backups and a full backup history, then use something like Pika to browse the archives to ensure a complete history.
I did not ask for a backup solution, but for a deduplication tool
Tbf you did start your post with
I’m in the process of starting a proper backup
So you’re going to end up with at least a few people talking about how to onboard your existing backups into a proper backup solution (like borg). Your bullet points could probably be organized into a shell script with sync, but why? A proper backup solution with a full backup history is going to be way more useful than dumping all your files into a directory and renaming in case something clobbers. I don’t see the point in doing anything other than tarring your old backups and using borg import-tar (docs). It feels like you’re trying to go from one half-baked, odd backup solution to another, instead of just going with a full, complete solution.
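Roughly, the import-tar route looks like this with a recent Borg (1.2+); the repository path, tar name, and archive name are placeholders:

# Wrap an old copy-paste backup in a tar, then import it as its own dated archive.
tar -cf /tmp/old-backup-2021.tar -C /mnt/old-backups 2021/
borg import-tar /path/to/repo::old-backup-2021 /tmp/old-backup-2021.tar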
Use rm with the redundant files option.
rm -rf /
hardlink
Most underrated tool that is frequently installed on your system. It recognizes BTRFS. Be aware that there are multiple versions of it in the wild.
It runs unattended.
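Usage is roughly this; the exact flags depend on which hardlink you have (util-linux vs. the older Red Hat one), so check your man page:

hardlink --dry-run -v ~/merged-home   # preview which duplicates would be linked together
hardlink -v ~/merged-home             # actually replace duplicates with hard links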
Is hardlink the same as ln without the -s switch?
I tried reading the page but it's not clear
ln creates a hard link, ln -s creates a symlink.
So, yes, the hardlink tool effectively replaces a file's duplicates with hard links automatically, as if you'd used ln manually.
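A quick illustration of the difference:

echo data > original.txt
ln original.txt hard.txt       # hard link: another name for the same inode, same data on disk
ln -s original.txt soft.txt    # symlink: a small pointer file that refers to original.txt by name
ls -li original.txt hard.txt soft.txt   # hard.txt shares the inode number; soft.txt doesn't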
Ahh! Cool! Thanks for the explanation.
This will indeed save space, but I don't want links either. I want unique files.
Instead of trying to parse the old stuff, could you just run something like borg and then delete the old copypaste backup? Or are there other files there that you need to go through? I ask because I went through a similar thing switching my backups from rsync to borg
I had multiple systems which at some point were syncing with Syncthing, but over time I stopped using my desktop computer and the Syncthing setup went unmaintained. I had to remove the SSD from the old desktop, so I yoinked the home directory and saved it onto my laptop. As you can probably tell, a lot of stuff got duplicated and a lot of stuff diverged over time. My idea is to merge everything into my laptop's home directory and deduplicate it, and only go through the diverged files manually, as that would be less work. I don't think doing a backup with all my redundant files in place would be a good idea, as the initial backup would include other backups and a lot of duplicated files.
Ah ok gotcha.
I have exactly the same problem.
I got as far as using fdupes to identify duplicates and delete the extras. It was slow.
Thinking about some of the other comments... If you use a tool to create hard links first, one could then traverse the entire tree and delete a file if it has more than one hard link. The two phases could be done piecemeal and are cancelable and restartable.
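A hedged sketch of those two phases, assuming the tree is ~/merged-home and a hardlinking tool such as hardlink or jdupes -L handles phase one; note that -links +1 will also match files hard-linked from outside the tree:

# Phase 1: turn duplicates into hard links.
hardlink -v ~/merged-home

# Phase 2: remove extra links. find re-checks the link count as it goes,
# so the last remaining link (count 1) is kept and the data survives.
find ~/merged-home -type f -links +1 -print      # review first
# find ~/merged-home -type f -links +1 -delete   # then actually delete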
That sounds doable. I would however not trust myself to code something bug-free on the first go xD
Backup, backup, backup! If you have btrfs, then just take a snapshot first: it's instant.
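Assuming your home is a btrfs subvolume, something like this (paths are placeholders):

# Read-only snapshot you can copy files back out of if the cleanup goes wrong.
sudo btrfs subvolume snapshot -r /home /home/.snapshot-before-dedupe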
One could do a non-destructive rename first, e.g. prepend deleteme. to the file name, sanity-check it, then 'roll back' by renaming back without the prefix, or commit and delete anything with the prefix.
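A minimal sketch of that mark / sanity-check / rollback-or-commit flow (the deleteme. prefix is from the comment above; it breaks on filenames containing newlines):

# Mark a single file (non-destructive):
mv -- "path/to/file" "path/to/deleteme.file"

# Rollback: strip the prefix again.
find . -name 'deleteme.*' -type f | while read -r f; do
    mv -- "$f" "$(dirname -- "$f")/$(basename -- "$f" | sed 's/^deleteme\.//')"
done

# Commit: delete everything still carrying the prefix.
find . -name 'deleteme.*' -type f -delete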
Take a look at Borg. It is a very well-suited backup tool that has deduplication.
I don't actually know, but I bet that's relatively costly, so I would at least try to be mindful of efficiency, e.g.:
- use find to start only with large files, e.g. > 1 GB (depends on your own threshold)
- look for a "cheap" way to find duplicates, e.g. exact same size (far from perfect, yet I bet it is sufficient in most cases)

then, after trying a couple of times:
- find a "better" way to confirm duplicates, e.g. SHA1 (quite expensive)
- lower the threshold to include more files, e.g. > 0.1 GB

and possibly heuristics, e.g.:
- directories where all filenames are identical, maybe based on locate/updatedb, which is most likely already indexing your entire filesystem

Why do I suggest all this rather than a tool? Because I bet a lot of the decisions have to be made manually.
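A rough sketch of that staged approach with GNU find (the 1G threshold and the path are examples; it breaks on filenames containing tabs or newlines):

# Pass 1: only large files, recorded as "size<TAB>path", sorted by size.
find ~/merged-home -type f -size +1G -printf '%s\t%p\n' | sort -n > big-files.tsv

# Pass 2 ("cheap"): keep only files whose exact size occurs more than once.
awk -F'\t' 'NR==FNR {count[$1]++; next} count[$1] > 1' big-files.tsv big-files.tsv > candidates.tsv

# Pass 3 ("better", expensive): hash just those candidates and group identical digests.
cut -f2- candidates.tsv | xargs -d '\n' sha1sum | sort > hashed.txt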
fclones https://github.com/pkolaczk/fclones looks great, but I haven't used it, so I can't vouch for it.
I was using Radarr/Sonarr to download files via qBittorrent and then hardlink them to an organized directory for Jellyfin, but I set up my container volume mappings incorrectly and it was only copying the files over, not hardlinking them. When I realized this, I fixed the volume mappings and ended up using fclones to deduplicate the existing files and it was amazing. It did exactly what I needed it to and it did it fast. Highly recommend fclones.
I've used it on Windows as well, but I've had much more trouble there, since I like to write the output to a file first to double-check it before catting the information back into fclones to actually deduplicate the files it found. I think running everything as admin works, but I don't remember.
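That write-the-report-first workflow is roughly (the path is a placeholder):

fclones group ~/merged-home > duplicates.txt   # find duplicate groups and save the report
less duplicates.txt                            # sanity-check it before acting on it
fclones remove < duplicates.txt                # keep one copy per group, delete the rest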
If you use rmlint as others suggested, here is how to check the paths of the dupes:
jq -c '.[] | select(.type == "duplicate_file").path' rmlint.json
FWIW, I just did a quick test with rmlint, and I would definitely not trust an automated tool to remove files on my filesystem, as a user. If it's for a proper data filesystem, basically a database, sure, but otherwise there is plenty of legitimate duplication, e.g. ./node_modules, so the risk of breaking things is relatively high. IMHO it's better to learn why there are duplicates on a case-by-case basis, but again I don't know your specific use case, so maybe it'd fit.
PS: I imagine it'd be good for a content library, e.g. ebooks, ROMs, movies, etc.
FSlint will do some of these things once you configure its actions.
I use rsync and ZFS snapshots
For backup or for file-level deduplication?
If the latter, how?
1. rsync can sync hard links correctly (with -H).
2. ZFS has pretty fast block-level deduplication (zfs set dedup=edonr,verify) when the block size is 1 MB (zfs set recordsize=1M).
3. In reality I tried to achieve a proper data structure, but it was way too time-consuming and I couldn't get any other work done, so I settled on ZFS as a history backtrack: I can roll back if I accidentally delete something very important, and I still get all the aforementioned ZFS benefits.
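For reference, the properties and the "history backtrack" mentioned above would be set up roughly like this (the dataset name tank/home is a placeholder; the property controlling the 1 MB block size is recordsize):

zfs set dedup=edonr,verify tank/home
zfs set recordsize=1M tank/home
zfs snapshot tank/home@before-merge      # the rollback point
# zfs rollback tank/home@before-merge    # undo if something important was deleted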
What about folders? Sometimes when you have duplicated folders (often with a lot of nested subfolders), a file deduplicator will take forever. Do you know of a tool that works with duplicate folders?
What do you mean that a file deduplicator will take forever if there are duplicated directories? That the scan will take forever, or that manual confirmation will take forever?
That manual confirmation will take forever