submitted 8 months ago* (last edited 8 months ago) by nameisnotimportant@lemmy.ml to c/linux@lemmy.ml

My dear lemmings,

I discovered Clonezilla a while ago and it still is my main tool to backup and restore the partitions I care about on my computers.

I cannot help but wonder whether there are now better, more efficient alternatives, or whether it's still a solid choice. There's nothing wrong with it; I'm just curious about others' practices and habits, and whether newer tools or solutions are available.

Thank you for your feedback, and keep your drives safe!

[-] Toribor@corndog.social 22 points 8 months ago* (last edited 8 months ago)

Generally I just don't take clones of disk partitions anymore. They tend to take up too much disk space to keep more than one or two backups, and they typically require the disk to be unmounted, which makes it a mostly manual process. That all but guarantees that any backup I take will be out of date when I need it most.

Instead I've found it better to take regular automated file level backups and automate the way I configure my environment so that I can quickly restore and rebuild if something goes wrong.

If I just want to be able to quickly revert a drive to a previous state or have easy point-in-time restores, I manage the disk with ZFS. ZFS has a snapshotting feature which is great for this sort of thing, and you can even restore snapshots to another ZFS pool the same way you might restore a partition to another disk, but without all the hassle of resizing things.
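
For anyone who hasn't used it, the workflow is roughly this (a sketch; "tank" and "backup" are just example pool/dataset names):

zfs snapshot tank/home@before-upgrade          # cheap, instant point-in-time snapshot
zfs list -t snapshot                           # see what snapshots exist
zfs rollback tank/home@before-upgrade          # revert the dataset in place
zfs send tank/home@before-upgrade | zfs receive backup/home   # replicate it to another pool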

[-] loie@lemmy.world 13 points 8 months ago

Seconding Rescuezilla: it's a Clonezilla front end with the sane defaults you'd probably pick anyway.

[-] MangoPenguin 13 points 8 months ago

It's difficult to use and has some odd defaults, as I remember, and you have to boot into it, which is annoying.

Rescuezilla seems like a good open source option, but you do still have to boot into it.

My go-to is the free Veeam Endpoint, as it just installs on the system and does full system images without needing to reboot. I'm not sure if there is a good, easy-to-use open source equivalent; so far I have not found one.

[-] Godort@lemm.ee 3 points 8 months ago

I also use Veeam at home for this. It's not FOSS, but it is still free, and works really well.

[-] nix@merv.news 4 points 8 months ago

I hate that it requires a phone to download unless you already have a download link

[-] const_void@lemmy.ml 1 points 8 months ago

Is the "restore media" universal or do you have to create a new USB drive for each computer you want to restore?

[-] MangoPenguin 1 points 8 months ago

It's universal unless you need to bake in specific drivers from a machine.

[-] blackstrat@lemmy.fwgx.uk 12 points 8 months ago

I use clonezilla at work for imaging and deploying laptops. It works like a charm. Great piece of software. It's not normal backup software though.

[-] MangoKangaroo@beehaw.org 8 points 8 months ago

I still use Clonezilla to back up devices before performing reinstalls/major updates (when Timeshift isn't practical). No issues so far backing up and restoring both Windows and Linux partitions/drives.

[-] Gabu@lemmy.ml 5 points 8 months ago

The main thing about Clonezilla is that you can always rely on it working, no matter the system. The bad thing is that proprietary solutions have a lot more creature comforts.

[-] BaldProphet@kbin.social 5 points 8 months ago* (last edited 8 months ago)

I have never gotten Clonezilla to work. I don't want to call it obsolete, but... it certainly isn't intuitive, and in 2024 I expect even open source software as widely known as Clonezilla to have a straightforward interface.

For simple data backups, I use Kopia.

EDIT: Apparently there's a GUI for Clonezilla called Rescuezilla. I'll have to give it a try sometime.

[-] KnightontheSun@lemmy.world 6 points 8 months ago

Somewhat curious how CZ has never worked for you. I've used it for years, and any failures it has had were fixed by tweaking some of the options. I love the tool myself, but I had also never heard of Rescuezilla, so thanks for that. I think I'll give that a go next time.

[-] BaldProphet@kbin.social 2 points 8 months ago

It's been a while since I tried it, so I don't recall exactly what didn't work the last time. I think it may have been driver related.

I'm definitely going to give it another go one of these days.

[-] rtxn@lemmy.world 1 points 8 months ago

It's definitely a beast at the best of times, but the scriptability is great.

Just a few weeks ago I used it to deploy a custom Win10 image to several hundred computers in a very heterogeneous environment in lite-server mode (basically PXE with extra steps). It took three of us sysadmins several days to figure out why it wasn't working, and several more to write a script that could handle every scenario. Some computers had SATA SSDs, some NVMe, some both, some SSD+HDD; the block device names (sda, sdb...) were never consistent, and some machines reported their HDDs to sysfs as SSDs. I ended up dissecting the ISO and came up with a solution that only required a single Enter key press to start and did everything else automatically.

[-] cmnybo@discuss.tchncs.de 5 points 8 months ago

I never really had a need for the features provided by Clonezilla. I've always just used dd since it's available on any Linux live disk. Unless I'm making an image for data recovery, I zero the free space and pipe the dd output through gzip to avoid wasting space.
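
Roughly what that looks like, for reference (assuming the disk is /dev/sdX with its filesystem mounted at /mnt/disk; adjust to your setup):

dd if=/dev/zero of=/mnt/disk/zero.fill bs=1M; rm /mnt/disk/zero.fill   # zero the free space so it compresses away
umount /mnt/disk
dd if=/dev/sdX bs=1M status=progress | gzip -c > /backup/sdX.img.gz    # image the whole disk through gzip
gunzip -c /backup/sdX.img.gz | dd of=/dev/sdX bs=1M                    # restore later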

[-] Max_P@lemmy.max-p.me 5 points 8 months ago

The big advantage of Clonezilla or dd is that you make a perfect 1:1 copy of the disk, so you can be pretty confident it will restore perfectly, but you need a target disk of at least the same size. It's also ideal if you're trying to do file recovery, because even corrupted or entirely unreachable data is still technically on the disk.

That's very inefficient when you have, say, 5 GB used on a 1 TB disk, although compression will help a bit. That's where more specialized tools come in: what if we could back up only the actual data and end up with a 5 GB backup before compression?

That's useful and nice, but it can't possibly deal with corrupted or deleted files since it'll just skip over them. The backup is only as good as all the filesystem features the archiver can encode. On Linux, tar has us pretty well covered as long as you only need relatively standard features like owners and groups. If you zip your root Linux partition you'll end up with broken ownership and permissions, because zip doesn't encode ACLs, xattrs, hardlinks and whatever else. On NTFS, since it's proprietary, undocumented and a fairly complex filesystem, it's much riskier. If you back up your game library, you're probably fine, but if you want Windows to boot after a restore, you need a much more complete backup, and if you don't want to take risks, whole-partition backups are much safer. ntfsclone exists, but I just don't trust it the way I trust tar to back up my ext4 partitions correctly.
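
For illustration, the kind of tar invocation that keeps that metadata (GNU tar; the paths are just examples, and --acls/--xattrs only work if your tar build supports them):

tar --acls --xattrs --numeric-owner --one-file-system -czpf /backup/root.tar.gz -C / .   # archive / preserving ACLs, xattrs, hard links
tar --acls --xattrs --numeric-owner -xpf /backup/root.tar.gz -C /mnt/target              # restore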

So it's all a tradeoff. Do you want efficiency, or do you want reliability? How much of the information can you afford to lose? If you back up your C: drive on Windows but only care about your files and documents, not the Windows install itself, then it makes sense to just archive the files rather than take a block copy.

So, what do you expect from your backups? The answer to that question also answers this thread.

[-] BCsven@lemmy.ca 1 points 8 months ago

For a large drive with only partial data, you can make dd quicker by shrinking the partition first. Then use fdisk to get the byte size of a cylinder (cylinders × bytes) from the header output, and the number of units at the end of the partition. You then run dd with bs=(cylinder bytes) and count=(units + 1) so dd stops at the last block of the partition. Once copied, you can resize the partition again. That's how I fit a duplicate of my NAS OS image onto a 4 GB USB stick for redeployment: dd is faster that way, and you resize the partitions afterwards.

[-] Max_P@lemmy.max-p.me 2 points 8 months ago

That... seems pretty unsafe. If I'm taking a backup, I definitely would avoid resizing it or making any modifications to it during the backup process. What if the resize fails and is the reason you need to restore from backup in the first place?

I guess it's a handy hack in use cases like yours, or if the backup is a convenience, but it's important to understand the risks and whether you're better off with filesystem level tools.

[-] BCsven@lemmy.ca 1 points 8 months ago

I'm sure there is potential risk, it just hasn't been a problem on my end. Just putting it out there as an option if you don't want to clone a full 16 TB drive and want to fit the image on a drive that suits it.

[-] BCsven@lemmy.ca 1 points 8 months ago* (last edited 8 months ago)

Reposted from a Server Fault thread, author plasmapotential. Note: fdisk -l -u=cylinders /dev/sdX will output cylinder info if it doesn't by default.

Use dd, with the count option.

In your case you were using fdisk so I will take that approach. Your "sudo fdisk -l" produced:

Disk /dev/sda: 64.0 GB, 64023257088 bytes
255 heads, 63 sectors/track, 7783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000e4b5

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          27      209920   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              27         525     4000768    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sda5              27         353     2621440   83  Linux
/dev/sda6             353         405      416768   83  Linux
/dev/sda7             405         490      675840   83  Linux
/dev/sda8             490         525      282624   83  Linux

The two things you should take note of are 1) the unit size, and 2) the "End" column. In your case you have cylinders that are equal to 8225280 bytes. In the "End" column, sda8 terminates at 525 (which is 525 units × 16065 × 512 bytes ≈ 4.3 GB).

dd can do a lot of things, such as starting after an offset, or stopping after a specific number of blocks. We will do the latter using the count option in dd. The command would appear as follows:

sudo dd if=/dev/sda of=/your_directory/image_name.iso bs=8225280 count=526

Where bs is the block size (it is easiest to use the unit that fdisk uses, but any unit will do so long as the count option is declared in those units), and count is the number of units we want to copy (note that we increment the count by one to capture the last block).

[-] Revan343@lemmy.ca 1 points 8 months ago

You'd probably be better off with dd if=/dev/zero of=file.zero to zero out empty space, dd copy the whole drive, then compress the copy. I wouldn't fuck around with partitions on something I want to back up

[-] BCsven@lemmy.ca 1 points 8 months ago

For sure, but in my case I didn't want a compressed copy, I wanted a working, fully functional drive image.

[-] Revan343@lemmy.ca 1 points 8 months ago

Probably safer to image the whole partition and then shrink the image. Not sure exactly how I'd go about it, but I'm sure it's not too bad; probably three arcane shell commands.
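
It really is about three commands for an ext4 partition image, if I remember right (a sketch; assumes part.img is a raw ext4 partition image with 4K blocks):

e2fsck -f part.img                         # resize2fs insists on a clean fsck first
resize2fs -M part.img                      # shrink the filesystem to its minimum size; note the block count it prints
truncate -s $((BLOCKS * 4096)) part.img    # cut the file down to that many 4K blocks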

[-] BCsven@lemmy.ca 1 points 8 months ago

Yes, zeroing free space and compressing. In my case I was building a direct clone backup so that if the NAS fails I can swap the drive in immediately, but I did not want to wait hours for dd to copy the mostly empty drive to an image file.

[-] mvirts@lemmy.world 5 points 8 months ago

Idk... but I'm sure you can use pretty much any live distro with partclone.

[-] strax@kbin.social 4 points 8 months ago

Yeah, partclone is the tool that Clonezilla uses under the hood. I find that using partclone directly is easier.
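
Something like this, if memory serves (partclone.ext4 for an ext4 partition; the device and file names are just examples):

partclone.ext4 -c -s /dev/sda1 -o /backup/sda1.pcl    # clone only the used blocks
partclone.ext4 -r -s /backup/sda1.pcl -o /dev/sda1    # restore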

[-] Penguincoder@beehaw.org 4 points 8 months ago

Clonezilla has its place, but not as a main backup and restoration tool. I personally don't see it as a backup tool, especially since it operates at the partition level. What you want is your base install system plus file-level backups for your data (/home, etc.). For the file-level backups, use something like restic. Back up what you need to go from a fresh install to a system with your data back on it; packages can be reinstalled.

Restic is my primary backup for all my devices. If I need something more than going from a fresh ISO to a system with my data, I use Packer.

[-] yo_scottie_oh@lemmy.ml 1 points 8 months ago

I noticed for file level backups you mentioned something other than rsync. Any particular reason why you landed on restic instead?

[-] Penguincoder@beehaw.org 1 points 8 months ago

Because they serve different purposes. rsync is for moving data around and keeping it synchronized; it has no concept of point-in-time restoration, snapshots, etc., which are what really define a backup solution. I use restic because it's the proper tool for the job.

[-] yo_scottie_oh@lemmy.ml 2 points 8 months ago

point in time restoration, or snapshots

Do you mean like not just having another copy of a file, but being able to restore a specific version of a file?

[-] Penguincoder@beehaw.org 2 points 8 months ago

There's a lot more going on with restic aside from just that, but yes. With an rsync of your home dir (for example), you're reliant on the filesystem to do compression and deduplication (ZFS, btrfs), and/or it will still take up a lot of wasted space. Say you got ransomwared. It's okay, you have that rsync backup, but oh crap, it got ransomwared too. No more backups to try? Restic gives you snapshots at whatever increment you set and just handles it simply. You can then restore one file from any of the snapshots (history), or every single file. Restoring 250 KB vs 400 TB is quite a difference. The benefits of this are huge, even beyond the fire-and-forget capability.

I mean, rsync mirroring everything to a ZFS filesystem would be sort of the same thing.
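
For anyone curious, the basic restic flow looks like this (the repo path and source dir are just examples):

restic -r /mnt/backup/restic-repo init                          # create the repository once
restic -r /mnt/backup/restic-repo backup /home/user             # each run adds a deduplicated snapshot
restic -r /mnt/backup/restic-repo snapshots                     # list the point-in-time history
restic -r /mnt/backup/restic-repo restore latest --target /tmp/restore   # pull back a whole snapshot, or use --include for single files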

[-] PainInTheAES@lemmy.world 4 points 8 months ago

I use Kopia; it's more automated and deduplicates snapshots.

[-] MangoPenguin 9 points 8 months ago

Not the same, as it doesn't make an image of the system.

[-] PainInTheAES@lemmy.world 1 points 8 months ago

Ah I missed the partitions part

[-] WalrusDragonOnABike@reddthat.com 4 points 8 months ago* (last edited 8 months ago)

Used it for cloning some laptops recently without much issue. Cloned one laptop's primary partition onto an SD card and then imaged the others, no problem. The laptops were 256 GB capacity (but only around 30-60 GB used) and the SD card was 64 GB. Seemed pretty simple to me.

There are a lot of options for those who want to do things like deploy over a network, but I haven't messed with them seriously (I didn't have the ethernet cables to do it; I wasted a bit of time trying before realizing the machines weren't connected to a network; maybe there's a way to connect via wifi, but I didn't see it).

[-] makeasnek@lemmy.ml 4 points 8 months ago* (last edited 8 months ago)

The fact that Linux lacks a decent system-level backup tool with a GUI is kind of a mind-boggler for me. The best one I've found that comes close is Timeshift. File-level backups can't restore your whole system state, and users shouldn't be expected to remember or manually export their package lists and god knows what else. I have subsisted on file-only backups, but it's really not great as a solution. Disks fail, and when they do, you inevitably have to reinstall the entire OS. It's a mess. RAID1 could theoretically prevent this, but no distro makes it easy to boot from a RAID1 setup.
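
The manual package-list dance looks something like this on a Debian/Ubuntu-based system, just to show the kind of thing users shouldn't need to remember:

dpkg --get-selections > packages.list          # export the list of installed packages
sudo dpkg --set-selections < packages.list     # feed it back in on the fresh install
sudo apt-get dselect-upgrade                   # install everything that was selected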

Backing up the entire filesystem is not a technically complex thing; there are plenty of command-line tools to do it, and some filesystems even support the concept natively via snapshots and the like. But this has yet to be put into useful practice for end users.

[-] vintageballs@feddit.de 2 points 8 months ago

There is btrfs-assistant, for example.

[-] corsicanguppy@lemmy.ca 1 points 8 months ago

with a GUI

Look for one without a GUI and learn its command-line, and you're done.

[-] HouseWolf@lemm.ee 3 points 8 months ago

I've used Clonezilla recently to clone my main 1 TB drive as well as a 4 TB backup drive to an external HDD, and both times it worked fine.

It is painfully slow, however, and I'm not sure I could do anything about that short of buying faster drives.

[-] Pantherina@feddit.de 3 points 8 months ago* (last edited 8 months ago)

Yes, it works great! Used it to clone a Windows user's stuff; he thought having a dozen partitions made sense, and it was still no problem at all. Copied everything from an HDD to a bigger SSD, and it just worked.

You download the ISO and flash it to a USB stick (we used Rufus, but dd, Impression (a udisks2 frontend in GTK and Rust) or balenaEtcher should also work). The TUI is usable and has some options, but the defaults seem good.
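
If you take the dd route it's a one-liner, just double-check the device first since it erases the stick (the ISO name and /dev/sdX are examples):

lsblk                                                                  # find the USB stick
sudo dd if=clonezilla-live.iso of=/dev/sdX bs=4M status=progress oflag=sync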

[-] electric_nan@lemmy.ml 2 points 8 months ago

Just use disk destroyer.

[-] Fredol@lemmy.world 2 points 8 months ago

Rescuezilla is much better

[-] const_void@lemmy.ml 2 points 8 months ago

Also interested in this. Currently in need of an imaging solution that's less clunky to use than Clonezilla.

[-] oleorun@real.lemmy.fan 1 points 8 months ago* (last edited 8 months ago)

MDT works well for Windows environments. Otherwise dd or Clonezilla for Linux.

[-] BCsven@lemmy.ca 2 points 8 months ago

Clonezilla or dd. If you are on GNOME you can use GNOME Disks; it has create disk image and restore disk image options, if you want an img file.

[-] krakenfury@lemmy.sdf.org 2 points 8 months ago

I'd recommend just scripting rsync commands and running them with cron or whatever scheduling automation you prefer. Back up locally to an external drive, or orchestrate with a cloud provider's CLI tools for something like S3.
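
A bare-bones version of that, for illustration (the paths are placeholders):

rsync -aAX --delete /home/ /mnt/external/home-backup/     # mirror /home to an external drive, keeping ACLs and xattrs
30 2 * * * rsync -aAX --delete /home/ /mnt/external/home-backup/ >> /var/log/home-backup.log 2>&1   # crontab entry for a nightly run at 02:30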

There are some tools that probably assist with this, but it's just very few moving parts to roll your own. Clonezilla seems overkill and harder to automate, but I will admit I'm not an expert with it.

[-] Terces@lemmy.world 1 points 8 months ago

Clonezilla is the tool I use after all else has failed. I agree that it is difficult to use, but it can do things others can't. I saved quite a few of my drives with this thing. So while I try to avoid having to use it, it still belongs in my toolkit.

[-] lps@lemmy.ml 1 points 8 months ago

Rescuezilla is nice. I believe it just puts a more user friendly GUI on clonezilla

[-] utopiah@lemmy.ml 1 points 8 months ago

Others have mentioned rsync, and on top of that I'd suggest rdiff-backup, but it is indeed for files, not partitions or disks. That being said, IMHO if you are not managing data centers and thus swapping entire physical disks by the bucket, you probably don't need to care about the disks themselves.
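
For reference, rdiff-backup keeps reverse increments so you get history as well as a mirror (syntax from memory; the paths are examples):

rdiff-backup /home/user /mnt/backup/home           # each run stores a new increment
rdiff-backup --list-increments /mnt/backup/home    # see the available restore points
rdiff-backup -r 3D /mnt/backup/home /tmp/restore   # restore the state from three days ago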

If you genuinely have to frequently change not just data but entire systems, maybe looking at nix or cloud-init could help.
