
About a year ago I switched to ZFS for Proxmox so that I wouldn't be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can't downgrade the kernel, plus the performance on my hardware is abysmal: I get only 50-100 MB/s vs. the several hundred I would get with btrfs.

Any reason I shouldn't go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering or using btrfs?

[-] cmnybo@discuss.tchncs.de 59 points 1 month ago

Don't use btrfs if you need RAID 5 or 6.

The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

[-] lurklurk@lemmy.world 11 points 1 month ago

Or run the raid 5 or 6 separately, with hardware raid or mdadm

Even for simple mirroring there's an argument to be made for running it separately from btrfs using mdadm. You do lose the benefit of btrfs being able to automatically pick the valid copy on localised corruption, but the admin tools are easier to use and more proven in a case of full disk failure, and if you run an encrypted block device you need to encrypt half as much stuff.
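
To sketch that separation (device names and mount point are placeholders; this assumes two empty disks):

```shell
# Build a conventional RAID 1 mirror with mdadm...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# ...then put a single-device btrfs on top of the md device
mkfs.btrfs /dev/md0
mount /dev/md0 /mnt/data
```

Checksumming still *detects* corruption with this layout; btrfs just can't self-heal from the second copy, because mdadm hides it.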

[-] Eideen@lemmy.world 4 points 1 month ago

I have no problem running it with raid 5/6. The important thing is to have a UPS.

[-] Bookmeat@lemmy.world 58 points 1 month ago

A bit off topic: am I the only one who pronounces it "butterface"?

[-] wrekone@lemmy.dbzer0.com 59 points 1 month ago
[-] myersguy@lemmy.simpl.website 37 points 1 month ago

You son of a bitch, I'm in.

[-] uhmbah@lemmy.ca 18 points 1 month ago

Ah feck. Not any more.

[-] prole 6 points 1 month ago

Isn't it meant to be like "better FS"? So you're not too far off.

[-] Asparagus0098@sh.itjust.works 10 points 1 month ago

i call it "butter FS"

[-] combatfrog@sopuli.xyz 2 points 1 month ago

Similarly, I read bcachefs as BCA Chefs 😅

[-] vividspecter@lemm.ee 38 points 1 month ago

No reason not to. Old reputations die hard, but it's been many many years since I've had an issue.

I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc.

I'll add to avoid RAID5/6 as that is still not considered safe, but you mentioned RAID1 which has no issues.
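
For reference, the ad hoc upgrade is a two-step operation (device and mount point are placeholders):

```shell
# Add a new disk to an existing, mounted btrfs filesystem
btrfs device add /dev/sdX /mnt/pool

# Rebalance so existing data spreads across all devices,
# keeping the raid1 profile for both data and metadata
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```

Removing a disk is the mirror image (`btrfs device remove`), which ZFS only recently gained an equivalent for.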

[-] TwiddleTwaddle 3 points 1 month ago

I've been vaguely planning on using btrfs in raid5 for my next storage upgrade. Is it really so bad?

[-] vividspecter@lemm.ee 9 points 1 month ago

Check status here. It looks like it may be a little better than the past, but I'm not sure I'd trust it.

An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn't the best idea for a system drive, but if it's something like a NAS it works well and snapraid-btrfs doesn't have the write hole issues that normal snapraid does since it operates on r/o snapshots instead of raw data.
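
As a rough illustration of that layout (all paths hypothetical), a minimal snapraid.conf might look like:

```
# Parity lives on its own disk, at least as large as the biggest data disk
parity /mnt/parity1/snapraid.parity

# Content files track array state; keep copies on several disks
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content

# Data disks, individually mounted and pooled together by mergerfs
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

snapraid-btrfs then points the `data` entries at read-only snapshots rather than the live subvolumes.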

[-] avidamoeba@lemmy.ca 25 points 1 month ago

You shouldn't have abysmal performance with ZFS. Something must be up.

[-] possiblylinux127@lemmy.zip 11 points 1 month ago* (last edited 1 month ago)

What's up is ZFS. It is solid but the architecture is very dated at this point.

There are about a hundred different settings I could try to change but at some point it is easier to go btrfs where it works out of the box.

[-] prenatal_confusion@feddit.org 21 points 1 month ago

Since most people with decently simple setups don't have the problem you describe, something is likely up with your setup.

Yes, it's old, and yes, it's complicated, but it doesn't have to be to get decent performance.

[-] possiblylinux127@lemmy.zip 3 points 1 month ago

I have been trying to get ZFS working well for months. Also I am not the only one having issues as I have seen lots of other posts about similar problems.

[-] prenatal_confusion@feddit.org 3 points 1 month ago

I don't doubt that you have problems with your setup. Given the large number of (simple) ZFS setups that are working flawlessly, there are bound to be a large number of issues to be found on the Internet. People who are discontent voice their opinion more often and loudly compared to the people who are satisfied.

[-] avidamoeba@lemmy.ca 5 points 1 month ago

What seems dated in its architecture? Last time I looked at it, it struck me as pretty modern compared to what's in use today.

[-] possiblylinux127@lemmy.zip 3 points 1 month ago* (last edited 1 month ago)

It doesn't share well. Any time anything IO-heavy happens, the system completely locks up.

That doesn't happen on other systems

[-] sem 17 points 1 month ago

Btrfs came default with my new Synology, where I have it in Synology's RAID config (similar to RAID 1, I think), and I haven't had any problems.

I don't recommend the btrfs drivers for Windows 10. I had a drive using them and it would often become unreachable under load, but this is more a Windows problem than a problem with btrfs.

[-] domi@lemmy.secnd.me 16 points 1 month ago

btrfs has been the default filesystem for Fedora Workstation since Fedora 33, so there's not much reason not to use it.

[-] exu@feditown.com 16 points 1 month ago

Did you set the correct block size for your disk? Especially modern SSDs like to pretend they have 512B sectors for some compatibility reason, while the hardware can only do 4k sectors. Make sure to set ashift=12.
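
To check what the disks report and force 4K alignment at pool creation (pool and device names are examples; ashift cannot be changed after a vdev is created):

```shell
# Logical vs. physical sector size as reported by each disk
lsblk -o NAME,LOG-SEC,PHY-SEC

# Force 4K sectors (2^12 = 4096) when creating the pool
zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY

# Verify afterwards
zpool get ashift tank
```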

Proxmox also uses a very small volblocksize by default. This mostly applies to RAIDz, but try using a higher value like 64k. (Default on Proxmox is 8k or 16k on newer versions)

https://discourse.practicalzfs.com/t/psa-raidz2-proxmox-efficiency-performance/1694
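
If I remember the Proxmox tooling right, the storage-level default can be changed with pvesm (storage and dataset names below are placeholders; only newly created disks pick up the new value):

```shell
# Set a larger default volblocksize for new zvols on this storage
pvesm set local-zfs --blocksize 64k

# Existing zvols keep their old value; check one with:
zfs get volblocksize rpool/data/vm-100-disk-0
```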

[-] suzune@ani.social 10 points 1 month ago

The question is how do you get a bad performance with ZFS?

I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.

The fourth run (obviously cached) gave me over 3.8 GB/s.
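
For anyone wanting to reproduce that kind of measurement, a rough sequential read test (file path is a placeholder):

```shell
# Clear the Linux page cache first (note: this does NOT clear the
# ZFS ARC; exporting and re-importing the pool is one way to do that)
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

# Sequential read of a large file; rerun it to see the cached speed
dd if=/tank/bigfile of=/dev/null bs=1M status=progress
```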

[-] possiblylinux127@lemmy.zip 2 points 1 month ago* (last edited 1 month ago)

I have never heard of anyone getting those speeds without dedicated high-end hardware.

Also, writes will always be your bottleneck.

[-] Moonrise2473@feddit.it 5 points 1 month ago

I have similar speeds on a truenas that I installed on a simple i3 8100

[-] suzune@ani.social 5 points 1 month ago* (last edited 1 month ago)

This is an old PC (Intel i7 3770K) with 2 HDDs (16 TB) attached to onboard SATA3 controller, 16 GB RAM and 1 SSD (120 GB). Nothing special. And it's quite busy because it's my home server with a VM and containers.

[-] zarenki@lemmy.ml 10 points 1 month ago

I've been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (maybe others) similar to separate filesystems without actually being different partitions.
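
That subvolume setup is roughly the following (the @/@home naming is just a common convention, not a requirement; device is a placeholder):

```shell
# On a freshly formatted btrfs partition, create one subvolume
# per "filesystem" you want to treat separately
mount /dev/sdX2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home

# fstab then mounts each subvolume at its own path:
# /dev/sdX2  /      btrfs  subvol=@      0 0
# /dev/sdX2  /home  btrfs  subvol=@home  0 0
```

Each subvolume can be snapshotted independently, which is the payoff over plain directories.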

I had used it for my NAS array too, with btrfs raid1 (on top of luks), but migrated that over to ZFS a couple years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed to be in purgatory of never being fixed, so I moved to raidz1 instead.

One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn't do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive was the same size, and I felt confident it would be enough space to last me a long time, since growing it after the fact is a burden.
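
That mixed-size arithmetic can be modelled with a small sketch (this mirrors the usual heuristic for the btrfs raid1 allocator: every chunk needs two copies on two different devices):

```python
def btrfs_raid1_usable(disk_sizes):
    """Approximate usable capacity of a btrfs raid1 array with mixed disk sizes.

    Capacity is capped by half the raw total, and also by how much of the
    largest disk can find a mirror partner among the remaining disks.
    """
    total = sum(disk_sizes)
    return min(total / 2, total - max(disk_sizes))

# The array from the comment: two 12TB, two 8TB, one 4TB
print(btrfs_raid1_usable([12, 12, 8, 8, 4]))  # 22.0 TB effective
```

With one 10TB disk and one 2TB disk the second cap bites: only 2TB is usable, since the big disk has nowhere to mirror the rest.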

[-] possiblylinux127@lemmy.zip 3 points 1 month ago

Btrfs RAID 10 is reportedly stable.

[-] stuner@lemmy.world 2 points 1 month ago

With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS usecase.

[-] nichtburningturtle@feddit.org 8 points 1 month ago

Haven't had any btrfs problems yet; in fact, CoW saved me a few times on my desktop.

[-] SRo@lemmy.dbzer0.com 8 points 1 month ago

One time I had a power outage and one of my btrfs HDDs (not in a RAID) couldn't be read anymore after reboot. Even with help from the official btrfs mailing list, it was impossible to repair the filesystem. After a lot of low-level tinkering I was able to retrieve the files, but the filesystem itself was completely broken; no repair process was possible. I've since switched to ZFS; the emergency options are much more capable.

[-] possiblylinux127@lemmy.zip 6 points 1 month ago

Was that less than 2 years ago? Were you using kernel 5.15 or newer?

[-] SRo@lemmy.dbzer0.com 6 points 1 month ago

Yes, that was May/June 2023, and I was on a 6.x kernel.

[-] just_another_person@lemmy.world 7 points 1 month ago

If it didn't give you problems, go for it. I've run it for years and never had issues either.

[-] Moonrise2473@feddit.it 6 points 1 month ago

One day I had a power outage and I wasn't able to mount the btrfs system disk anymore. I could mount it on another Linux system, but I wasn't able to boot from it anymore. I was very pissed; I lost a whole day of work.

[-] Philippe23@lemmy.ca 3 points 1 month ago
[-] Moonrise2473@feddit.it 2 points 1 month ago

I think 5 years ago, on Ubuntu

[-] fmstrat@lemmy.nowsci.com 5 points 1 month ago

What kind of disks, and how is your ZFS set up? Something seems amiss here.

[-] SendMePhotos@lemmy.world 3 points 1 month ago

I run it now because I wanted to try it. I haven't had any issues. A friend recommended it as a stable option.

[-] bruhduh@lemmy.world 3 points 1 month ago

RAID 5/6: only bcachefs will solve that.

[-] tripflag@lemmy.world 3 points 1 month ago

Not Proxmox-specific, but I've been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it's bugged out were due to bad hardware, and having the filesystem shout at me to make me aware of that was fantastic.

The only place I don't use btrfs is my NAS data drives (since I want raidz2, and btrfs raid5 is hella shady), but the NAS rootfs is btrfs.

[-] catloaf@lemm.ee 2 points 1 month ago

Meh. I run proxmox and other boot drives on ext4, data drives on xfs. I don't have any need for additional features in btrfs. Shrinking would be nice, so maybe someday I'll use ext4 for data too.

I started with ZFS instead of RAID, but I found I spent way too much time trying to manage RAM and tuning, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.

You can benchmark them if you care about performance. You can find plenty of discussion by googling "ext vs xfs vs btrfs" or whichever ones you're considering. They haven't changed that much in the past few years.

this post was submitted on 23 Nov 2024
107 points (100.0% liked)

Selfhosted
