About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no trouble for years, and I even replaced a dying disk without issue. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, plus the performance on my hardware is abysmal: I get only 50-100 MB/s versus the several hundred I would get with Btrfs.
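(For context, a rough sequential-write check along these lines is one way to put a number on it; treat it as a sketch rather than my exact test, and adjust the path and sizes to your own pool.)

```python
# Rough sequential-write throughput check (a sketch, not a rigorous benchmark).
# Assumptions: /tank/benchfile is a placeholder path on the pool under test and
# there is ~2 GiB of free space; fsync() at the end keeps the page cache from
# inflating the number.
import os
import time

TARGET = "/tank/benchfile"   # placeholder path on the filesystem being tested
BLOCK = 1024 * 1024          # 1 MiB per write
TOTAL = 2 * 1024**3          # 2 GiB in total

buf = os.urandom(BLOCK)      # random data, so compression doesn't skew the result
start = time.monotonic()
with open(TARGET, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())     # make sure the data actually reached the disks
elapsed = time.monotonic() - start

print(f"{TOTAL / elapsed / 1024**2:.0f} MiB/s sequential write")
os.remove(TARGET)
```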

Any reason I shouldn’t go back to Btrfs? There seems to be a community fear of Btrfs eating data or throwing unexplainable errors. That is sad to hear, as Btrfs has had plenty of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering or using Btrfs?

Avid Amoeba

You shouldn’t have abysmal performance with ZFS. Something must be up.

Possibly linux
creator

What’s up is ZFS. It is solid but the architecture is very dated at this point.

There are about a hundred different settings I could try changing, but at some point it is easier to go with Btrfs, which works out of the box.

Since most people with reasonably simple setups don’t have the problem you describe, it’s likely something is up with your setup.

Yes, it’s old, and yes, it’s complicated, but it doesn’t have to be complicated to get decent performance.

Possibly linux
creator

I have been trying to get ZFS working well for months. I’m also not the only one having issues; I have seen plenty of other posts about similar problems.

I don’t doubt that you have problems with your setup. Given the large number of (simple) ZFS setups that are working flawlessly, there are bound to be plenty of issue reports on the Internet as well. People who are discontent voice their opinions more often and more loudly than people who are satisfied.

Avid Amoeba

I used to run a mirror for a while with WD USB disks and didn’t notice any performance problems. I used Ubuntu LTS, which ships a built-in ZFS module rather than DKMS, although I doubt any performance problems stem from DKMS.

Avid Amoeba

What seems dated in its architecture? Last time I looked at it, it struck me as pretty modern compared to what’s in use today.

Possibly linux
creator

It doesn’t share well. Any time anything I/O-heavy happens, the system completely locks up.

That doesn’t happen on other systems.

Avid Amoeba

That doesn’t speak much of the architecture. It’s also really odd. I’m not denying that what you’re seeing is happening, just that it seems odd based on the setups I run with ZFS. My main server is in fact a shared machine that I use as a workstation and gaming box as well as a server, all working in parallel. I used to have a mirror, then a 4-disk RAIDz, and now an 8-disk RAIDz2. I have multiple applications constantly using the pool. I don’t notice any performance slowdowns on the desktop, or in-game when IO goes high. The only time I notice anything is when something like multiple Plex transcoders hit the CPU hard. Sequential performance is around 1.3 GB/s, which is limited by the data bus speeds (USB DAS boxes). Random performance is very good, although I don’t have any numbers off the top of my head. I’m using mostly WD Elements shucked disks and a couple of IronWolfs. No enterprise-grade disks on this system.

I’m also not saying that you have to keep fucking around with it instead of going Btrfs. I’m simply adding another anecdote to the picture. If I had a serious problem like that and couldn’t figure it out, I’d be on LVMRAID+Ext4, which is what I used prior to ZFS.

Possibly linux
creator
link
fedilink
English
01M

Yeah maybe my machines are cursed

Avid Amoeba

That is totally possible. I spent a month swapping boards and CPUs to fix a curse on my main machine, unrelated to storage, in case you’re curious.

Avid Amoeba

I feel like this one flew right over my head. 🥹

I doubt that. Some options:

  • bad memory
  • failing drives
  • silent CPU faults
  • poor power delivery

The list is endless. Maybe Btrfs is more tolerant of the problems you’re facing, but that doesn’t mean the problems are specific to ZFS. I recommend doing a bit of testing to see if everything looks fine on the hardware side of things (memtest, SMART tests, etc.).
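If it helps, the SMART part is easy to script. A minimal sketch, assuming smartmontools is installed, root privileges, and disks that show up as /dev/sda through /dev/sdz:

```python
# Quick SMART health pass over all SATA/SAS disks (a sketch).
# Assumes smartmontools is installed and the script runs as root.
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):          # /dev/sda .. /dev/sdz
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    lines = result.stdout.strip().splitlines()
    # The overall verdict ("... self-assessment test result: PASSED") is the
    # last line of `smartctl -H` output; fall back to stderr if there is none.
    verdict = lines[-1] if lines else result.stderr.strip()
    print(f"{dev}: {verdict}")
```

Long self-tests (smartctl -t long) and a memtest pass still have to be kicked off separately.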

Possibly linux
creator
link
fedilink
English
41M

I set the ARC cache to 4 GB and it is working better now.
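For anyone else who hits this, the knob involved is the zfs_arc_max module parameter. A minimal sketch of capping it at 4 GiB at runtime and checking what the ARC is actually using (assumes root and a loaded zfs module; treat it as illustrative rather than the exact steps I ran):

```python
# Cap the ZFS ARC at 4 GiB at runtime and report the current ARC size (a sketch).
# Assumes the zfs module is loaded and the script runs as root. The runtime
# setting is lost on reboot; to persist it, also add
#   options zfs zfs_arc_max=4294967296
# to /etc/modprobe.d/zfs.conf (and refresh the initramfs on Proxmox).
from pathlib import Path

ARC_MAX = 4 * 1024**3  # 4 GiB in bytes

Path("/sys/module/zfs/parameters/zfs_arc_max").write_text(f"{ARC_MAX}\n")

# arcstats exposes the current ARC size under the "size" entry.
for line in Path("/proc/spl/kstat/zfs/arcstats").read_text().splitlines():
    if line.startswith("size "):
        _name, _type, value = line.split()
        print(f"current ARC size: {int(value) / 1024**2:.0f} MiB")
        break
```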

You have angered the zfs gods!

Possibly linux
creator

I have gotten a ton of people to help me. Sometimes it is easier to piss people off to gather info and usage tips.

@jj4211@lemmy.world

You’ve been downvoted, but I’ve seen a fair share of ZFS implementations confirm your assessment.

E.g. “Don’t use ZFS if you care about performance, especially on SSD” is a fairly common refrain in response to anyone asking about how to get the best performance out of their solution.
