A year ago I set up an Ubuntu server with 3 ZFS pools. I normally don’t copy very large files, but today I was copying a ~30GB directory and rsync showed the transfer never exceeding 3 MB/s (cp is also very slow).

What is the best file system that “just works”? I’m thinking of migrating everything to ext4

EDIT: I really like the automatic pool recovery feature in ZFS; it has saved me from one hard drive failure so far.

SayCyberOnceMore

Where are you copying to / from?

Duplicating a folder on the same NAS on the same filesystem? Or copying over the network?

For example, some devices have really fast file transfers until a buffer fills up, and then it crawls.

Rsync might not be the correct tool either if you’re duplicating everything to an empty destination…?

@Trincapinones@lemmy.world
creator

Same NAS, same filesystem on an SSD without redundancy

SayCyberOnceMore

Still the same, or has it solved itself?

If it’s lots of small files, rather than a few large ones? That’ll be the file allocation table and / or journal…

A few large files? Not sure… something’s getting in the way.

ptman

How full is your ZFS pool? ZFS doesn’t handle nearly-full pools and fragmentation well.

@Trincapinones@lemmy.world
creator

Around 70% full with 10% fragmentation

Atemu

At around 70%, fragmentation issues start becoming apparent with ZFS IIRC. Though they shouldn’t be this apparent.

@ikidd@lemmy.world

Use zfs send/receive instead of rsync. If it’s still slow, it’s probably SMR drives.
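For duplicating a dataset locally, the send/receive route looks roughly like this (a minimal sketch; pool and dataset names are placeholders):

    # snapshot the source dataset, then replicate the snapshot to a new dataset
    zfs snapshot tank/data@copy1
    zfs send tank/data@copy1 | zfs recv tank/data-copy

Because it streams at the block level, it avoids rsync’s per-file overhead.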

Morethanevil

Ext4 does not have snapshots, COW or similar features. I am very happy with BTRFS. It just “works” out of the box.

FWIW, LVM can give you snapshots and other features, and mdadm can be used for RAID. All very robust tools.

Morethanevil

Yes, but BTRFS can do this out of the box without extra tools. Both approaches have their own advantages, but I would still prefer BTRFS.

@Paragone@lemmy.world

I’m on BTRFS, and wish I wasn’t.

Booting from a degraded mdadm RAID1 just works,

whereas booting from a degraded BTRFS RAID1 requires competent manual intervention and special parameters passed to the boot kernel.

mdadm & lvm, with a fixed version of ZFS would be my preference.

ZFS recently had a bug discovered that was silently corrupting data, and I HOPE a fix has landed by now.

Lemme see if I can find something on both of these points…


https://linuxnatives.net/2015/using-raid-btrfs-recovering-broken-disks

https://www.theregister.com/2023/11/27/openzfs_2_2_0_data_corruption/


_ /\ _

Morethanevil

Never had problems, but I wish you all the best for your ZFS problem 🤗

@Fisch@lemmy.ml

I use BTRFS on everything too nowadays. The thing that made me switch everything to BTRFS was filesystem compression.

Morethanevil

Yes compression is cool. Zstd level 3 to 6 is very quick too 😋

@Fisch@lemmy.ml

I use zstd too, didn’t specify a level tho, so it’s just using the default. I only use like ⅔ of the disk space I used before and I don’t feel any difference in performance at all.
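For anyone wanting to try it, btrfs compression is typically enabled as a mount option; a rough /etc/fstab sketch (UUID and mount point are placeholders, and zstd defaults to level 3):

    UUID=xxxx-xxxx-xxxx  /data  btrfs  defaults,compress=zstd:3  0  0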

@TCB13@lemmy.world

Yes, and BTRFS, unlike Ext4, will not get corrupted on the first power outage or slight hardware failure.

Wut? Ext4 is quite reliable.

@devfuuu@lemmy.world

Corruption on power loss regularly happened to me on XFS a few years ago. That made me swear to never use that FS again. I’ve never seen it on my ext4 systems, which are all I’ve run for years across multiple computers.

Possibly linux

I’ve run btrfs for years and never had an issue. The one time my system wouldn’t boot, it was due to a bad drive. I just swapped the drive, rebalanced, and was back up and running in less than half an hour.

@Eideen@lemmy.world

This will also happen to Ext4. You just wouldn’t know it.

@TCB13@lemmy.world

I’m confused by your answer. BTRFS is good and reliable. Ext4 gets fucked at the slightest issue.

SayCyberOnceMore

Never had an issue with EXT4.

Had a problem on a NAS where BTRFS was taking “too long” for systemd to check it, so it just didn’t get mounted… a bit of config tweaking and all is well again.

I use EXT* and BTRFS wherever I can because I can manipulate them with standard tools (including GParted).

I have one LVM system, which was interesting, but I wouldn’t do it that way again (I used it to add drives to a media PC).

And as for ZFS … I’d say it’s very similar to BTRFS, but just slightly too complex on Linux with all the licensing issues, etc. so I just can’t be bothered with it.

As a throw-away comment, I’d say ZFS is used by TrueNAS (not a problem, just sayin’…) and… that’s about it??

As to the OP’s original question, I agree with the others here… something’s not right there, but it’s probably not the filesystem.

@Eideen@lemmy.world

Yes, both BTRFS and Ext4 are vulnerable to unplanned power loss while writes are in flight, commonly known as a write hole.

For BTRFS, since it uses Copy on Write, it is more vulnerable, as metadata needs to be updated as well. Ext4 does not have CoW.

@TCB13@lemmy.world

For BTRFS, since it uses Copy on Write, it is more vulnerable, as metadata needs to be updated as well. Ext4 does not have CoW.

This is where theory and practice diverge, and I bet a lot of people here will essentially have the same experience I have. I will never run an Ext filesystem again, not ever, as I got burned multiple times with Ext shenanigans, both at home/homelab and in the datacenter. BTRFS, ZFS, and XFS are all far superior and more reliable.

@Eideen@lemmy.world

I run BTRFS myself.

And I agree, BTRFS is superior.

Atemu

Ext4 does not have CoW.

That’s the only true part of this comment.

As for everything else:

Ext4 uses journaling to ensure consistency.

btrfs’ CoW makes it resistant to that issue by its nature: writes go elsewhere anyway, so you can delay the “commit” until everything is truly written, and only then update the metadata (again using a CoW scheme).

Please read https://en.wikipedia.org/wiki/Journaling_file_system.

@Eideen@lemmy.world

BTRFS currently does not use journaling.

https://lore.kernel.org/linux-btrfs/20220513113826.GV18596@twin.jikos.cz/T/#m46f1e018485e6cb2ed42602defee5963ed8c2789

Qu Wenruo did a write-up on some of the edge cases, partial writes being one of them.

Atemu

What you just posted concerns the experimental RAID5/6 mode which, unlike all other block group modes, did not have CoW’s inherent safety.

As it stands, there is no stable RAID5/6 support in btrfs. If we’re talking about non-experimental usage of btrfs, it is irrelevant.

Possibly linux

ZFS will perform better on a NAS

ZFS is by far the best; just use TrueNAS, because Ubuntu is poor at supporting ZFS. Also, keep each vdev in your pool only 6-8 disks wide.
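To illustrate the width advice, a pool built from 6-disk raidz2 vdevs might be created along these lines (just a sketch; pool and device names are placeholders):

    zpool create tank \
        raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
        raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl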

@Trincapinones@lemmy.world
creator

I was thinking about switching to Debian (everything I host is in Docker, so that’s why), but the weird thing is that it was working perfectly a month ago.

Maybe your HBA is having issues? Or a drive is failing? Have you done a memtest? You may need to do system-wide tests; it could even be a failing PSU or a software bug.

Also, TrueNAS is built with Docker and uses it heavily (something like 106 apps). Debian has good ZFS support, but you will end up doing a lot of unneeded work with Debian unless you keep it simple.

@nezbyte@lemmy.world

MergerFS + SnapRAID is a really nice way to turn ext4 mounts into a single-entry-point NAS. OpenMediaVault has some plugins for setting this up. Performance-wise it will max out whichever drive you are using, and you can use cheap mismatched drives.
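For reference, the setup boils down to one mergerfs mount plus a SnapRAID config; a rough sketch (all paths and option choices are placeholders):

    # /etc/fstab: pool the ext4 data disks into one mount point
    /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0

    # /etc/snapraid.conf: one parity disk protecting the data disks
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2
    data d3 /mnt/disk3

Then run `snapraid sync` periodically to update parity.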

Kata1yst

ZFS is a very robust choice for a NAS. Many people, myself included, as well as hundreds of businesses across the globe, have used ZFS at scale for over a decade.

Attack the problem. Check your system logs, htop, zpool status.

When was the last time you ran a zpool scrub? Is there a scrub, or other zfs operation in progress? How many snapshots do you have? How much RAM vs disk space? Are you using ZFS deduplication? Compression?
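To make those checks concrete, something like this gives a quick overview (pool name is a placeholder):

    zpool status -v tank          # health, errors, and any scrub/resilver in progress
    zpool list -o name,size,allocated,capacity,fragmentation tank
    zfs list -t snapshot | wc -l  # rough count of snapshots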

@Trincapinones@lemmy.world
creator

I don’t even know what a zpool scrub is lol, do you have some resources to learn more about ZFS? A 1TB pool and two 500GB pools, with 32GB of RAM, no deduplication, and LZ4 compression.

Kata1yst

Yeah, you should be scrubbing weekly or monthly, depending on how often you use the data. A scrub basically touches every block, verifies the checksums, and proactively fixes any errors it finds. Basically preventative maintenance.
https://manpages.ubuntu.com/manpages/jammy/man8/zpool-scrub.8.html

Set that up in a cron job and check zpool status periodically.
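For example, a crontab entry along these lines scrubs every Sunday night (pool name is a placeholder):

    # min hour day month weekday  command
    0 3 * * 0  /usr/sbin/zpool scrub tank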

No dedup is good. LZ4 compression is good. RAM to disk ratio is generous.

Check your disks’ sector size and vdev ashift. On modern multi-TB HDDs you generally have a block size of 4k and want ashift=12. If this is set improperly, it can lead to massive write amplification, which will hurt throughput.
https://www.high-availability.com/docs/ZFS-Tuning-Guide/
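One way to check both, assuming an imported pool (pool and device names are placeholders):

    lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda   # physical/logical sector size of the disk
    zdb -C tank | grep ashift                # ashift actually used by the pool's vdevs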

How about snapshots? Do you have a bunch of old ones? I highly recommend setting up a snapshot manager to prune snapshots to just a working set (monthly keep 1-2, weekly keep 4, daily keep 6 etc) https://github.com/jimsalterjrs/sanoid
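With sanoid, that kind of retention policy goes in /etc/sanoid/sanoid.conf, roughly like this (dataset and template names are placeholders):

    [tank/data]
        use_template = working_set

    [template_working_set]
        daily = 6
        weekly = 4
        monthly = 2
        autosnap = yes
        autoprune = yes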

And to parrot another insightful comment, I also recommend checking disk health with SMART tests. In ZFS, as a drive begins to fail, the pool will get much slower while it constantly repairs the errors.

@Trincapinones@lemmy.world
creator

Wow that’s a lot of info, thank you!

@BobsAccountant@lemmy.world

Adding on to this:

These are all great points, but I wanted to share something that I wish I’d known before I spun up my array… The configuration of your array matters a lot. I had originally chosen to use RAIDZ1 as it’s the most efficient with capacity while still offering a little fault tolerance. This was a mistake, but in my defense, the hard data on this really wasn’t distributed until long after I had moved my large (for me) dataset to the array. I really wish I had gone with a Striped Mirror configuration. The benefits are pretty overwhelming:

  • Performance is better than even RAIDZ2, especially as individual disk size increases.
  • Fault tolerance is better, as you could have up to 50% of the disks fail, so long as at least one disk in each mirrored set remains functional.
  • Fault recovery is better. With traditional arrays with distributed chunks, you have to resilver (rebuild) the entire array, requiring more time, costing performance and shortening the life of the unaffected drives.
  • You can stripe mismatched sets of mirrored drives, so long as the mirrored set is identical, without having the array default to the size of the smallest member. This allows you to grow your array more organically, rather than having to replace every drive, one at a time, resilvering after each change.

Yes, you pay for these gains with less usable space, but platter drives are getting cheaper and cheaper, the trade seems more worth it than ever. Oh and I realize that it wasn’t obvious, but I am still using ZFS to manage the array, just not in a RAIDZn configuration.
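A striped-mirror pool like the one described above can be created and grown roughly like this (a sketch only; pool and device names are placeholders):

    # two mirrored pairs striped together
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    # grow the pool later by adding another mirrored pair (sizes can differ between pairs)
    zpool add tank mirror /dev/sde /dev/sdf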

@Trincapinones@lemmy.world
creator

Thanks for all the help!

I don’t have any redundancy. My system has an SSD (the one being slow) and two 500GB HDDs; on the HDDs I only have movies and shows, so I don’t care if that goes bad.

I have a lot of important personal stuff on the SSD, but it’s new (6 months old) from Crucial and I trust it, because I don’t have the money to spare on another drive (plus electricity bills). I also trust that I’ll only lose 1-2 files if it goes bad, because of the ZFS protection.

@PlexSheep@feddit.de

I host my array of HDDs with btrfs; it works well and is Linux-native.

Most filesystems should “just work” these days.

Why are you blaming the filesystem here when you haven’t ruled out other issues yet? If you have a failing drive, a new FS won’t help. Check out “smartctl” to see if it reports errors on your drives.
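For example (device name is a placeholder):

    smartctl -a /dev/sda        # full SMART report: check reallocated/pending sectors and the error log
    smartctl -t long /dev/sda   # start an extended self-test; review the result later with -a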

KptnAutismus

They may be using really slow hard drives or an SSD without DRAM.

Or maybe a shitty network switch?

Maybe the bandwidth is being used up by a torrent box?

There are a lot of possible causes.

@Merlin404@lemmy.world

That I’ve learnt the hard way; it doesn’t 😅 I have an Ubuntu server with UniFi Network on it that has now run out of inodes 😅 The positive thing: I’m forced to learn a lot about Linux 😂

Possibly linux

ZFS should have better performance if you set it up correctly.

That’s exactly their gripe: out of the box performance.

Possibly linux

If you set it up correctly

@Trincapinones@lemmy.world
creator

I’ll try to learn more about ZFS and do it better next time; I see a lot of people are pro-ZFS, so it should be good.

That’s, by the very definition, not out of the box.

Make sure you don’t have SMR drives, if they are spinning drives. CMR drives are the only ones that should be used in a NAS, especially with ZFS. https://vermaden.wordpress.com/2022/05/08/zfs-on-smr-drives/

@Trincapinones@lemmy.world
creator

It’s an SSD, that’s what worries me the most

@Moonrise2473@feddit.it

From the article it looks like ZFS is the perfect file system for SMR drives, as it would try to cache random writes.

Possibly, with tuning. OP would just have to be careful about resilvering. In my experience SMR drives really slow down when the CMR buffer is full.

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

LVM: (Linux) Logical Volume Manager for filesystem mapping
NAS: Network-Attached Storage
PSU: Power Supply Unit
SSD: Solid State Drive mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity

5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.


XFS has “just worked” for me for a very long time now on a variety of servers and desktop systems.

Possibly linux

Careful as it is obscure enough that you could blow off your leg.

Atemu

I don’t see how the default filesystem of the enterprise Linux distro could be considered obscure.

Possibly linux

I don’t believe that XFS is the default for anything these days. I could be wrong though.

Atemu

Default since RHEL 7. Consider looking things up before posting wrong facts.

@matze@programming.dev

Oh, I didn’t know that. RHEL 9 also uses it as default. Probably some forks of it as well. Rocky, Alma?

@matze@programming.dev

Oh, I mixed it up with ZFS. I think no distro uses ZFS by default.

@Damage@feddit.it

ZFS is default on Proxmox

AggressivelyPassive

3 MB/s sounds more like there is something else going on.

@Trincapinones@lemmy.world
creator

Yeah, but I don’t know how to diagnose it…

AggressivelyPassive

You could try to redo the copy and monitor the system in htop, for example. Maybe there’s a memory or CPU bottleneck. Maybe one of your drives is failing, maybe you’ve got a directory with tons of very small files, which causes a lot of overhead.
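For instance, watching the disks while re-running the copy shows whether a single device is saturated (iostat comes from the sysstat package):

    iostat -xz 1   # per-device utilisation and latency, refreshed every second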

Yes, file size, drive types, and the amount of RAM in the server, at both the source and destination of the operation, can all have an effect on performance. But generally, if he’s moving data within the same pool, it should be pretty quick.
