About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running something Proxmox still labels a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean I can’t downgrade the kernel, and the performance on my hardware is abysmal: I get only 50–100 MB/s versus the several hundred I got with btrfs.
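
For what it’s worth, one way to at least avoid getting stranded on a broken kernel/module combo (assuming Proxmox 7.2 or later, where proxmox-boot-tool can pin kernels; the version string below is only an example) is:

    proxmox-boot-tool kernel list               # show installed kernels
    proxmox-boot-tool kernel pin 6.8.12-4-pve   # pin a known-good version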

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplained errors. That is sad to hear, as btrfs has had plenty of time to mature over the last 8 years. I would never have considered it 5–6 years ago, but now it seems like a solid choice.

Anyone else pondering or using btrfs?

  • catloaf@lemm.ee · 3 days ago

    Meh. I run Proxmox and other boot drives on ext4 and data drives on XFS. I don’t have any need for btrfs’s additional features. Shrinking would be nice, so maybe someday I’ll use ext4 for data too.
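
    As a sketch of why shrinking matters for that choice (the device name is a placeholder; ext4 can shrink offline, while XFS can only grow):

        umount /dev/sdb1
        e2fsck -f /dev/sdb1          # fsck is required before shrinking
        resize2fs /dev/sdb1 500G     # shrink the filesystem to 500 GiB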

    I started with ZFS instead of RAID, but I found I spent way too much time trying to manage RAM and tune it, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.
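
    The “configure once” part looks roughly like this (a minimal mdadm sketch; the four drive names and the md device are placeholders):

        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.xfs /dev/md0
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array config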

    You can benchmark them if you care about performance. You can find plenty of discussion by googling “ext vs xfs vs btrfs” or whichever ones you’re considering. They haven’t changed that much in the past few years.
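
    If you do benchmark, something like fio gives you a comparable number on each filesystem (the target path is a placeholder; adjust block size and flags for your actual workload):

        fio --name=seqwrite --filename=/mnt/test/fio.dat --rw=write \
            --bs=1M --size=4G --ioengine=libaio --end_fsync=1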

    • WhyJiffie@sh.itjust.works · 2 days ago (edited)

      “but I found I spent way too much time trying to manage RAM and tune it”

      I spent none, and it works fine. What was your issue?

      • catloaf@lemm.ee · 3 days ago

        I have four 6 TB data drives and 32 GB of RAM. When I set them up with ZFS, it claimed quite a few GB of RAM for its cache. I tried allocating part of another NVMe drive as cache and tried to reduce RAM usage to a reasonable level, but like I said, I found I was spending a lot of time fiddling, instead of just configuring RAID and having it run fine in much less time.
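
        For the record, the knobs involved are roughly these (the pool name, device, and the 8 GiB cap are placeholders; on Proxmox the modprobe change may also need update-initramfs -u to stick):

            # cap the ARC at 8 GiB for the running system (value in bytes)
            echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
            # make the cap persistent across reboots
            echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
            # add part of an NVMe drive as L2ARC read cache
            zpool add tank cache /dev/nvme0n1p2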

        • MangoPenguin@lemmy.blahaj.zone · 3 days ago

          You can ignore the RAM usage; it’s just cache (the ARC). It uses up to half your RAM by default, but if other things need memory, ZFS will release it for them.
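
          You can watch it do that yourself (arcstat ships with OpenZFS; the raw counters live in /proc/spl/kstat/zfs/arcstats):

              arcstat 1                # ARC size and hit rate, once per second
              grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats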

          • catloaf@lemm.ee · 3 days ago

            That might be what was supposed to happen, but when I started up the VMs I saw memory contention.