• 4 Posts
  • 301 Comments
Joined 1 year ago
Cake day: June 1st, 2023



  • If we only ever act on things we think we have 100% nailed down, we will either be as ignorant as the fools who locked Semmelweis away, or we will stop doing anything at all, because realistically there is always a chance that some seemingly basic understanding of ours is wrong.

    The only intelligent thing is to work with a good mix of “what you know”, a sane amount of critical thinking, and an assessment of the risks involved.

    Covid was also an example (at least here in Germany). People fought against the inconvenience of having to wear masks or stay inside (or get vaccinated) because (as they said) we didn’t know for certain how dangerous the illness really was and/or how effective these measures were.

    For me the calculation was simple: taking these measures and being wrong has far less severe consequences than skipping them and being wrong.



  • For file servers, ZFS (and likewise btrfs) has a clear advantage. The main thing is that you can relatively easily extend storage pools and section them off. With ext4 you would need LVM to achieve something similar, and even that is not as mighty as what ZFS (and btrfs) offer out of the box.

    ZFS also has a lot of caching strategies specifically optimized for storage boxes. That means: it will eat your RAM, but become pretty fast. That’s not a trade-off you want on a desktop (or a multi-purpose server), since you typically also need the RAM for the applications you are running. But on a NAS that is completely fine. AFAIK TrueNAS defaults to ZFS, Synology uses btrfs by default, and Proxmox supports ZFS out of the box. There is a rough sketch of the pool handling below.
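    A rough sketch of what that looks like in practice (pool, dataset and device names here are made up; not a complete guide):

    ```
    # create a pool from three disks as one raidz1 (RAID5-like) vdev
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

    # extend the pool later by adding another vdev
    zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

    # "section off" storage via datasets with their own properties/quotas
    zfs create tank/media
    zfs set quota=500G tank/media

    # optionally cap the ARC (ZFS read cache) so it doesn't eat all RAM,
    # e.g. 8 GiB, via a module parameter on Linux (value in bytes)
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
    ```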



  • As with any software product: they have different features.

    ZFS is not really hip. It’s pretty old, but also pretty solid. Unfortunately it’s licensed in a way that may be incompatible with the GPL, so no one wants to take the risk of trying to get it into the Linux kernel. So in the Linux world it is always a third-party add-on. In the BSD or Solaris world though …

    btrfs has similar goals to ZFS (more on that soon) but has been developed right inside the kernel all along, so it typically works out of the box. It has a bit of a complicated history with its stability/reliability, from which it still suffers (the history, not the stability). Many/most people run it with zero problems, some will still cite problems they had in the past, and some apparently still have problems.

    bcachefs is also looming around the corner and might tackle the same problems differently, bringing us all the nice features with fewer bugs (optimism, yay). But it’s an even younger FS than btrfs, so only time will tell.

    ext4 is an iteration on ext3, which was an iteration on ext2. So it’s pretty fucking stable and heavily battle-tested.

    Now why even care? ZFS, btrfs and bcachefs are filesystems following the COW philosophy (copy on write), meaning you might lose a bit of performance but win on reliability. That also makes snapshots easy, which all three offer out of the box. You can basically say “mark the current state of the filesystem with tag/label/whatever ‘x’”, and every subsequent change (since it is a copy) will not touch the old snapshot, allowing you to easily roll back a whole partition; see the short sketch at the end of this comment. (Of course that takes up space, but only incrementally.)

    They also bring native support for different RAID levels, making additional layers like mdadm unnecessary. In the case of ZFS and bcachefs you also get native encryption, making LUKS obsolete there.

    For typical desktop use ext4 is totally fine, but snapshots are extremely convenient if something breaks, since you can basically revert the changes with a single command. They don’t replace a backup strategy though, so in the end you should have some data security measures in place anyway.

    *Edit: forgot a word.
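    To make the snapshot/rollback and RAID/encryption parts concrete, a minimal sketch (pool, dataset and disk names are made up; not a complete guide):

    ```
    # ZFS: snapshot a dataset, then roll it back after a botched change
    zfs snapshot tank/home@before-upgrade
    zfs rollback tank/home@before-upgrade

    # btrfs: snapshots are subvolumes; "rolling back" usually means
    # booting/mounting the snapshot or copying data back out of it
    btrfs subvolume snapshot / /.snapshots/root-before-upgrade

    # ZFS native RAID and encryption, no mdadm/LUKS layer needed
    zpool create tank mirror /dev/sda /dev/sdb
    zfs create -o encryption=on -o keyformat=passphrase tank/secret
    ```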



  • One problem is that they need to put a price tag, and therefore a timeline, on such a project. Due to the complexity and the many unknown unknowns in these decades’ worth of accumulated technical debt, no one can properly estimate either. And so these projects never get off the ground and typically die during planning/evaluation, when both numbers (cost and time) climb higher and higher the longer people think about it.

    IMO a solution would be to do it iteratively with a small team and just finish whenever it’s done. Upside: you have people who know the system inside out on hand the whole time, should something come up. The downside, of course, is that you have effectively no meaningful reporting on when the thing will be finished.



  • To execute more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script that does some filesystem/permission setup and then starts a single process). There is a rough sketch of that pattern at the end of this comment.

    Containers running multiple processes are possible, but hard to pull off and therefore rarely used.

    What you are likely thinking of are the files included in the images. Sure, some images bring extra libs and executables along. But those are not started and/or running in the background (unless you explicitly start them as the entrypoint or via, for example, docker exec).
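    A rough sketch of that “setup, then one process” entrypoint pattern (names like myserver are made up, not a real image):

    ```
    #!/bin/sh
    # entrypoint.sh: do a bit of setup, then become the single main process
    chown -R app:app /data                  # filesystem/permission setup
    exec /usr/local/bin/myserver --config /etc/myserver.conf   # becomes PID 1
    ```

    Anything beyond that one process only runs because you started it yourself, e.g. with docker exec -it mycontainer sh.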