• 0 Posts
• 118 Comments
Cake day: June 22nd, 2023
  • What you describe is true for many file formats, but most lossy compression standards only strictly define how to decode the data; any encoder that produces output that successfully decodes that way is conformant.

    The standard instead defines a collection of “tools” that encoders can use, and how exactly to use, combine, and tweak those tools is up to each encoder.

    And over time new/better combinations of these tools are found for specific scenarios. That’s how different encoders of the same codec can produce very different output.

    As a simple example, almost all video codecs by default describe each frame relative to the previous one (i.e. they describe which parts moved and what new content appeared). There is of course also the option to send a completely new frame, which usually takes up more space. But when one scene cuts to another, sending a fresh frame can be much cheaper. A “bad” encoder might not have scene-cut detection and still try to “explain the difference” relative to the previous scene, which can easily take up more space than just sending the entire new frame.
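    To make the scene-cut idea concrete, here is a toy sketch in Python of an encoder's keyframe-vs-delta decision. The mean-absolute-difference metric and the threshold value are simplifying assumptions; real encoders use far more sophisticated rate/distortion cost models.

```python
# Toy illustration of scene-cut detection in an encoder's frame-type
# decision. Frames are flat lists of pixel values; the threshold and
# the mean-absolute-difference metric are simplifying assumptions.

def mean_abs_diff(prev, curr):
    """Average per-pixel difference between two frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def choose_frame_type(prev, curr, cut_threshold=60.0):
    """Return 'key' for a full (intra) frame, 'delta' for a predicted one."""
    if prev is None or mean_abs_diff(prev, curr) > cut_threshold:
        return "key"    # scene cut (or first frame): send the frame whole
    return "delta"      # small change: send only the differences

# Example: a static dark scene followed by a hard cut to a bright one.
dark   = [10] * 16
dark2  = [12] * 16   # tiny change -> cheap delta frame
bright = [200] * 16  # hard cut    -> keyframe is cheaper than a huge delta

print(choose_frame_type(None, dark))     # "key"
print(choose_frame_type(dark, dark2))    # "delta"
print(choose_frame_type(dark2, bright))  # "key"
```

    Without the `prev is None or …` check (i.e. without scene-cut detection), the encoder would always emit a delta, which is exactly the failure mode described above.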




  • Note that just because everything is digital doesn’t mean something like that isn’t necessary: If you depend on your service provider to keep all of your records then you will be out of luck once they … stop liking you, go out of business, have a technical malfunction, decide they no longer want to keep any records older than X years, …

    So even in an all-digital world I’d still keep all the PDF artifacts in something like that.

    And I also second the suggestion of paperless-ngx (I haven’t been using it for very long yet, but it’s working great so far).


  • Ask yourself what your “job” in the homelab should be: do you want to manage which apps are available, or do you want to be a DB admin? Because if you share DB containers between multiple applications, you’ve basically signed up to closely check the release notes of every release of every involved app for changes like this.

    Treating “immich+postgres+redis+…” as a single unit that you deploy and upgrade together makes everything simpler at the (probably small) cost of some extra resources. But even on a 4 GB RAM Raspberry Pi that’s unlikely to become the primary issue any time soon.
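    As a sketch of that “one stack per app” layout, a docker-compose file might look roughly like this. The image tags and password are placeholders; the real Immich compose file pins specific images and includes more services, so treat this as an illustration of the structure, not a deployable config.

```yaml
# docker-compose.yml — sketch of bundling an app with its own
# database and cache, so the whole stack upgrades as one unit.
# Image tags and the password are placeholders.
services:
  app:
    image: ghcr.io/immich-app/immich-server:release
    depends_on:
      - db
      - cache
  db:
    image: postgres:16        # dedicated to this app only
    environment:
      POSTGRES_PASSWORD: example   # placeholder, change it
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  db-data:
```

    Because the database belongs to exactly one app, `docker compose pull && docker compose up -d` upgrades the whole unit together, and a breaking DB-version requirement only ever affects that one app.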


  • There are many different approaches with different tradeoffs. For example, on my homelab server I’ve set it up so that I have to enter the passphrase on every boot, which isn’t often. But I’ve also set it up to run an SSH server so I can enter it remotely.

    On my work laptop I likewise have to enter it on each boot, but the laptop mostly just goes into suspend anyway.

    One could also keep the key on a USB stick (or better, a YubiKey) and unplug it whenever that’s reasonable.


  • Just FYI: the often-cited NIST SP 800-88 standard no longer recommends/requires more than a single pass of a fixed pattern to clear magnetic media. See https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-88r1.pdf for the full text (“Guidelines for Media Sanitization”). In Appendix A it states:

    Overwrite media by using organizationally approved software and perform verification on the
    overwritten data. The Clear pattern should be at least a single write pass with a fixed data value,
    such as all zeros. Multiple write passes or more complex values may optionally be used.

    This is the line of standards that pretty much birthed the “multiple passes” idea, but modern HDD technology has made extra passes essentially unnecessary (unless you are up against nation-state-sponsored attackers, in which case you should be physically destroying the media anyway, preferably using some high-heat method).
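    The clear-and-verify procedure from the quote can be sketched in Python. It is demonstrated on a regular file here because running it against a block device (e.g. /dev/sdX) is destructive; the chunk size and file name are arbitrary choices.

```python
# Sketch of the single-pass "clear with a fixed value, then verify"
# procedure from NIST SP 800-88. Demonstrated on a regular file;
# for a whole disk you would open the block device instead.
import os

CHUNK = 1024 * 1024  # overwrite/verify in 1 MiB chunks

def clear_with_zeros(path):
    """Overwrite every byte of `path` with a single pass of zeros."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(CHUNK, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # make sure the pass actually hit storage

def verify_zeros(path):
    """Read the target back and confirm every byte is zero."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            if chunk.count(0) != len(chunk):
                return False
    return True

# Demo on a throwaway file filled with random data.
with open("demo.bin", "wb") as f:
    f.write(os.urandom(4096))
clear_with_zeros("demo.bin")
print(verify_zeros("demo.bin"))  # True
```

    The verification read-back is the part the standard explicitly asks for: one pass of zeros plus a check that the pass actually landed.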



  • That saying also means something else (and imo more important): RAID doesn’t protect against accidental or malicious deletion/modification. It only protects against data loss due to hardware fault.

    If you delete stuff or overwrite it then RAID will dutifully duplicate/mirror/parity-check that action, but doesn’t let you go back in time.

    That’s also why just automatically syncing the data to another target isn’t the same as a full backup.





  • You’re approaching a relevant part (that big corporations have an overwhelming power advantage in this “negotiation”), but “small artists never use copyright law” is just wrong:

    Without copyright law they couldn’t even sell their content (or more accurately: they could sell it, but a big corporation could simply copy it and sell it better/cheaper thanks to economies of scale).

    So without copyright the smaller artists would be even more boned than they are right now.




  • IMO copyright as a concept makes sense, but its duration should be significantly shortened. In today’s short-lived world most works lose the majority of their financial value after a few years (say ~10) anyway. So to let artists benefit from their creations while still allowing remixing of reasonably recent content, some sane compromise on duration is necessary.

    Either that, or massively expand (and codify) what qualifies as fair use: let anyone reinterpret anything, but don’t allow verbatim copying.