• 4 Posts
  • 68 Comments
Joined 1 year ago
Cake day: June 27th, 2023

  • lal309@lemmy.world to Linux@lemmy.ml · Friendly reminder · 11 months ago

    Have you tried manually specifying the subvolume ID (from sudo btrfs subvolume list /) of the snapshot you want to roll back to in /etc/fstab?
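
    For example (the IDs, UUID, and fstab fields below are illustrative; yours will differ):

        # list subvolumes and note the ID of the snapshot you want to boot into
        sudo btrfs subvolume list /
        #   ID 256 gen 1234 top level 5 path @
        #   ID 412 gen 1830 top level 5 path @snapshots/42/snapshot

        # then pin that ID in /etc/fstab so it gets mounted as / on the next boot
        # UUID=<your-fs-uuid>  /  btrfs  subvolid=412  0  0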

    When I was distro hopping, I believe Garuda Linux took snapshots automatically and I was able to restore from them a few times. Maybe you can reverse engineer what they’ve done?

    I’m running Nobara right now for my gaming setup but I’m half tempted to try TW.





  • Honestly, what really matters (imo) is that you do offsite storage. Cloud, a friend’s house, your parents’ place, your buddy’s NAS, whatever. Just get your data away from your “production/main” site.

    For me, I chose cloud for two main reasons. First, convenience: I could use a tool to automate moving data offsite in a reliable manner, keeping my offsite backups almost identical to my main array and making retrieval easy should I need it. Second, I don’t really have family or friends nearby and/or with the hardware to support my need for offsite storage.

    There are lots of pros and cons to each, even before you layer your specific needs and circumstances on top.

    If you can use the additional drives later on in your main array, in some other server, or for a different purpose, then it may be worthwhile exploring the drive route (my concern would be the ease of keeping the offsite data up to par with the main data). If you don’t like it for one reason or another, you can always repurpose the drives and give cloud storage a try. Again, the important thing is to do it in the first place (and encrypt it client-side).
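
    As one concrete (and purely illustrative) way to do that, a tool like rclone can wrap any S3-compatible bucket in a client-side-encrypted “crypt” remote and sync to it on a schedule; the remote names, bucket, and paths below are placeholders:

        # ~/.config/rclone/rclone.conf (sketch): an S3-compatible remote plus an
        # encrypting "crypt" remote layered on top of it
        #
        # [offsite]
        # type = s3
        # provider = Other
        # access_key_id = <key>
        # secret_access_key = <secret>
        # endpoint = <your provider's S3 endpoint>
        #
        # [offsite-crypt]
        # type = crypt
        # remote = offsite:my-backup-bucket
        # password = <password obscured with `rclone obscure`>

        # nightly cron job: mirror the main array to the encrypted remote
        rclone sync /srv/array offsite-crypt:array --checksum --transfers 8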


  • Well, here’s my very abbreviated conclusion (provided I remember the details correctly) from when I did the research about 3 months ago.

    Wasabi - okay pricing, reliable, S3 compatible, no charges to retrieve my data, pay in 1 TB blocks (wasn’t a fan of that), and a penalty for retrieving data before a “vesting” period (if I remember correctly, you had to leave the data there for 90 days before you could retrieve it at no cost; also not a big fan of that).

    AWS - I’m very familiar with it due to my job, pricing is largely driven by access requirements (how often and how fast I want to retrieve my data), very reliable, S3 native, but it charges for everything (list, read, retrieve, etc.). That is the real killer and the largely unaccounted-for cost of AWS.

    Backblaze - okay pricing, reliable, S3 compatible, free retrieval of data up to the same amount that you store with them (read below), pay by the gig (much more flexible than Wasabi). My heartburn with Backblaze was that retrieval stipulation. However, they have recently increased it to free retrieval of up to 3x what you store with them, which is super awesome and made my heartburn go away really quickly.

    I actually chose Backblaze before the retrieval policy change and it has been rock solid from the start. It works seamlessly with the vast majority of utilities that can leverage S3-compatible storage. Pricing-wise, I honestly don’t think it’s that bad.
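
    For instance, most S3-aware tools only need to be pointed at B2’s custom endpoint (the bucket name, the region in the endpoint, and the credentials below are placeholders; the exact endpoint depends on where your bucket lives):

        # AWS CLI talking to Backblaze B2's S3-compatible API with a B2 application key
        aws s3 ls s3://my-backup-bucket --endpoint-url https://s3.us-west-004.backblazeb2.com
        aws s3 cp ./archive.tar s3://my-backup-bucket/backups/ --endpoint-url https://s3.us-west-004.backblazeb2.com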

    Hope this helps


  • When you created your containers, did you create a “frontend” and a “backend” docker network? Typically I create those two networks (or whatever names you want), connect all my services (gitlab, Wordpress, etc.) to the “backend” network, and then connect nginx to that same “backend” network (so it can talk to the service containers). I also add nginx to the “frontend” network (the one exposed to the host).

    What this does is let you map docker ports to host ports for that nginx container ONLY. Since nginx is also on the network that can reach the other containers, you don’t have to forward or expose any ports that aren’t required (like 3000 for gitlab) for the outside world to talk to your services. Your containers will still talk to each other over their native ports, but only within that “backend” network (which has no forwarded/mapped ports).
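
    A rough sketch of that layout with the docker CLI (container names and images are just examples, and this uses two bridge networks with published ports on the proxy rather than the host network driver):

        # two user-defined networks: one facing the host, one internal
        docker network create frontend
        docker network create backend

        # service containers join only the backend network - no published ports
        docker run -d --name gitlab --network backend gitlab/gitlab-ce:latest

        # the reverse proxy publishes 80/443 on the host and sits on both networks
        docker run -d --name nginx --network frontend -p 80:80 -p 443:443 jc21/nginx-proxy-manager:latest
        docker network connect backend nginx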

    You would want to set up your proxy hosts exactly like you have them in your post, except that in your Forward Hostname you would use the container name (gitlab, for example) instead of the IP.

    So basically it goes like this:

    Internet > gitlab.domain.com > DNS points to your VPS > Nginx receives requests (frontend network with mapped ports like 443:443 or 80:80) > Nginx checks proxy hosts list > forwards request to gitlab container on port 3000 (because nginx and gitlab are both in the same “backend” network) > Log in to Gitlab > Code until your fingers smoke! > Drink coffee
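
    If you were writing the proxy config by hand instead of through a proxy-hosts UI, the equivalent nginx server block would look roughly like this (the domain and certificate paths are placeholders):

        server {
            listen 443 ssl;
            server_name gitlab.domain.com;

            ssl_certificate     /etc/nginx/certs/gitlab.crt;   # placeholder
            ssl_certificate_key /etc/nginx/certs/gitlab.key;   # placeholder

            location / {
                # "gitlab" resolves via Docker's embedded DNS on the shared backend network
                proxy_pass http://gitlab:3000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }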

    Hope this helps!

    Edit: Fix typos