Ah okay well I appreciate the response anyways. I’m also struggling to figure out how to snapshot my /home since I put it on a different partition during install. Timeshift is “unable to see it”.
How are you taking the snapshot automatically?
OnlyOffice is the only one that I’ve used that has a good-looking UI, works out of the box, and has very good compatibility (across Microsoft and other document formats). Install is just one flatpak away. Highly recommend.
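If it helps, this is roughly all it takes, assuming the Flathub app ID is org.onlyoffice.desktopeditors (worth double-checking on Flathub):

```bash
# Add Flathub if it isn't configured yet, then install the desktop editors
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.onlyoffice.desktopeditors
```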
Honestly what really matters (imo) is that you do offsite storage. Cloud, a friend’s house, your parents’, your buddy’s NAS, whatever. Just get your data away from your “production/main” site.
For me, I chose cloud for two main reasons. First, convenience: I could use a tool to automate moving data offsite in a reliable manner, keeping my offsite backups almost identical to my main array and making retrieval easy should I need it. Second, I don’t really have family or friends nearby, or with the hardware to support my need for offsite storage.
There are lots of pros and cons to each, even before you layer your specific needs and circumstances on top.
If you can use the additional drives later on in your main array, some other server, or for a different purpose, then it may be worthwhile exploring the drive route (my concern would be the ease of keeping the offsite data in sync with the main data). If you don’t like it for one reason or another, you can always repurpose the drives and give cloud storage a try. Again, the important thing is to do it in the first place (and encrypt it client side).
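If you go the cloud route, here’s a rough sketch of the client-side encryption part using an rclone crypt remote (the remote names and paths are made up for the example; the crypt remote just wraps whatever cloud remote you configure):

```bash
# One-time setup: add your cloud remote, then a "crypt" remote that wraps it
# (e.g. a crypt remote named "secret" pointing at cloud:my-backup-bucket)
rclone config

# Anything synced through the crypt remote is encrypted before it leaves your machine
rclone sync /mnt/array/important secret: --progress
```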
Well, here’s my very abbreviated conclusion (provided I remember the details correctly) from when I did the research about 3 months ago.
Wasabi - okay pricing, reliable, S3 compatible, no charges to retrieve my data, pay in 1 TB blocks (wasn’t a fan of this one), penalty for retrieving data before a “vesting” period (if I remember correctly, you had to leave the data there for 90 days before you could retrieve it at no cost; also not a big fan of this one).
AWS - I’m very familiar with it due to my job, pricing is largely influenced by access requirements (how often and how fast I want to retrieve my data), very reliable, native S3, charges for everything (list, read, retrieve, etc.). This is the real killer and the largely unaccounted-for cost of AWS.
Backblaze - okay pricing, reliable, S3 compatible, free retrieval of data up to the same amount that you store with them (read below), pay by the gig (much more flexible than Wasabi). My heartburn with Backblaze was that retrieval stipulation. However, they have recently increased it to free retrieval of up to 3x what you store with them, which is super awesome and made my heartburn go away really quickly.
I actually chose Backblaze before the retrieval policy change and it has been rock solid from the start. Works seamlessly with the vast majority of utilities that can leverage S3-compliant storage. Pricing-wise, I honestly don’t think it’s that bad.
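As an example of how little glue it takes, hooking B2 up to rclone looks roughly like this (the key ID, application key, and bucket name are placeholders):

```bash
# Create a remote named "b2remote" using a Backblaze application key
rclone config create b2remote b2 account YOUR_KEY_ID key YOUR_APPLICATION_KEY

# List buckets to confirm the credentials work
rclone lsd b2remote:
```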
Hope this helps
I’m currently using Backblaze. I also researched Wasabi and AWS.
Can’t speak for those, but I tried Kopia and it did the job okay. Ultimately, though, I landed on rclone.
Lots of answers in the comments about this particular storage type/vendor. Regardless, to answer your original question: rclone. Hands down. If you spend 30-60 minutes actually reading their documentation, you’re set and will understand so much more of what’s going on under the hood.
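For what it’s worth, the day-to-day ends up being a one-liner like this (paths, remote name, and flags are just an example of what the docs walk you through):

```bash
# Preview what would change, then do the real sync (typically from a cron job or systemd timer)
rclone sync /mnt/array/data remote:my-bucket/data --dry-run --progress
rclone sync /mnt/array/data remote:my-bucket/data --transfers 8 --fast-list --log-file /var/log/rclone-offsite.log
```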
Didn’t even know
Absolutely agree! Just pointing it out in case OP runs into a registrar that doesn’t offer this
Fair point. I failed to mention features in my previous comment. Things like WHOIS privacy are essential to me, and I imagine they are for most of us (self-hosters).
In my opinion it really comes down to support, price (first year and renewal) and ethics.
For the ethics piece, if you think Google is an evil company then avoid Google Domains, as an example.
Did not know about this one! Just added it to my pi hole instance. Thank you!
You got it! As long as nginx can reach that service container, it will forward the request to it.
service1.example.com is configured in nginx with a proxy host of service1:1234, service2.example.com is proxied to service2:8080 and so on.
When you created your containers, did you create a “frontend” and a “backend” docker network? Typically I create those two networks (or whatever names you want) and connect all my services (GitLab, WordPress, etc.) to the “backend” network, then connect nginx to that same “backend” network (so it can talk to the service containers), but I also add nginx to the “frontend” network (typically of host type).
What this does is let you map container ports to host ports for the nginx container ONLY, and since nginx is on the network that can talk to the other containers, you don’t have to forward or expose any ports that aren’t needed (3000 for GitLab, for example) to get from the outside world into your services. Your containers will still talk to each other over their native ports, but only within that “backend” network (which has no forwarded/mapped ports). See the sketch below.
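A rough sketch of that layout with plain docker commands (container, image, and network names are placeholders; the same idea maps straight onto a compose file, and the host-network variant simply skips the -p flags):

```bash
# "backend" carries container-to-container traffic only; nothing on it is published to the host
docker network create backend
docker network create frontend

# Service container: no -p flags, only joined to the backend network
docker run -d --name gitlab --network backend gitlab/gitlab-ce

# Nginx Proxy Manager: the ONLY container with ports mapped to the host,
# attached to both networks so it can reach "gitlab" by container name
docker run -d --name nginx --network frontend -p 80:80 -p 443:443 jc21/nginx-proxy-manager
docker network connect backend nginx
```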
You would want to set up your proxy hosts exactly like you have them in your post, except that in your Forward Hostname you would use the container name (gitlab, for example) instead of the IP.
So basically it goes like this
Internet > gitlab.domain.com > DNS points to your VPS > Nginx receives the request (“frontend” network with mapped ports like 443:443 or 80:80) > Nginx checks its proxy host list > forwards the request to the gitlab container on port 3000 (because nginx and gitlab are both on the same “backend” network) > Log in to GitLab > Code until your fingers smoke! > Drink coffee
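And a quick way to sanity-check the whole chain from the outside (using the gitlab.domain.com hostname from above):

```bash
# Should come back with GitLab's response headers via nginx, with port 3000 never exposed on the host
curl -I https://gitlab.domain.com
```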
Hope this helps!
Edit: Fix typos
Well…. you just blocked off my calendar for the weekend!
I got ya. Took a quick look at that link and it looks like the client is Windows-specific, which is frowned upon and permanently blacklisted in this house!!!
Still, I appreciate the reply
I’ve been toying with the idea of standing it up. Any recommendations for the GUI side?
Sad indeed. Maybe raise an issue on GitHub? Even if you don’t end up using CloudBeaver, it’s worth reporting. Maybe they don’t know there’s a problem with this component of their app.
Have you tried manually specifying the subvolume ID (sudo btrfs subvolume list /) of the snapshot you want to restore in /etc/fstab?
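Something along these lines (the UUID and subvolume ID below are placeholders; check them against your own subvolume list output):

```bash
# Find the ID of the snapshot subvolume you want to boot into
sudo btrfs subvolume list /
# e.g. "ID 367 gen 12345 top level 5 path timeshift-btrfs/snapshots/2024-01-01_00-00-00/@"

# Then point the root mount at that subvolume in /etc/fstab:
# UUID=xxxx-xxxx-xxxx-xxxx  /  btrfs  subvolid=367,compress=zstd:1  0  0
```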
When I was distro hopping, I believe Garuda Linux took snapshots, and I was easily able to restore a few times. Maybe you can reverse engineer what they’ve done?
I’m running Nobara right now for my gaming setup but I’m half tempted to try TW.