Yeah, I know, that’s a huge advantage in this situation, but not one I can take advantage of 🙂
Switched to a qBittorrent + gluetun sidecar recently and it’s been pretty good compared to the poorly maintained combo torrent+OpenVPN images I was using. Being able to update my torrent client image/config independently of the VPN client is great. Unfortunately most of the docs are Docker focused, so it’s a bit of trial and error to get it set up in a non-Docker environment like Kubernetes. Here’s my deployment in case it’s useful for anyone. Be careful to configure qBittorrent to use “tun0” as its network interface or you will be exposed (got pinged by AT&T before I realized that one). I’m sure there’s a more robust way to make use of gluetun’s DNS over TLS and iptables kill switch that doesn’t require messing with the qBittorrent config to stay secure, but that’s what I have so far and it works well enough for now.
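In case a concrete starting point helps, here’s a minimal sketch of what that sidecar pattern looks like as a Kubernetes Deployment. Image tags, the provider name, and the env values are placeholders, not my actual config; check the gluetun wiki for your provider’s required variables.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        # VPN sidecar: both containers share the pod's network namespace,
        # so qBittorrent sees the tun0 interface gluetun creates.
        - name: gluetun
          image: qmcgaw/gluetun:latest
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]  # needed to create the tun device
          env:
            - name: VPN_SERVICE_PROVIDER
              value: "your-provider"  # placeholder
        - name: qbittorrent
          image: linuxserver/qbittorrent:latest
          ports:
            - containerPort: 8080  # web UI
```

Depending on your cluster you may also need to expose /dev/net/tun to the gluetun container. And the warning above still applies: in qBittorrent’s advanced settings, bind the network interface to tun0 so traffic can’t fall back to the pod’s default interface if the tunnel drops.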
Look for refurbished units; you can get enterprise-grade gear for about half the retail price. I recently got a refurbished APC from refurbups.com. Comes with brand new batteries, and it’s mostly rack-mountable stuff. Ended up being a little over half the price of a brand new one with shipping. Can’t tell at a glance if they ship to Canada, but if not I’d be surprised if there wasn’t a similar Canada-based site you could find.
Got a refurbished APC coming in today. Looking forward to not having to worry about my NAS drives or losing internet because of a split-second power blip.
Not really, it’s mostly a hobby/nerdy/because-I-can thing. I’m a software engineer with a decade of experience. The job sometimes requires virtual sysadmin work (VMs/containers, cloud networking, etc.). Setting up my own bare-metal cluster has given me more insight into how things work, especially on the network side. Most of my peers take for granted that traffic gets in or out of a cluster, but I can actually troubleshoot it or design with it in mind.
I considered it, but RAM is very limited on the NAS and the cluster nodes; it’s my primary bottleneck. It would also be more volatile. The two SSDs are RAID 1 redundant, just like the underlying HDDs, in addition to the built-in power loss protection on the drives. RAM disks are great if you can spare the memory and have a UPS though.
FYI, you will not be able to do live video transcoding with a Raspberry Pi. I overclocked my Pi 4’s CPU and GPU and it just can’t handle anything but direct play and maybe audio stream transcoding, though I’ve never had luck with any transcoding, period. I either download a format I know can direct play, or (recently) use tdarr (server on the Pi, node running on my desktop when I need it) to transcode into a direct-play format before it hits my Jellyfin library. Even just using my AMD Ryzen 5 (no GPU), it transcodes something like 100x faster than a tdarr node given two of the Pi’s CPU cores. You could probably live transcode with a decent CPU (newer Intel CPUs are apparently very good at it) if you run Jellyfin on the NAS, but then you’re at odds with your low power consumption goals. Otherwise rpi Jellyfin is great.
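For reference, the kind of pre-transcode my tdarr flow does can be approximated with a one-off ffmpeg command like this (file names and CRF/bitrate values are just illustrative; tdarr lets you codify the same thing per-library):

```shell
# Re-encode into H.264 video + AAC audio in an MKV container,
# a combination most Jellyfin clients can direct play without
# any server-side transcoding. CRF 20 is a reasonable
# quality/size tradeoff; tune to taste.
ffmpeg -i input.mkv \
  -c:v libx264 -preset medium -crf 20 \
  -c:a aac -b:a 192k \
  -c:s copy \
  output.mkv
```

Subtitle tracks are copied as-is (`-c:s copy`), which is why I keep the MKV container rather than remuxing to MP4.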
Good luck, I’d like to build a NAS myself at some point to replace or supplement my Synology.
If I am trying to fix problems with my cluster or the bare-metal hosts it runs on, I can’t rely on VPN access running on those nodes, which means I need dedicated, reliable hardware acting as a bastion. Right now all I have for that is my router. Home routers have awkward limitations for installing and configuring software even if you are running better custom firmware like FreshTomato or OpenWRT, making them an edge case for “just set up a VPN” advice. Yes, I played around with making it work. Yes, I could make it work if I sunk enough effort into it, but again, I found it acceptably secure to simply enable remote ssh access.
I do suggest Tailscale all the time for most people though. It’s cool tech, and their blog is fantastic. I’m looking forward to having a proper network switch one day, and I’ll revisit the issue then.
It’s for the chance that I need to administer my cluster when I’m not on my LAN. I can set up a port forward to the externally accessible port and everything works as if I were on my LAN. Non-default port, password auth disabled, root ssh login disabled (so you have to have my user and ssh key), and a limited number of ssh connection attempts before a ban. I can toggle it on or off with a checkbox on my router. Yes, I understand there are other ways that are even more secure; yes, I understand the risks; but for my circumstances this was a good balance of convenience and security. I’ve also never had an issue :).
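For anyone wanting to replicate that setup, the sshd side of it boils down to a handful of directives (port number is just an example; the connection-attempt bans come from the router or something like fail2ban, not sshd itself):

```
# /etc/ssh/sshd_config (illustrative values)
Port 2222                    # non-default port
PasswordAuthentication no    # ssh keys only
PermitRootLogin no           # must log in as a regular user first
MaxAuthTries 3               # limit auth attempts per connection
```

Restart sshd after editing, and keep an existing session open while you test so you don’t lock yourself out.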
I’d start with trying to find aarch64 container images. Search “image name aarch64”. If the source is available you could also build the image yourself, but I’ve never found software I wanted to use badly enough to do that. If you’re lucky, someone already did it for you, but these images often aren’t kept up to date. Do the community a favor and open an issue asking the owner for aarch64 builds if nothing else.
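If you do end up building one yourself, cross-building an arm64 image from an x86 machine is mostly a matter of buildx plus QEMU binfmt emulation (the image name is a placeholder; run this from the project’s Dockerfile directory):

```shell
# One-time setup: register QEMU binfmt handlers so an x86 host
# can run/emulate arm64 build steps
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Build an aarch64 (arm64) image from the project's Dockerfile
# and push it to your registry
docker buildx build \
  --platform linux/arm64 \
  -t yourname/someapp:latest-arm64 \
  --push .
```

Emulated builds are slow, but for most self-hosted apps it’s a set-and-forget job you rerun on upstream releases.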
I do as well, on a non-standard port, although that doesn’t really provide any extra security. I found ssh-only login acceptably secure personally, but it’s definitely less secure than Tailscale, which can operate with zero open ports. The risk would be from OS/sshd vulnerabilities that can be exploited. As long as you keep the router up to date it should be safe enough.
Get enough experience and you just have a brief moment of stage 3 as you dive straight to stage 4.
Unless it’s a customer/that-one-guy-at-work (it’s a title, but there’s usually a handful of them), and then there’s this vast stage 0 of back and forth of “are you sure that’s happening? Run these commands and paste the entire output to me” to be sure of what they’re actually saying, and then you jump to stage 3/4.
Measure of a Man was pretty early, season 2 maybe? Pretty sure it was before this one. In any case, yeah, I had the same thought. How many times has an organic person been taken over and done something terrible? Picard was a Borg, those weird worm things that infiltrated Starfleet, those ghosts that take over Troi, O’Brien, and Data (again!), etc. Lower Decks has an episode where Mariner thinks Boimler’s girlfriend is too hot for him and spends the entire episode trying to figure out what kind of creature she is or what alien influence she’s under. So yeah, common Star Trek trope.
Presumably cooler Starfleet heads prevailed and realized this situation with Data was no different, so he isn’t inherently any more risky than any other sentient being.
I don’t see how Starfleet allowed Data to remain onboard after that one. Being in the tech industry, I often feel the Federation’s infosec is lacking in often trivial ways (unless the episode calls for better security, of course 🙂), but maybe they have just accepted that sort of thing as the cost of doing space business since it happens all the time. So Data’s benefits outweigh his risk.
My homelab is a 2-node Kubernetes cluster (k3s, Raspberry Pis); going to scale it up to 4 nodes some day when I want a weekend project.
Built it to learn Kubernetes while studying for CKA/CKAD certification for work, where I design, implement, and maintain service architectures running in Kubernetes/OpenShift environments every day. It’s relatively easy for me to manage Kubernetes for my homelab, but it’s a bit heavy and has a steep learning curve if you’re new to it, which (understandably) puts people off it, I think. Especially for homelab/selfhosting use cases. It’s a very valuable (literally $$$) skill if you’re in that enterprise space though.