Yeah, the container I used requires your Steam ID as an environment variable.
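If it helps, the relevant bit of the compose file looks something like this. The image and variable names here are purely illustrative; check your container’s docs for the exact env var it expects.

```yaml
# Hypothetical sketch -- image and env var names are illustrative only
services:
  gameserver:
    image: example/game-server:latest
    environment:
      - STEAM_ID=76561198000000000   # your SteamID64 goes here
    restart: unless-stopped
```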
That’s a really open-ended question. Depends purely upon your interests and appetite for risk, etc.
Might be worth looking at, from a Docker perspective:
I have in the past run a Valheim server and a VRising server, too. FWIW.
Searched “tdr” before replying, and was inexplicably happy. :)
I have zero problem with curated or algorithmic timelines. I have a 100% problem when there isn’t a chronological timeline option.
It’s simple really: give me the permanent option of chronological without the dark pattern fuckery of having to reset it periodically, or fuck off forever.
Every time a social media site has offered, pleaded, cajoled or forced me to take a non-chronological timeline, I’ve refused. And if that refusal eventually becomes impossible (no option, addons no longer work, etc), I take my eyeballs elsewhere.
You’re not an edge case. :)
Yeah, it makes for a nice workflow, doesn’t it? It doesn’t give you the “fully automated” achievement, but it’s not much of a chore. :)
Have you considered something like borgbackup? It does good deduplication, so you won’t have umpteen copies of unchanged files.
I use it mostly for my daily driver laptop to back up to my NAS, and the Gitlab CE container running on the NAS acts as the equivalent for its local Git repos, which are then straightforward to copy elsewhere. Though I haven’t got it scripting anything like bouncing containers or DB dumps.
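If you try it, a nightly run can be as simple as this sketch; the repo path and retention numbers are placeholders, so adjust to taste.

```sh
# One-time repo setup, then one dated archive per run. Borg only stores
# chunks it hasn't seen before, so unchanged files cost almost nothing.
borg init --encryption=repokey ssh://nas/volume1/backups/laptop
borg create --stats --compression lz4 \
    ssh://nas/volume1/backups/laptop::'{hostname}-{now:%Y-%m-%d}' \
    ~/Documents ~/Projects
# Thin old archives out to a sane retention schedule.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://nas/volume1/backups/laptop
```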
Agreed. The lack of varied examples in documentation is my common tripping point. When I hate myself, I visit StackOverflow to find examples, and then reference those against the module’s documentation.
And it’s definitely become an easier process as I’ve read more documentation.
Do you have a NAS? It can be a good way to get decent functionality without extra hardware, especially if you’re doing proof of concept or temporary stuff.
My self-hosting Docker setup is split between 12 permanent stacks on a Synology DS920+ NAS (with upgraded RAM) and 4 on a Raspberry Pi 4B, using Portainer and its agent on the Pi to manage them. The NAS is also using Synology’s Drive (like Dropbox or GDrive) and Photos (like Google Photos).
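The agent side of that is tiny; deployed via compose it’s roughly this, going from memory of Portainer’s documented run command, so double-check against their docs.

```yaml
services:
  agent:
    image: portainer/agent:latest
    ports:
      - "9001:9001"   # the main Portainer instance connects here
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    restart: always
```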
I’ve had the NAS running servers for Valheim and VRising in the past, but they leave room for fewer other containers, as game servers on Linux are usually unoptimised and/or emulating Windows.
If I decide to host a game server again, I’ll probably look at a NUC. I’ve done the DIY mini-ITX route in the past (for an XBMC-based media centre with HDMI output) and it was great, so that’s another option.
This is what I do. I find keeping 20-odd docker-compose files (almost always static content) backed up to be straightforward.
Each is configured to bring up/down the whole stack in the right order, so any Watchtower-triggered update is seamless. My Gotify container sends me an update every time one changes. I use Portainer to manage them across two devices, but that’s just about convenience.
I disable Watchtower for twitchy containers, and handle them manually. For the rest, the only issue I’ve seen is if there’s a major change in how the container/stack is built (a change in database, etc), but that’s happened twice and I’ve been able to recover.
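For the curious, the opt-out is just a label, and the Gotify pushes are a couple of env vars on the Watchtower container itself. A sketch, with placeholder names and token:

```yaml
services:
  fragile-app:
    image: example/fragile-app:latest   # illustrative image name
    labels:
      # Watchtower skips containers carrying this label
      - com.centurylinklabs.watchtower.enable=false

  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      # Gotify push on each update; host and token are placeholders
      - WATCHTOWER_NOTIFICATION_URL=gotify://gotify.example.com/AbCdEf123
    restart: unless-stopped
```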
I used Linuxserver’s Docker container of Dokuwiki when I migrated my notes from Evernote a few years ago. It was easy to setup and configure, has a number of plugins that further improve it, and it did the job really well.
I ended up migrating it all to Obsidian this year, as it serves my needs better, but otherwise I’d still be using Dokuwiki.
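For anyone weighing it up, the compose stanza was about this simple (ports and paths from memory, so verify against the linuxserver docs):

```yaml
services:
  dokuwiki:
    image: lscr.io/linuxserver/dokuwiki:latest
    environment:
      - PUID=1000   # match the owner of the config dir
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./dokuwiki/config:/config   # wiki data and settings live here
    ports:
      - "8080:80"   # host port is arbitrary
    restart: unless-stopped
```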
I migrated away from Evernote a few years ago, where I kept my “paperless life” (PDFs of receipts, bills, etc) and general notes (work, study, etc). Opting to self-host most of the things I can, I moved the notes to Dokuwiki and the rest to what is now Paperless-ngx.
This year I realised that Obsidian suits my needs better than a wiki, so migrated the notes to that. If it’s just for your stuff, I’d recommend the same. (Though if you collaborate with anyone, I’ve heard Notion is a better option specifically for that.) Obsidian has a lot of extensibility, which will steepen the learning curve, but it’s worth it.
I sync Obsidian’s Vault using my Synology NAS’s “Drive” client, and Obsidian works perfectly with Windows, Mac, Linux, and Android. The only shortcoming is iOS (because iOS), though I believe you can work around it using Obsidian Sync or at least one other tool I’ve seen mentioned. It might also be possible via the Obsidian Git extension, but I’ve not tried it with iOS, and (from a self-hosting perspective) it requires that you run your own Git server, for example.
It’s a good question. A vault is only as strong as the credentials required to access it.
Bitwarden does have MFA support, though. If you’re using it without that enabled, you’re asking for trouble.
FWIW, I have an LG LED smart TV (2xHDMI, 1xDVB-S2, WiFi, NIC, etc) and it’s only been connected to my network once, for a post-purchase firmware update through my AdGuard Home. WiFi and Ethernet are disabled, and I use it with my Nvidia Shield TV (Plex*, Netflix, Chromecast, etc).
I won’t let it go online as I expect it already phones home if you let it, and don’t imagine LG will be able to resist ad injection into content, like Samsung and others do. So it’s an excellent quality dumb TV, which meets my needs perfectly.
*Plex Media Server runs on my NAS. The Shield and my mobile devices are Plex clients.
Exposed is the right term. Other than my Wireguard VPN port, everything I have exposed is HTTPS behind Authelia MFA and SWAG.
I’m tempted to switch Wireguard for Tailscale, as the level of logging with WG has always bothered me. Maybe one day.
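The rough shape of that front end, heavily simplified; the domain and validation values are placeholders, and Authelia’s own configuration.yml (where the MFA policy actually lives) is omitted here:

```yaml
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - URL=example.com      # placeholder domain
      - SUBDOMAINS=wildcard
      - VALIDATION=dns       # depends on your DNS provider
    volumes:
      - ./swag:/config
    ports:
      - "443:443"            # the only web port exposed
    restart: unless-stopped

  authelia:
    image: authelia/authelia:latest
    volumes:
      - ./authelia:/config   # configuration.yml and user database
    restart: unless-stopped
```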
When my old NetGear ReadyNAS Duo (2 bays, SPARC, 100Mb NIC) was reaching its EOL, I looked into a purpose-built server, a mini of some kind (NUC, etc), or a standard QNAP or Synology NAS. I eventually settled on a Synology DS920+ (4 bays, x86_64, 1Gb NIC).
It’s been rock solid and amazing value for the 2.5 years I’ve had it. It’s running the majority of my Docker containers, Plex Media Server, a Linux VM, and a few other things. It also has its own shell/CLI, which is useful. I don’t use Synology’s “phone home”/remote access stuff, but Synology Drive and Synology Photos are great - they provide the equivalents of Dropbox and Google Photos respectively, and they work across Windows, Linux, Mac, iOS, and Android (via VPN when outside the house). No regrets at all.
I’ve had gitlab/gitlab-ce running on my NAS for 6+ months and it’s been reliable, mostly as a central repository and off-device backup. It has CI/CD and other capabilities (gitlab/gitlab-runner, etc), but I’ve not implemented them.
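The compose file is close to GitLab’s own documented example; the hostname and host ports here are placeholders:

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com   # placeholder
    ports:
      - "8081:80"
      - "8443:443"
      - "2222:22"                  # SSH for git push/pull
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    shm_size: "256m"
    restart: unless-stopped
```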
TT-RSS is fantastic, providing you hold your nose and wear an asbestos suit if you ever dare ask a question or raise a valid issue. The dev is… well, I’m not a fan. I won’t use it out of principle.
FreshRSS is a good-looking, skinnable alternative with a decent web UI and a good Docker image, but I had issues with its inability to flush old items.
These days I’m using Sismics, via its web UI.
This is what I did, too. I used Pi-Hole for a year or so, and it required regular tinkering and repairing. I planned to test AGH for a short time in a Docker container on a Pi 4B, and it’s been running that way for 2 years without any issues.
Easier to administer, more functionality and rock solid. I’ve never looked back.
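The whole thing is a single small compose file, something like this (host paths are placeholders; port 3000 is only needed for the first-run wizard):

```yaml
services:
  adguardhome:
    image: adguard/adguardhome:latest
    volumes:
      - ./agh/work:/opt/adguardhome/work   # runtime data
      - ./agh/conf:/opt/adguardhome/conf   # settings survive updates
    ports:
      - "53:53/tcp"       # DNS
      - "53:53/udp"
      - "3000:3000/tcp"   # first-run setup wizard
      - "80:80/tcp"       # web UI after setup
    restart: unless-stopped
```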
Oops, I think you’re right.
The Honeynet Project, related to the SANS Institute when I last checked, has a lot of resources on honeypots that are worth a look, if you haven’t already.