
  • I recently set up a personal Owncast instance on my home server; it should do what you’re looking for. I use OBS Studio to stream random stuff to friends. If your webcam can send RTMP streams, it should be able to stream to Owncast without OBS in the middle - otherwise, you just need to set up OBS to capture from the camera and stream to Owncast over RTMP.

    the communication itself should be encrypted

    I suggest having the camera/OBS and Owncast on the same local network, as RTMP is unencrypted and could be intercepted between the source and the Owncast server - so make sure it happens over a reasonably “trusted” network. From there, my reverse proxy (Apache) serves the Owncast instance to the Internet over HTTPS (using Let’s Encrypt or self-signed certs), so traffic is encrypted between the server and clients. You can watch the stream from any web browser, or use another player such as VLC pointed at the correct stream address [1]

    it seems that I might need to self-host a VPN to achieve this

    Owncast itself offers no authentication mechanism to watch the stream, so if you expose it to the internet directly and don’t want it public, you’ll have to implement authentication at the reverse proxy level (e.g. HTTP Basic auth). Or, as you said, you can set up a VPN server (I use WireGuard) on the same machine as the Owncast instance and only expose the instance to the VPN network range, with the VPN providing the authentication layer. If you go for a VPN between your phone and the Owncast server, there’s also no real need to set up HTTPS at the reverse proxy level, as the VPN already provides encryption.
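    For the Basic auth option, a minimal Apache sketch (the password file path and realm name are placeholders, not anything Owncast-specific; create the file with htpasswd -c /etc/apache2/owncast.htpasswd myuser):

```apache
# Hedged sketch: Basic auth in front of a proxied app.
# Requires mod_auth_basic and mod_authn_file.
<Location "/">
    AuthType Basic
    AuthName "Private stream"
    AuthUserFile /etc/apache2/owncast.htpasswd
    Require valid-user
</Location>
```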

    Of course you should also forward the correct ports (VPN or HTTPS) from your home/ISP router to the server on your LAN.

    There are also dedicated video surveillance solutions.








  • I’m curious why you’re not running your own CA since that seems to be a more seamless process than having to deal with ugly SSL errors for every website

    It’s not; it’s another service to deploy, maintain, monitor, back up and troubleshoot. The ugly SSL warning only appears once: I check the certificate fingerprint, bypass the warning, and from there it’s smooth sailing. The certificate is pinned, so if it ever changes I would get a new warning and would know something shady is going on.
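    The fingerprint check can be done with openssl. A runnable sketch - the local cert generation below just stands in for the real server’s certificate so the example is self-contained, and owncast.example.org is a placeholder hostname:

```shell
set -eu
dir=/tmp/fingerprint-demo
mkdir -p "$dir"

# Throwaway self-signed cert standing in for the server's cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=owncast.example.org" \
  -keyout "$dir/server.key" -out "$dir/server.crt" 2>/dev/null

# Record the fingerprint once (this is the "pin")...
pinned=$(openssl x509 -in "$dir/server.crt" -noout -fingerprint -sha256)

# ...then, against a live server, you would instead fetch the current
# fingerprint like this before bypassing the browser warning:
#   openssl s_client -connect owncast.example.org:443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -fingerprint -sha256
current=$(openssl x509 -in "$dir/server.crt" -noout -fingerprint -sha256)

# If the two values ever differ, something shady may be going on.
[ "$pinned" = "$current" ] && echo "fingerprint matches"
```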

    every time you rotate the certificate.

    I don’t really rotate these certs, they have a validity of several years.

    I’m wondering how different the process is between running an ACME server and another daemon/process like certbot to pull certificates from it, vs. writing an ansible playbook/simple shell script to automate the rotation of server certificates.

    • Generating self-signed certs is ~40 lines of clean ansible [1], 2 lines of apache config, and one click to get through the self-signed cert warning, once.
    • Obtaining Let’s Encrypt certs is 2 lines of apache config with mod_md and the HTTP-01 challenge. But it requires a domain name in the public DNS, and port forwarding.
    • Obtaining certs from a custom ACME CA is 3 lines of apache config (the extra line is to change the ACME endpoint) and a 100k LOC ACME server daemon running somewhere with its own bugs, documentation, deployment and upgrade management tooling, config quirks… and you still have to manage certs for this service. It may be worth it if you have a lot of clients who don’t want to see the self-signed cert warning and/or worry about their private keys being compromised and thus needing to rotate the certs frequently (you still need to protect the CA key…)
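    The ~40 lines of ansible in the first bullet boil down to a couple of openssl calls. A hedged shell equivalent (myapp.example.org and the /tmp path are placeholders; -addext needs OpenSSL 1.1.1+):

```shell
set -eu
dir=/tmp/selfsigned-demo
mkdir -p "$dir"

# Private key + self-signed cert valid for ~3 years, with a SAN so
# browsers accept the hostname.
openssl req -x509 -newkey rsa:4096 -nodes -days 1095 \
  -subj "/CN=myapp.example.org" \
  -addext "subjectAltName=DNS:myapp.example.org" \
  -keyout "$dir/myapp.key" -out "$dir/myapp.crt" 2>/dev/null

# The 2 lines of apache config would then point at these files:
#   SSLCertificateFile    /path/to/myapp.crt
#   SSLCertificateKeyFile /path/to/myapp.key
openssl x509 -in "$dir/myapp.crt" -noout -subject -enddate
```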

    likely never going to purchase Apple products since I recognise how much they lock down their device

    hear hear

    there are not that many android devices in the US with custom ROM support. With that said, I do plan to root all of my Android devices when KernelSU matures

    I bought a cheap refurbished Samsung and installed LineageOS on it (I’m in Europe, but I don’t see why it wouldn’t work in the US?), without root - I don’t really need root, it’s a security liability, and I think the last time I tried Magisk it didn’t work. The only downside is that I have to manually tap Update for F-Droid updates to run (fully unattended updates require root).

    I’m currently reading up on how to insert a root and client certificate into Android’s certificate store, but I think it’s definitely possible.

    I did it on that LineageOS phone using adb push; I can’t remember exactly how (did it require root? I don’t know). It works, but you get a permanent warning in your notifications telling you that “The network might be monitored” or something. Some apps would still ignore the added certificate, though.



  • I’m not using a private CA for my internal services, just plain self-signed certs. But if I had to, I would probably go as simple as possible at first: generate the CA cert using ansible, then use ansible to automate signing of all my certs with the CA key. The openssl_* modules make this easy enough. This is not very different from my current self-signed setup; the benefit is that I’d only have to trust a single CA certificate/bypass a single certificate warning, instead of getting a warning for every single certificate/domain.
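    For reference, the openssl calls that the ansible openssl_* modules would wrap look roughly like this - a hedged sketch, with all names and paths as placeholders:

```shell
set -eu
dir=/tmp/mini-ca-demo
mkdir -p "$dir"

# 1. One-off: CA key + self-signed CA certificate
#    (the single cert clients would have to trust).
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -subj "/CN=Homelab demo CA" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# 2. Per service: key + CSR, then sign the CSR with the CA.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=service.internal.example.org" \
  -keyout "$dir/service.key" -out "$dir/service.csr" 2>/dev/null
openssl x509 -req -in "$dir/service.csr" \
  -CA "$dir/ca.crt" -CAkey "$dir/ca.key" -CAcreateserial \
  -days 1095 -out "$dir/service.crt" 2>/dev/null

# 3. Check the chain, as a client trusting only ca.crt would.
openssl verify -CAfile "$dir/ca.crt" "$dir/service.crt"
```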

    If I wanted to rotate certificates frequently, I’d look into setting up an ACME server like [1], and point mod_md or certbot to it, instead of the default letsencrypt endpoint.

    This still does not solve the problem of how to get your clients to trust your private CA. There are dozens of different mechanisms to get a certificate into a trust store. On Linux machines this is easy enough (add the CA cert to /usr/local/share/ca-certificates/*.crt, run update-ca-certificates), but other operating systems use different methods (ever tried adding a custom CA cert on Android? It’s painful. Do other OSes even allow it?). Then some apps (web browsers for example) use their own CA cert store, which is separate from the OS one… And what about clients you don’t have admin access to? etc.
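    To make the Linux and Android parts concrete, a sketch (the “Demo CA” cert is a throwaway stand-in; the root-requiring Debian/Ubuntu steps are shown as comments; on Android the user-installable store via Settings is the non-root path, while pushing into /system needs root/remount):

```shell
set -eu
dir=/tmp/trust-store-demo
mkdir -p "$dir"

# Throwaway CA cert standing in for your real one.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=Demo CA" -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# Debian/Ubuntu (needs root, hence commented out):
#   sudo cp ca.crt /usr/local/share/ca-certificates/demo-ca.crt
#   sudo update-ca-certificates

# Android's system store expects the cert named after its old-style
# subject hash (e.g. 9a5ba575.0), computed like this:
hash=$(openssl x509 -in "$dir/ca.crt" -noout -subject_hash_old)
cp "$dir/ca.crt" "$dir/$hash.0"
echo "$hash.0"
```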

    So for simplicity’s sake, if I really wanted valid certs for my internal services, I’d use subdomains of an actual, purchased (more like rented…) domain name (e.g. service-name.internal.example.org), and get the certs from Let’s Encrypt (using the DNS challenge, or the HTTP challenge on a public-facing server, then sync the certificates to the actual servers that need them). It’s not ideal, but still better than the certificate racket system we had before Let’s Encrypt.



  • get the certificates from Let’s Encrypt manually

    https://httpd.apache.org/docs/2.4/mod/mod_md.html - just add MDomain myapp.example.org to your config and it will obtain Let’s Encrypt certs automatically
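    A minimal mod_md sketch of what that looks like in practice (myapp.example.org is a placeholder that must resolve publicly; mod_md, mod_ssl and mod_watchdog need to be enabled):

```apache
# mod_md handles the ACME HTTP-01 challenge and renewals itself.
MDomain myapp.example.org
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName myapp.example.org
    SSLEngine on
    # No SSLCertificateFile needed - mod_md supplies the certs.
</VirtualHost>
```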

    it’s kind of a pain in the ass every time I add something new.

    You will have to do some reverse proxy configuration every time you add a new app, regardless of the method (RP management GUIs are just fancy frontends on top of the config file, and “auto-discovery” solutions like traefik/caddy require you to add your RP config as docker labels). The way I deal with it is having a basic RP config template for new applications [1]. Most of the time ProxyPass/ProxyPassReverse is enough, unless the app documentation says otherwise.
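    A hedged version of such a template - hostname, cert paths, and the backend port 3000 are all placeholders to adjust per app (requires mod_proxy, mod_proxy_http, mod_ssl):

```apache
<VirtualHost *:443>
    ServerName newapp.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/newapp.crt
    SSLCertificateKeyFile /etc/ssl/private/newapp.key

    # Forward everything to the backend app and rewrite its
    # redirect/Location headers on the way back.
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```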







  • This is also what I do, weekly. It’s one of the cheapest (cheap SATA drive and USB enclosure, pay once) and most reliable methods, and arguably one of the most secure (the offsite backup drive is also offline most of the time).

    A simple script on my desktop sends a desktop notification reminding me to plug in the USB drive; once it’s mounted, backups get pulled from my servers to the external disk, then I get a notification to unplug the drive and store it away. There are about 15 minutes every week where all backups are in the same place. To be extra safe, use 2 drives and rotate them each week.
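    The flow of such a script might look like this - a runnable sketch with local stand-ins: the “server” and “drive” directories replace the real SSH source and USB mount point, the notify-send/mountpoint steps are commented out so it runs anywhere, and cp -a stands in for rsync to keep the demo dependency-free:

```shell
set -eu
demo=/tmp/offline-backup-demo
mkdir -p "$demo/server/data" "$demo/drive"
echo "important stuff" > "$demo/server/data/notes.txt"

# notify-send "Backups" "Plug in the external backup drive"
# ...wait until the drive shows up at its mount point:
#   until mountpoint -q /mnt/backup-drive; do sleep 10; done

# Pull the backups onto the drive (rsync -a --delete over SSH
# in real life; cp -a here so the demo is self-contained).
cp -a "$demo/server/data" "$demo/drive/"

# notify-send "Backups" "Done - unplug the drive and store it away"
ls "$demo/drive/data"
```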