I’ve never had so much fun self-hosting. A decade or so ago I was hosting things on Linode and running all kinds of servers for myself, but with the rise of cloud services I ended up just giving everything to Google. Then I noticed how popular this community is on Reddit/Lemmy, and now it’s my new addiction.

I’m a software engineer and have plenty of experience deploying to AWS/GCP, so my head has been buried in the sand with these cloud providers. Now that I’m looking around, there are things like Nextcloud, Pi-hole, and Portainer, all set up behind Cloudflare Zero Trust… I feel like I’m living the dream of being able to deploy my own services with proper authentication, and it’s so much fun.

Reviving old hardware to act as local infra is so badass; it feels great turning on old machines that were collecting dust. I’m now trying to convince my brother to do monthly hard-drive swaps with me so I have some off-site backup redundancy without needing to back up to the cloud.

Sorry if this feels ranty but I just can’t get over how awesome this is and I feel like a kid again. Cheers to this awesome community!

EDIT: Also just found Fission and OpenFaaS, self-hosted serverless functions. I’m jumping with joy right now!

  • zebus@kbin.social · 1 year ago

    Yeah, between the enshittification of the internet and how far self-hosting software has come, it’s a great time to self-host, and it will only keep getting better.

    Self-hosting, Reddit drama, kbin: all of this makes it seem like the internet is having a sort of grassroots, back-to-basics movement, which I’m all for lol.

  • AusatKeyboardPremi@lemmy.world · 1 year ago

    Saw this post on “All”. Last I checked (sometime in 2019), self-hosting was a fairly involved process.

    Has the process become simple enough for a complete beginner like me to start self-hosting services on, say, a Raspberry Pi?

    If yes, can you please point me to a good resource/wiki?

    • dustojnikhummer@lemmy.world · 1 year ago

      Actually, I would argue the simplest way to self-host today is TrueCharts.

      The problem is that when it breaks, you’re SOL, because you didn’t build it yourself and have no clue how it works.

    • zebus@kbin.social · 1 year ago
      1. Follow the Docker install guide for the Raspberry Pi.
      2. Browse awesome-selfhosted and find services that seem interesting to you, or ask for recs here.
      3. Follow the project’s guide to do a Docker install.
      4. (Bonus) Set up a reverse proxy like Nginx Proxy Manager so you can access your services with URLs.
      5. (Bonus) Set up a domain and a service such as Tailscale so you can access your services safely from outside your home.
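
      For step 3, a first docker-compose.yml might look something like this (Pi-hole is just an example here; the project’s own guide has the authoritative settings):

      ```yaml
      version: "3"
      services:
        pihole:
          image: pihole/pihole:latest
          ports:
            - "53:53/udp"    # DNS
            - "8080:80"      # web UI on host port 8080
          environment:
            - TZ=Europe/London
          volumes:
            - ./etc-pihole:/etc/pihole
          restart: unless-stopped
      ```

      Then `docker compose up -d` in the same directory brings it up.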
      • AusatKeyboardPremi@lemmy.world · 1 year ago

        Thanks for the steps!

        I remember steps 4 & 5 were the ones that made me drop the idea. They involved a lot of configuration.

        I will take another look; hopefully these have become simple enough.

        • Rising5315@kbin.social · 1 year ago

          I set up a WireGuard VPN and took down all my reverse proxies, as it feels more secure and is easier to maintain.

          From what I’ve heard, Tailscale is a step easier still. So you could VPN into your network rather than accessing the services via URL.
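
          For reference, a wg-quick client config is pretty short. Something like this (keys, addresses, and hostname are placeholders):

          ```
          [Interface]
          PrivateKey = <client-private-key>
          Address = 10.0.0.2/32

          [Peer]
          PublicKey = <server-public-key>
          Endpoint = home.example.com:51820
          AllowedIPs = 192.168.1.0/24, 10.0.0.0/24
          PersistentKeepalive = 25
          ```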

        • zebus@kbin.social · 1 year ago

          Np, I’d say DM me if you have any questions, but I dunno if you can message between Lemmy and kbin haha

      • Kaldo@kbin.social · 1 year ago

        Honestly I’ve never used Docker properly, and the one time I tried it for the *arr stack I ran into many issues with access to storage drives and connectivity between the different services. Does it actually help with anything on an RPi? I thought it was good enough to just install the RPi OS and then install other services on it normally?

        • zebus@kbin.social · 1 year ago

          Nope, do whatever suits you!

          I would say tho that the example you gave is one of the infamous cases where Docker is more difficult to set up than going without, because the file locations of your movies etc. need to match between containers. When I set it up I found a really good guide that not only explained how to set it up but also explained the logic and reasoning behind the issue.

          https://wiki.servarr.com/docker-guide#consistent-and-well-planned-paths

          Another good guide about the issue:
          https://trash-guides.info/Hardlinks/How-to-setup-for/Docker/

          The reason I’d initially recommend Docker to a beginner is that it keeps everything clean and organized, it’s easy to undo mistakes while learning, and I find some apps are easier to set up with Docker because they come with their dependencies already installed and configured properly.

  • palitu@lemmy.perthchat.org · 1 year ago

    And it can get really low-effort too.

    I do very little maintenance as I just don’t have time at the moment. Everything just runs.

    I love Paperless, Immich, and Mealie as my top apps, with Nginx Proxy Manager dealing with the proxying.

  • cnk@kbin.dk · 1 year ago

    Absolutely! I’ve been enjoying it a lot too. Hosting Mastodon, Matrix, kbin, and a couple of game servers now from my basement 🙂

  • burak@lemmy.ml · 1 year ago

    Noob here in terms of self-hosting. How do you self-host multiple apps? Wouldn’t it get unmanageable at some point?

    • Alexffjeg@lemmy.world · 1 year ago

      Yeah, if you hosted them all as directly installed services it would be pretty hard to manage, but if you’re running them as containers with some management software, it’s easy. I have a very simple setup with Portainer and docker-compose, and it’s no problem for me to manage about 10 services. I don’t think I’ll be adding more in the near future, but even if I did, it still wouldn’t be a problem.

  • blackstrat@lemmy.fwgx.uk · 1 year ago

    If you want to host things and have them accessible from outside your home, then I’d start with getting a domain and a static IP, pointing the DNS at your IP, and making sure your DNS provider is supported by Let’s Encrypt DNS authentication.

    Then set up nginx as a reverse proxy and get Let’s Encrypt set up with auto-renewal. That way you can have secure HTTPS connections to your home.

    Then install Docker Compose, fire up a service, and configure nginx to proxy to it.
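
    As a sketch, you end up with roughly one nginx server block per service; something like this (hostname, backend port, and cert paths are placeholders):

    ```nginx
    server {
        listen 443 ssl;
        server_name app.example.com;

        # Paths as laid out by certbot's auto-renewal
        ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

        location / {
            # Forward to the container published on this port
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```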

    • Spike@discuss.tchncs.de · 1 year ago

      I usually cut down on domain/DNS costs by using a free dynamic DNS service called Duck DNS. It works super well, supports Let’s Encrypt, and gives you sub-subdomains (for example, you could have https://git.$username.duckdns.org).

      As an IT noob, I’ve found that Caddy 2 is about as batteries-included and boilerplate-free as it gets, which suits me because I have no idea what I’m doing. So I just let Caddy handle my encryption and reverse proxying to my actual server.
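
      For illustration, the Caddyfile entry for that setup can be as short as this (the backend port is a placeholder); Caddy fetches and renews the certificate automatically:

      ```
      git.$username.duckdns.org {
          reverse_proxy localhost:3000
      }
      ```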

      I’m an embedded software dev who has only touched Ethernet protocols at a surface level, because we haven’t needed them in previous projects, so I’m a bit lost on how to do cloud stuff. So having all these great tools available for free, letting me try things out and connect to my media servers and such from outside, is awesome!

      • SpaceAape@lemmy.world · 1 year ago

        My old cheap Asus N66U router has a free DynDNS service built in. Super easy to set up. I use it to host a Jellyfin setup, and I’m about to set up a torrent server and a Nextcloud server. I ran an ownCloud server a few years back and loved having it.

  • Tired8281@lemmy.world · 1 year ago

    Docker is hurting my progress. I just can’t seem to wrap my head around it. Is there a Docker for Dummies?

    • Karlmit@lemmy.world · 1 year ago

      I learned the basics of Docker by using Synology and Unraid. They make setting up Docker apps really easy.

    • BetterNotBigger@lemmy.world (OP) · 1 year ago

      Are you having trouble learning it or understanding what it’s used for? Much of learning Docker also comes with understanding some basics of software deployment like environment variables, ports and volumes. Happy to help answer any questions because it’s an extremely powerful tool once it starts clicking.

    • pontiffkitchen0@lemmy.world · 1 year ago

      Is there a specific part that you’re having trouble with? Is it more how it works under the hood, or more about using it to spin up containers? I can try to answer any questions and post some how-tos for you.

      • Tired8281@lemmy.world · 1 year ago

        I think I just need a general overview. Something about the concept isn’t clicking for me, and it makes it hard for me to learn how to use it when I fundamentally don’t get it. Is there a really good “Introduction to Docker and the tools people use with it” that I haven’t found?

        • Glitchington@lemmy.world · 1 year ago

          Say you install an app with apt, and it needs a dependency that breaks your setup. With Docker you still use your OS, but the dependencies are containerized. You can also better organize which containers use your computer’s network and which use a virtual network, where you can remap an incoming port to avoid conflicts.

          Containers are like VMs, but for an application instead of a whole OS, though you can put multiple apps in one container. That’s good if they need to share files.

          For a more visual approach, look into Portainer. It gives you an admin page you can open in your browser to manage docker containers.
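
          To make the port bit concrete, this is the kind of thing I mean (using the stock nginx image as an example):

          ```shell
          # Both containers listen on port 80 internally, but are
          # published on different host ports, so they don't conflict.
          docker run -d --name web1 -p 8080:80 nginx
          docker run -d --name web2 -p 8081:80 nginx
          ```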

          • Tired8281@lemmy.world · 1 year ago

            I actually have Portainer set up and running, and I even spun up a few simple containers in it. Unfortunately I did so by following a guide to complete a specific task. I completed the task successfully, but now I have a Portainer install that I don’t understand in the slightest, and I don’t know how to update it or any of the containers in it, or really do anything that wasn’t covered in the guide I followed (which I now cannot find). I found a YouTube video that tries to explain Portainer, but I don’t know Docker terminology well enough to understand what they’re saying, and I haven’t found a Docker video simple enough to bring me up to speed.

            • DrWeevilJammer@lm.rdbt.no · 1 year ago

              The easiest way to think about docker is to consider it a type of virtual machine, like something you’d use VirtualBox for.

              So let’s say you run Windows but want to try out Linux. You could install Ubuntu in a VirtualBox VM, then install software that works on Ubuntu inside that VM, and it’s all separate from Windows.

              Docker is similar in that a Docker container for a piece of software often includes an entire operating system’s worth of files, complete with the correct versions of all the libraries the software needs to function. This all lives in a sandbox/container that doesn’t really interact with the host operating system.

              As to why this is convenient: let’s say you have a computer running Ubuntu natively/bare metal. It has a certain version of Python installed that you need to run your applications. But there’s some new software you want to try that uses a later version of Python, which would break your other apps if you upgraded.

              The developer of that software makes a Docker version available. There’s a docker-compose.yml file that specifies things like the port the application will be available on, the time zone your computer is in, the location of the image on Docker Hub, etc. You can modify this file if you like, and when you’re done, you type docker compose up -d in the terminal (in the same directory as the docker-compose.yml file).

              Docker will then read the compose file, download the required files from the repository, extract them, set up the network and the web server, and configure everything else specified in the compose file. Then you open a browser and type in the address of the machine the compose file is on, followed by the port number from the compose file (e.g. http://192.168.1.100:5000), and boom, there’s your software.

              You can use the new software with the newer version of python at the same time as the old stuff installed directly on your machine.

              You can leave it running all the time, or bring it down by typing docker compose down. Need to upgrade to a new version? Bring the container down, type docker compose pull, which tells docker to pull the latest version from the repository, then docker compose up -d to bring the updated version back up again.
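
              To make that concrete, such a docker-compose.yml might look roughly like this (the image name, port, and time zone are invented for illustration):

              ```yaml
              version: "3"
              services:
                newapp:
                  image: someuser/newapp:latest   # pulled from Docker Hub
                  ports:
                    - "5000:5000"                 # host:container
                  environment:
                    - TZ=America/Vancouver        # time zone
                  restart: unless-stopped
              ```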

              Portainer is just a GUI that runs docker commands “under the hood”.

        • SkyNTP@lemmy.ml · 1 year ago

          I think the real benefits of Docker don’t become unquestionably obvious until you’ve tried to manage more than one installation of some kind of server software on the same machine and inevitably learned the hard way that this comes with a lot of problems and downsides.

          • From simple things: if the environment needs a restart, you can restart just the container, without rebooting the machine or interrupting other applications.
          • To seriously dangerous and problematic things: like configuring your system to work with your new application, only to realize that this configuration breaks your other server software.
          • Tired8281@lemmy.world · 1 year ago

            So far I’ve avoided learning about Docker by just buying another old end-of-life Chromebook whenever I want to run something new. Works pretty well, except for the giant pile of Chromebooks behind my TV.

            • Tayphix@lemmy.world · 1 year ago

              I would really recommend just playing around with Docker until you understand it rather than buying old hardware for each service.

  • poVoq@slrpnk.net · 1 year ago

    Fiber-optic home internet has also really improved available upload speeds, which is great for self-hosters.