At the moment I’m using Cloudflare to provide SSL for my self-hosted site. The site sits behind a residential connection that blocks incoming traffic on commonly used ports, including 80 and 443. It’s a perfectly reasonable solution that does what I want, but I’m looking to try something different.

What I would like to try is using Let’s Encrypt on a non-standard port. I understand there are plenty of good reasons not to do this, mainly that some places, such as workplaces, may block higher-numbered ports for security reasons. That’s fair, but I’m still interested in learning how to encrypt uncommon ports with Let’s Encrypt.

Currently I am using Nginx Proxy Manager (NPM) to handle Let’s Encrypt certificates. It’s able to complete the DNS challenge required to prove I own the domain name, and it handles automated certificate renewals as well. Right now NPM acts as a reverse proxy, guiding outside connections arriving from Cloudflare on port 5050 to port 80 on NPM. From there the connection is proxied locally to port 81, which is the admin web page for NPM (I’m just using it as a page to test whether the connection is secured).
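
To show what I mean in concrete terms, here is a rough sketch of what a proxy host with SSL on a non-standard port boils down to in plain nginx (NPM generates something along these lines internally; the hostname, port numbers and certificate paths below are placeholders, not copied from my actual config):

    # Sketch only: terminate TLS on a non-standard port and proxy to a local service
    sudo tee /etc/nginx/conf.d/nonstandard-port.conf >/dev/null <<'EOF'
    server {
        # TLS is terminated here, on the non-standard port
        listen 5050 ssl;
        server_name example.com;

        # Let's Encrypt certificate obtained via the DNS challenge
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            # Traffic to the backend stays plain HTTP on the local machine
            proxy_pass http://127.0.0.1:81;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
    EOF

As I understand it, the certificate only has to match the domain name, not the port, so listening on 5050 instead of 443 shouldn’t change anything about the TLS side of things.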

Whenever I enable Let’s Encrypt SSL and try to connect to my site, the connection times out and nothing happens. I’m not sure whether Let’s Encrypt expects to reach ports 80/443 or whether something in my reverse proxy settings breaks the encryption along the way. Most discussions just assume ports 80/443 are open, which is fair since that’s the most common situation. The few sites that do discuss uncommon ports are either years out of date or people reporting success without sharing any details. I’m about at the end of what I can find by searching.
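
One thing I want to rule out is the issuance side. Since NPM uses the DNS challenge, my understanding is that Let’s Encrypt only checks a DNS TXT record and never connects to my server at all, so certificate issuance can be tested completely outside NPM. This is only a sketch of how I might do that, assuming certbot with the official dns-cloudflare plugin and a Cloudflare API token (paths are placeholders):

    # Sketch only: issue a certificate via the DNS challenge, no inbound ports involved
    sudo apt install certbot python3-certbot-dns-cloudflare

    # Credentials file holding a Cloudflare API token with DNS edit permission
    sudo mkdir -p /root/.secrets
    sudo tee /root/.secrets/cloudflare.ini >/dev/null <<'EOF'
    dns_cloudflare_api_token = YOUR_API_TOKEN
    EOF
    sudo chmod 600 /root/.secrets/cloudflare.ini

    # Request the certificate; the proof happens entirely through DNS records
    sudo certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
      -d DOMAINNAME.COM

If that succeeds, the timeout is probably somewhere in the proxy/port-forwarding chain rather than in Let’s Encrypt itself.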

What I’m hoping to learn from all this is how encryption and reverse proxies work together, because those two things have been a struggle for me to understand throughout this whole learning process. I would appreciate it a lot if anyone had any resources or experiences to share about this.

  • LedgeDrop@lemm.ee · 7 days ago

    I’ve got a similar setup and everything works, so I can confirm that your assumptions are sound.

    My solution is Kubernetes-based, so I use cert-manager to issue the Let’s Encrypt certificate (using DNS as the verification mechanism), which then gets fed into a Traefik reverse proxy. Traefik runs on a non-standard port, which I can access from the outside world.
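
    The issuer side of that looks roughly like the sketch below (trimmed down, with the names and email as placeholders; it assumes the Cloudflare API token is already stored in a Kubernetes Secret):

        # Sketch of a cert-manager ClusterIssuer solving the ACME challenge via Cloudflare DNS
        kubectl apply -f - <<'EOF'
        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: letsencrypt-dns
        spec:
          acme:
            server: https://acme-v02.api.letsencrypt.org/directory
            email: you@example.com
            privateKeySecretRef:
              name: letsencrypt-dns-account-key
            solvers:
              - dns01:
                  cloudflare:
                    apiTokenSecretRef:
                      name: cloudflare-api-token   # Secret holding the API token
                      key: api-token
        EOF

    Traefik then serves the resulting certificate Secret on whatever entrypoint port is configured; nothing in that process cares that the port isn’t 443.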

    I’d suggest tearing your current system down and verifying that everything is configured correctly.

    For example:

    • Take a look at the SSL cert. Is it generated properly?
    • Look at the reverse proxy. Is it using the proper SSL cert, and is the cert properly formatted? (I’ve found curl --verbose --insecure https://... to be helpful; see the sketch after this list.)
    • Maybe add a static file (e.g. robots.txt) to nginx. This would let you see whether the problem is between the outside world and nginx or between nginx and your service.
    • You can also use the “snake oil” cert in a pinch. It’s a self-signed (untrusted) SSL cert, but it would let you confirm that your nginx is properly configured, which would point to the issue being the Let’s Encrypt cert (or that process/payload).
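
    To put concrete commands to those checks, something along these lines should work (the hostname and port are placeholders for whatever your proxy listens on):

        # Check which certificate the proxy actually serves on the non-standard port
        openssl s_client -connect example.com:5050 -servername example.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer -dates

        # Talk to the proxy directly, ignoring trust errors, to see the TLS handshake and headers
        curl --verbose --insecure https://example.com:5050/robots.txt

        # Generate a throwaway self-signed ("snake oil") cert to rule out the Let's Encrypt side
        openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
          -keyout /tmp/snakeoil.key -out /tmp/snakeoil.crt \
          -subj "/CN=example.com"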

    … and not to rob you of this experience, but you might want to look into Cloudflare Tunnels. They let you run services inside your network that are exposed/accessible directly through Cloudflare. It’s entirely secure (actually more so than your proposed system) and you don’t need to mess around with SSL.

    • confusedpuppy@lemmy.dbzer0.com (OP) · edited · 5 days ago

      I’ll give your suggestions a try when I get the motivation to try again. I’ve sort of burnt myself out at the moment and would like to get on with other stuff for now.

      I’m actually already using a Cloudflare Tunnel with SSL enabled; that’s how I got SSL working in the first place.

      For the curious, here are the steps I took to get that working:

      This is on a Raspberry Pi 5 (arm64, Raspberry Pi OS/Debian 12)

      # Cloudflared -> Install & Create Tunnel & Run Tunnel
                       -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/
                          -> Select option -> Linux
                          -> Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
                    -> Run as a service
                       -> Open new terminal
                       -> sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
                       -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/
                    -> Configuration (Optional) -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
                       -> sudo systemctl restart cloudflared
                    -> Enable SSL connections on Cloudflare site
                       -> Main Page -> Websites -> DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
                          -> SSL/TLS -> Edge Certificates -> Always Use HTTPS: On -> Opportunistic Encryption: On -> Automatic HTTPS Rewrites: On -> Universal SSL: Enabled
      

      Cloudflared complains about ~/.cloudflared/config.yml and /etc/cloudflared/config.yml not matching. I just edit ~/.cloudflared/config.yml and run sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml again followed by sudo systemctl restart cloudflared whenever I make any changes.
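
      For reference, the relevant part of ~/.cloudflared/config.yml only needs a few lines. Something like this sketch (the tunnel UUID, username, hostname and local service are placeholders to adjust for your own setup):

        # Minimal ~/.cloudflared/config.yml for a locally managed tunnel (placeholders throughout)
        tee ~/.cloudflared/config.yml >/dev/null <<'EOF'
        tunnel: <Tunnel-UUID>
        credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
        ingress:
          - hostname: DOMAINNAME.COM
            service: http://localhost:80   # wherever the local service (NPM in my case) listens
          - service: http_status:404       # catch-all for hostnames that don't match
        EOF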

      The configuration step is just there as a reference for myself; it’s not necessary for a simple setup.

      The tunnel is nice and convenient, and it does the job well. I just have a strong personal preference for not depending on large organizations. I’ve installed Timeshift for backup management so I can easily revisit this topic later when my brain is ready.