What is everyone doing? SELinux? AppArmor? Something else?

I currently leave my Nextcloud exposed to the Internet. It runs in a VM behind an nginx reverse proxy on the VM itself, and my OPNsense router runs nginx with WAF rules in front of that. I enforce 2FA and don't allow sign-ups.

My goal is protecting against ransomware and zero-days (as much as possible). I don't click random links in emails or anything like that, but I'm not sure how people get hit with ransomware. I keep Nextcloud updated frequently (I'm subscribed to the RSS update feed), and the VM updates every day and reboots when necessary. I'm running the latest php-fpm, and since that comes from the repos it gets updated too. HTTPS on the LAN with certificates maintained by my router, and Let's Encrypt certs for the Internet side.

Besides hiding this thing behind a VPN (which I'm not prepared to do currently), is there anything else I'm overlooking?

  • Björn Tantau@swg-empire.de · 11 months ago

    For protection against ransomware you need backups. Ideally ones that are append-only where the history is preserved.

    • thisisawayoflife@lemmy.world (OP) · 11 months ago

      Good call. I do some backups now, but I should formalize that process. Any recommendations on self-hosted packages that can handle the append-only functionality?

      • 𝕽𝖔𝖔𝖙𝖎𝖊𝖘𝖙@lemmy.world · 11 months ago (edited)

        I use and love Kopia for all my backups: local, LAN, and cloud.

        Kopia creates snapshots of the files and directories you designate, then encrypts these snapshots before they leave your computer, and finally uploads these encrypted snapshots to cloud/network/local storage called a repository. Snapshots are maintained as a set of historical point-in-time records based on policies that you define.

        Kopia uses content-addressable storage for snapshots, which has many benefits:

        Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.

        Multiple copies of the same file will be stored once. This is known as deduplication and saves you a lot of storage space (i.e., saves you money).

        After moving or renaming even large files, Kopia can recognize that they have the same content and won’t need to upload them again.

        Multiple users or computers can share the same repository: if different users have the same files, the files are uploaded only once as Kopia deduplicates content across the entire repository.

        There are a ton of other great features, but those are the most relevant to what you asked.
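
        To give a flavour of the CLI, here's a minimal sketch (the paths and retention values are placeholders, not my actual setup):

        ```sh
        # create an encrypted repository on local or NAS storage
        kopia repository create filesystem --path /mnt/backups/kopia-repo
        # snapshot a directory; later runs only upload changed content
        kopia snapshot create /srv/nextcloud/data
        # keep a bounded history via a retention policy
        kopia policy set /srv/nextcloud/data --keep-daily 7 --keep-weekly 4
        # list the point-in-time snapshots you can restore from
        kopia snapshot list
        ```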

      • patchexempt@lemmy.zip · 11 months ago

        I've used rclone with Backblaze B2 very successfully. rclone is easy to configure and can encrypt everything locally before uploading, and B2 is dirt cheap and has retention policies so I can easily manage (per storage pool) how long deleted/changed files should be retained. Works well.

        Also, once you get something set up, make sure to test-run a restore! A backup solution is only good if you make sure it works :)
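
        For reference, a rough sketch of that workflow, assuming a B2 remote and a crypt remote layered on top have already been set up with `rclone config` (the remote, bucket, and folder names here are placeholders):

        ```sh
        # push an encrypted copy of the data to B2 via the crypt remote
        rclone sync /srv/nextcloud/data b2crypt:nextcloud-backup --progress
        # verify the remote against the source
        # (crypt remotes can't compare hashes; add --download to verify contents)
        rclone check /srv/nextcloud/data b2crypt:nextcloud-backup
        # and periodically test-restore a subset into a scratch directory
        rclone copy b2crypt:nextcloud-backup/Documents /tmp/restore-test
        ```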

        • thisisawayoflife@lemmy.world (OP) · 11 months ago

          As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups; otherwise it's an exercise in futility.

      • tuhriel@infosec.pub · 11 months ago

        Restic can do append-only when you use its rest-server (easily deployed in a Docker container).
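
        Roughly like this (host names, ports, and paths below are placeholders; rest-server can be run as a plain binary or via its Docker image):

        ```sh
        # on the backup host: serve a repo over HTTP and reject deletes/rewrites
        rest-server --path /srv/restic-repo --listen :8000 --append-only
        # on the Nextcloud VM: initialise the repo and back up to it
        restic -r rest:http://backup-host:8000/nextcloud init
        restic -r rest:http://backup-host:8000/nextcloud backup /srv/nextcloud/data
        ```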

  • beerclue@lemmy.world · 11 months ago

    Not just for Nextcloud: I recommend setting up CrowdSec for any publicly facing service. You'd be surprised by the number of bots and script kiddies out there trying their luck…
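
    As a rough sketch of what the setup looks like once the CrowdSec agent is installed (the collection and bouncer names here are the common upstream ones; adjust for your stack):

    ```sh
    # install a collection of scenarios that knows how to parse nginx logs
    cscli collections install crowdsecurity/nginx
    # register a bouncer, i.e. the component that actually blocks offending IPs
    cscli bouncers add nginx-firewall-bouncer
    # see which IPs are currently being blocked
    cscli decisions list
    ```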

    • thisisawayoflife@lemmy.world (OP) · 11 months ago

      One of my next steps was hardening my OPNsense router, since it handles all the edge reverse proxy duties, so an IDS was on the list. I'm digging into CrowdSec now; it looks like there's an OPNsense integration. Thanks for the tip!

      • johntash@eviltoast.org · 11 months ago

        IIRC, CrowdSec is like Fail2ban but also blocks IPs reported by other servers, not just the ones attacking your own. Kind of like a distributed Fail2ban, I guess?

      • TwinHaelix@reddthat.com · 11 months ago

        My recollection is that Fail2ban has some default settings but is mostly reactive, blacklisting things it observes trying to get in. CrowdSec behaves in a similar vein but, as the name implies, includes a lot of crowdsourced rules and preventative measures.

      • Comptero@feddit.ch · 11 months ago

        In my understanding, Fail2ban will block IPs that are detected brute-forcing or using known exploits.

        CrowdSec shares those IPs via a blocklist with all subscribed systems, so you benefit from the detections of other systems, not just your own.
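
        A quick way to see the difference in practice (the jail name here is just an example):

        ```sh
        # Fail2ban: bans triggered by this host's own logs
        fail2ban-client status sshd
        # CrowdSec: local decisions plus IPs pulled in from the shared blocklists
        cscli decisions list
        ```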

  • thatsnothowyoudoit@lemmy.ca · 11 months ago (edited)

    Nextcloud isn't exposed; only a WireGuard connection allows remote access to Nextcloud on my network.

    The whole family has WireGuard on their laptops and phones.

    They love it, because using WireGuard also means they get a by-default ad-free/tracker-free browsing experience.

    Yes, this means I can’t share files securely with outsiders. It’s not a huge problem.
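
    For anyone wanting to copy this, the client-side key setup is only a couple of commands (interface and file names are placeholders; the peer still has to be added on the router/server side):

    ```sh
    umask 077
    # generate the client's private key and derive its public key
    wg genkey | tee client.key | wg pubkey > client.pub
    # once the peer config is in /etc/wireguard/wg0.conf, bring the tunnel up
    wg-quick up wg0
    # confirm a handshake with the server
    wg show
    ```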

    • BearOfaTime@lemm.ee · 11 months ago

      Tailscale has a feature called Funnel that enables you to share a resource over Tailscale with users who don't have Tailscale.

      Wonder if WireGuard has something similar (Tailscale uses WireGuard under the hood).
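
      For what it's worth, the Funnel CLI has changed a bit between releases, but on a recent client it looks roughly like this (the port is a placeholder, and Funnel has to be allowed in the tailnet's ACL policy first):

      ```sh
      # expose a local service to the public internet through Tailscale's relays
      tailscale funnel 8443
      # check what is currently being funneled
      tailscale funnel status
      ```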

  • johntash@eviltoast.org · 11 months ago

    Make sure your backups are solid and can’t be deleted or altered.

    In addition to normal backups, something like ZFS snapshots also helps and makes it easier to restore if needed.
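
    A minimal sketch, with pool/dataset and snapshot names as placeholders:

    ```sh
    # cheap, read-only point-in-time copy of the dataset
    zfs snapshot tank/nextcloud@before-upgrade
    # list what's available to restore from
    zfs list -t snapshot tank/nextcloud
    # roll the dataset back after an incident (-r discards any newer snapshots)
    zfs rollback -r tank/nextcloud@before-upgrade
    ```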

    I think I remember seeing a Nextcloud app that detects mass changes to a lot of files (like ransomware would cause). Maybe something like that would help?

    Also enforce good passwords.

    Do you have anything exposed to the internet that also has access to either nextcloud or the server it’s running on? If so, lock that down as much as possible too.

    Fail2ban or similar would help against brute force attacks.

    The VM you're running Nextcloud on should be as isolated as you can comfortably make it. E.g., if you have a camera/IoT VLAN, don't let the VM talk to it, don't let it initiate outbound connections to any of your devices, etc.
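
    On a plain Linux router that might look something like the below (addresses and subnets are placeholders; on OPNsense the equivalent is firewall rules in the GUI):

    ```sh
    # allow replies to connections the LAN/reverse proxy initiated toward the VM
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # but block the Nextcloud VM from initiating anything toward the trusted LAN
    iptables -A FORWARD -s 10.0.50.10 -d 192.168.1.0/24 -j DROP
    ```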

    You can’t entirely protect against zero day vulnerabilities, but you can do a lot to limit the risk and blast radius.

  • Decronym@lemmy.decronym.xyz (bot) · 11 months ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    DNS     Domain Name Service/System
    HTTP    Hypertext Transfer Protocol, the Web
    HTTPS   HTTP over SSL
    PiHole  Network-wide ad-blocker (DNS sinkhole)
    SSL     Secure Sockets Layer, for transparent encryption
    VPN     Virtual Private Network
    nginx   Popular HTTP server

    [Thread #394 for this sub, first seen 1st Jan 2024, 18:55]

  • lemmyvore@feddit.nl · 11 months ago

    All the measures you listed amount to nothing against a zero-day remote exploit, since those bypass the normal authentication process.

    If you're not able to use a VPN, then use an IAM layer, which requires you to log in through another method. You can use a dedicated app like Authelia/Authentik in front of the reverse proxy, or if you use nginx as the reverse proxy you also have the option of using vouch-proxy.

  • hottari@lemmy.ml · 11 months ago

    I've had my Nextcloud exposed for a long while now without any incidents (that I know of). I know automatic updates are not generally recommended, but if you want a lighter load you could use LSIO's Docker container (I use the standard db from the sample config). I run mine that way with Watchtower and can't recall an update breaking Nextcloud in recent times. Other than that, Nextcloud has a brute-force protection app, and you could consider hardening the entry points of the machine hosting Nextcloud overall (e.g. SSH).
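
    Roughly what that looks like (the paths, IDs, and ports below are placeholders; check LSIO's docs for the full set of options):

    ```sh
    docker run -d --name nextcloud \
      -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
      -p 443:443 \
      -v /srv/nextcloud/config:/config \
      -v /srv/nextcloud/data:/data \
      lscr.io/linuxserver/nextcloud:latest
    # Watchtower keeps running containers updated to the latest image
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower
    ```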

  • JustinAngel@lemmy.world · 11 months ago

    Yikes! I’d avoid leaving any services externally exposed unless they’re absolutely necessary…

    Tailscale+Headscale are pretty easy to implement these days. Since it's effectively zero trust, the tunnel becomes the encrypted channel, so there's an argument that HTTPS isn't really required unless some endpoints won't be accessing services over the tailnet. SmallStep and Caddy can be used to automatically manage certs if needed, though.

    You can even configure a PiHole (or derivative) to be your DNS server on the VPN, giving you ad blocking on the go.
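
    On the client side, pointing Tailscale at a self-hosted Headscale and accepting the tailnet's DNS (e.g. a PiHole) is a one-liner; the URL here is a placeholder:

    ```sh
    # join the self-hosted coordination server and accept its DNS settings
    tailscale up --login-server https://headscale.example.com --accept-dns=true
    ```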

    • TechLich@lemmy.world · 11 months ago

      there’s an argument that HTTPS isn’t really required…

      Tailscale is awesome, but you gotta remember that Tailscale itself is one of those services (yikes). Like all applications it's potentially susceptible to vulnerabilities and exploits, so don't fall into the trap of thinking that anything in your private network is safe because it's only available through the VPN. “Defence in depth” is a thing, and you have nothing to lose from treating your services as though they were public and having multiple layers of security.

      The other thing to keep in mind is that HTTPS is not just about encryption/confidentiality but also about authenticity/integrity/non-repudiation. A cert tells you that you are actually connecting to the service that you think you are and it’s not being impersonated by a man in the middle/DNS hijack/ARP poison, etc.

      If you’re going to the effort of hosting your own services anyway, might as well go to the effort of securing them too.

  • Possibly linux@lemmy.zip · 11 months ago

    I would move it into Docker, as that will give you an extra layer of security and simplify updates.

    From there, make sure you have backups that can't easily be deleted. Additionally, make sure your reverse proxy is set up correctly and implements proper security.