• Cethin@lemmy.zip · 1 year ago

    I doubt it. It’s the halting problem: you can’t determine, before running a program, whether it will ever halt, and there are perfectly legitimate programs that look just like the malicious ones. Maybe they’d patch it to refuse this specific string, but then you’d only have to write something that looks like it does useful work yet never halts.
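
    As an illustration (purely a sketch, nothing specific to any particular chatbot): a perfectly legitimate script can have the exact fork-and-recurse shape of a fork bomb. The only difference is that its recursion bottoms out, and whether that happens is precisely what you can’t check before running it.

        #!/usr/bin/env bash
        # walk: spawn a background subshell per subdirectory. Structurally this is
        # the fork-bomb pattern; it halts only because the directory tree is finite.
        walk() {
            local dir="$1"
            for entry in "$dir"/*/; do
                [ -d "$entry" ] || continue     # skip when the glob matches nothing
                ( walk "$entry" ) &             # recurse in a background subshell
            done
            wait                                # returns once all children finish
        }
        walk "${1:-.}"                          # default to the current directory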

    • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

      That’s why I run all my terminal commands through ChatGPT to verify they aren’t some sort of fork bomb. My system is unusably slow, but it’s AI-protected, futuristic, and super practical.

      • 🦥@lemmy.world · 1 year ago

        Seems inefficient; one should just integrate ChatGPT into Bash to check these things automatically.

        You said ‘ls’ but did you really mean ‘ls -la’? Imma go ahead and just give you the output from ‘cat /dev/urandom’ anyway.

    • Marxism-Fennekinism@lemmy.ml · 1 year ago

      They could always do what Android does and show a prompt to force-close an app that hangs for too long, or ship a default subprocess limit plus an optional whitelist of programs allowed to spawn as many subprocesses as they want.
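
      A crude version of that subprocess limit already exists on Linux; a minimal sketch (the numbers are arbitrary):

          # Cap this shell session at 500 processes; once the cap is hit, fork()
          # fails with EAGAIN instead of taking the machine down.
          ulimit -u 500

          # A system-wide per-user default can go in /etc/security/limits.conf, e.g.:
          #   alice  hard  nproc  2000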

      • barsoap@lemm.ee · 1 year ago

        The thing about fork bombs is that no particular process takes up all the resources; each one is doing nothing in a minimal amount of space. You could say “okay, this group of processes is using a lot of resources” and kill it, but then you’re probably going to take down the whole user session, because the starting point is not trivial to establish. Though I guess you could just kill all shells connected to the fork morass; that won’t fix the general case, but it’s a start. OTOH I don’t think kernel devs are keen on special-case solutions.
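
        One workaround that exists today (a sketch, not a kernel-level fix; the unit name, limit, and script name are made up): run anything suspicious in its own systemd scope, so the mess has a boundary you can kill without taking down the rest of the session.

            # Run the command in its own cgroup with a hard task cap:
            systemd-run --user --scope --unit=risky -p TasksMax=200 ./possibly-forky-script.sh

            # If it bombs anyway, stop just that scope from another terminal:
            systemctl --user stop risky.scope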

        • sus@programming.dev · 1 year ago

          You don’t really have to kill every process; blocking the spawning of new user-mode processes once a limit has been reached should be enough. Combine that with a warning, and with always reserving enough resources for the kernel and critical processes to keep working, and the user has all the tools needed to find what is causing the issue and kill the responsible processes.

          While nobody really cares enough to fix these kinds of problems on a basic home computer, this is mostly a solved problem for cloud/virtualization providers.
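
          That limit-then-warn behaviour is roughly what the cgroup v2 pids controller provides, and it’s the same mechanism container runtimes expose (e.g. docker run --pids-limit). A rough sketch of the raw interface, assuming cgroup v2 is mounted at /sys/fs/cgroup with the pids controller enabled (the group name and cap are made up):

              # As root: create a group, cap it at 1000 tasks, move the current shell in.
              mkdir /sys/fs/cgroup/mysession
              echo 1000 > /sys/fs/cgroup/mysession/pids.max
              echo $$   > /sys/fs/cgroup/mysession/cgroup.procs

              # Past the cap, fork() fails with EAGAIN; the shell survives, and the
              # refusals are counted so you can see what happened:
              cat /sys/fs/cgroup/mysession/pids.events    # e.g. "max 37"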