I’ve been playing around with ollama. Given that you download the model yourself, can you trust it isn’t sending telemetry?

  • marcie (she/her)@lemmy.ml · 2 days ago

    you can check the process to see if it’s communicating at all. none of the big ones do. it’s possible someone could be fucking with the file though; before the safetensors format this was a big issue, and it still sort of is afterwards. only DL from reputable sources
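A minimal sketch of that check on Linux, assuming the process is literally named `ollama` (adjust the name for your setup):

```shell
# Find the ollama process and list any network sockets it has open (Linux).
# The process name "ollama" is an assumption; adjust for your setup.
pid=$(pgrep -x ollama | head -n1)
if [ -z "$pid" ]; then
  echo "ollama is not running"
else
  # -t/-u: TCP/UDP sockets, -n: numeric addresses, -p: owning process.
  # Empty grep output means no open network connections for that PID.
  ss -tunp 2>/dev/null | grep "pid=$pid" || echo "no open sockets for PID $pid"
fi
```

Watching it while the model is actually generating is the interesting part; a one-off check can miss intermittent traffic.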

    • Jack@slrpnk.net · 2 days ago

      Can’t you run it from a container? I guess that will slow it down, but it will deny access to your files.

      • acockworkorange@mander.xyz · 2 days ago

        Containers don’t really slow down apps significantly. It’s not a VM; it’s still a native process running under your kernel, just in its own namespaces with restricted access to hardware.

        • Jack@slrpnk.net · 2 days ago

          That is true for Linux and maybe Mac, but on Windows I think they have a bit more overhead. Again, I agree that in most cases it is not significant.

          • acockworkorange@mander.xyz · 1 day ago (edited)

            Is the overhead because of the containers, or because you’re running something that is meant to run on Linux through a conversion layer like MinGW?

            • stink@lemmygrad.ml · 1 day ago

              Windows > Windows Subsystem for Linux (WSL) Ubuntu > docker container

              I think WSL 2 actually runs Linux in a lightweight virtual machine. I’ve tried getting my own LLM instance running on my Windows machine but it’s been such a pain.

      • marcie (she/her)@lemmy.ml · 2 days ago

        yeah you could. though i don’t see any evidence that the large open source llm programs like jan.ai or ollama are doing anything wrong with their programs or files. chucking it in a sandbox would solve the problem for good though
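A minimal sketch of the sandbox approach with Docker, assuming the official `ollama/ollama` image; the volume name `ollama` is arbitrary, and the models must already be in that volume, since `--network=none` means the container cannot download anything:

```shell
# Run ollama in a container with no network access at all.
# --network=none gives the container no interfaces except loopback,
# so it cannot phone home even if it wanted to.
docker run -d --rm \
  --network=none \
  -v ollama:/root/.ollama \
  --name ollama-offline \
  ollama/ollama
```

Note this only cuts off the network; to also restrict file access, avoid bind-mounting host directories you care about.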

        • SeekPie@lemm.ee · 2 days ago (edited)

          You could use the “Alpaca” flatpak and remove its internet access with Flatseal after having downloaded the model. (Linux)

          Or deny the app’s access to the internet in app settings. (Android)
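The same cutoff can be done from the command line with `flatpak override` instead of Flatseal; a sketch, assuming Alpaca’s app ID is `com.jeffser.Alpaca` (verify with `flatpak list`):

```shell
# Remove network access from the Alpaca flatpak (per-user override).
# The app ID com.jeffser.Alpaca is an assumption; check it with: flatpak list
flatpak override --user --unshare=network com.jeffser.Alpaca
```

Running `flatpak override --user --show com.jeffser.Alpaca` afterwards should confirm the network permission is gone.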