• LarmyOfLone@lemm.ee · 1 month ago

    Of course, it’s an image generated by AI that has been trained to generate images that can’t be distinguished from reality. So of course the AI thinks it’s real lol

    • DesolateMood@lemm.ee · 1 month ago

      It’s been trained to generate images that it thinks* can’t be distinguished from reality

        • Natanael@infosec.pub · 1 month ago

          Not necessarily, but the errors would be less obvious, or just weirder, since the model would have spent more time in training

            • Natanael@infosec.pub · 1 month ago

              Weirder in that it gets better at “photorealism” (textures, etc.), but the subjects might be nonsensical. Teaching it only how to evade automated detection won’t teach it to understand what scenes mean.

              • LarmyOfLone@lemm.ee · 29 days ago

                I believe most image-generation models are simply too small (like only 4 GB of weights). DeepSeek R1 takes about 1.5 TB of RAM at full precision (or half or a quarter of that at reduced precision) to get some semblance of “general knowledge”. So to get the “semantics” of an image right, not just the “syntax”, you’d need bigger models and probably more data describing images. Of course, do we really want that?
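
                For scale, here’s the rough weights-only arithmetic behind those numbers (the ~2B parameter count for a typical image model is a guess on my part; R1’s 671B is its published size; this ignores activations and cache):

                ```python
                # Weights-only memory footprint: parameters x bytes per parameter.
                def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
                    return params_billions * 1e9 * bytes_per_param / 1e9  # bytes -> GB

                # ~2B-parameter image model at fp16 (assumed size) -> roughly the "4 GB" above
                print(f"image model @ fp16: {model_memory_gb(2, 2):>7.0f} GB")
                # DeepSeek R1, 671B parameters (published figure)
                print(f"R1 @ fp16:          {model_memory_gb(671, 2):>7.0f} GB")   # ~1.3 TB
                print(f"R1 @ 8-bit:         {model_memory_gb(671, 1):>7.0f} GB")
                print(f"R1 @ 4-bit:         {model_memory_gb(671, 0.5):>7.0f} GB")
                ```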