• madsen@lemmy.world · 5 days ago

    Assuming the scammers use some form of LLM too, we’ve got LLMs calling other LLMs. What a fucking waste. It’s like an upscaled version of senders using LLMs to expand their emails and recipients using LLMs to summarize them.

  • chryan@lemmy.world · 5 days ago

    It’s all fun and games until the scammers use AI themselves to massively scale their operations.

    • TheFriar@lemm.ee · 5 days ago

      It’s been reported widely that it’s already happening. They use phone banks to scam, they use AI to scam. If it’s out there, it’s being used to scam.

      • Usernameblankface@lemmy.world · 4 days ago

        It’s my understanding that LLMs are thoroughly unsafe, always reporting everything they do and every input back to whoever made them. So, wouldn’t it be easy for whoever owns the LLM to see what it’s being used for, and to refuse service to scammers?

    • wabafee@lemmy.world · 5 days ago

      Good chance it’s already happening. Worst part is that both sides eat so much power.

  • Qkall@lemmy.ml · 6 days ago

    Didn’t kitboga do this like last year? I think his crypto maze is the answer.