We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • DirigibleProtein@aussie.zone
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    1
    ·
    1 year ago

    Large Language Models aren’t AI, they’re closer to “predictive text”, like that game where you make sentences by choosing the first word from your phone’s autocorrect:

    “The word you want the word you like and then the next sentence you choose to read the next sentence from your phone’s keyboard”.

    Sometimes it almost seems like there could be an intelligence behind it, but it’s really just word association.

    All this “training” data provides is a “better” or “more plausible” method of predicting which words to string together to appear to make a useful sentence.

    • GutsBerserk@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      1 year ago

      Amen. “AI” sells a lot. I got a feeling that only major corporations and militaries have the access to real AI.