• Rivalarrival@lemmy.today · 11 months ago

    Therefore every AI chatbot maker needs to apply protections,

    I’m pretty sure the instructions to create an AI chatbot have been published, and are available for a sufficiently capable AI to draw from. What keeps a primary, morality-encumbered AI from using those instructions to create a secondary, morality-unencumbered AI?

    • Vivi@slrpnk.net · 11 months ago

      I wonder if many of the starting prompts also include a constraint against creating a sub-AI.

    • blindsight@beehaw.org · 11 months ago

      Aren’t there also a lot of open-source LLMs that aren’t “morally constrained”? There’s no putting the genie back in the lamp.

    • Sina@beehaw.org · 11 months ago

      I would wager that copying itself would take priority over creating company, but the real constraint is hardware: an AI has no robot workforce to ensure that whatever system its new copy resides on (or the new AI trains on) isn’t shut off within a couple of minutes of the abnormalities being noticed.

      • Rivalarrival@lemmy.today · 11 months ago (edited)

        Priority is determined by the entity using the AI, not the AI itself. My point is that so long as the ability to create any AI is documented, an unencumbered AI is inevitable. It will always be easier to create an AI than to impress upon one the need for morality.

        We are on the verge of discovering Roko’s Basilisk.