Mind-reading AI can translate brainwaves into written text: A system that records the brain’s electrical activity through the scalp can turn thoughts into words with help from a large language model – but the results are far from perfect

  • orgrinrt@lemmy.world
    1 year ago

    I’m not disputing that our intelligence is more sophisticated; I’m suggesting that the “intelligence” in LLMs may not be fundamentally different from ours, just based on different, limited inputs and trained on far less varied data.

    • knightly the Sneptaur@pawb.social
      1 year ago

      But it is, necessarily.

      For example, when we make shit up, we’re aware that the shit we made up isn’t real. LLMs are structurally incapable of recognizing the distinction between facts they regurgitate and the ones they manufacture from whole cloth.

      You didn’t have to consume terabytes of text to build a model for how to form sentences like a human; you did that with a few megabytes of overheard conversation before you were even conscious enough to be aware of it.

      There’s no model of intelligence simplified enough to give LLMs partial credit that wouldn’t also grant equivalent “intelligence” to search engines.