Probably should’ve just asked Wolfram Alpha

  • skuzz@discuss.tchncs.de · 6 days ago

    LLMs are really fucking bad at math. They’re trying to find the statistically closest answer, not doing computation. It’s rather mind-numbingly dumb.

    • kahnclusions@programming.dev · 6 days ago

      Unfortunately a shockingly large number of people don’t get this… including my old boss who was running an AI-based startup 💀

  • notfromhere@lemmy.ml · 6 days ago

    I read it as “a third of a third plus a third is a half,” which makes sense to me. What am I missing?

    • Something Burger 🍔@jlai.lu · 6 days ago

      It’s wrong. 1/3 + (1/3 * 1/3) = 3/9 + 1/9 = 4/9. It’s close though.

      However, one third plus one half of a third is correct: 1/3 + (1/2 * 1/3) = 1/3 + 0.5/3 = 1.5/3 = 1/2.
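
      For anyone who wants to double-check both readings, here’s a quick sketch using Python’s fractions module for exact rational arithmetic (my toy verification, not part of the original comment):

      ```python
      # Verify both readings exactly, with no floating-point rounding.
      from fractions import Fraction

      third = Fraction(1, 3)

      # "a third of a third, plus a third" -> 1/9 + 3/9 = 4/9: close to, but not, 1/2
      print(third * third + third)           # 4/9

      # "half of a third, plus a third" -> 1/6 + 2/6 = 3/6 = 1/2: exactly right
      print(Fraction(1, 2) * third + third)  # 1/2
      ```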

    • Pyro@programming.dev · 8 days ago

      What, your printer doesn’t have a full keyboard under its battery? You’ve gotta get with the times, my man.

    • Otter@lemmy.ca · 7 days ago

      It sounds like some weird ritual that someone scratched into a notebook.

      𝗯𝗮𝗰𝗸 𝗼𝗳 𝗽𝗿𝗶𝗻𝘁𝗲𝗿?? under battery, m͟u͟s͟t͟ f͟i͟n͟d͟ k͟e͟y͟s͟

        • Otter@lemmy.ca · 7 days ago

          I actually sent a bunch of prompts through image generators till one gave something close to what I wanted.

          Using generative AI to try and visualize generative AI

    • just_an_average_joe@lemmy.dbzer0.com · 6 days ago

      I think that’s an issue with AI: it has been trained so heavily on complex questions that when you ask a simple one, it mistakes it for a complex one and answers it that way.

      • Kichae@lemmy.ca · 6 days ago

        It’s auto-complete. It knows that “4” is the most common substring to follow “2 + 2” in its training. It’s not actually doing addition.
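
        A toy sketch of that idea, assuming nothing more than frequency counting over a made-up corpus (real LLMs use learned token probabilities, not a literal lookup table):

        ```python
        # Toy "autocomplete": answer with the most common continuation seen in
        # training text, without ever performing the addition itself.
        from collections import Counter

        corpus = ["2 + 2 = 4", "2 + 2 = 4", "2 + 2 = 4", "2 + 2 = 22"]  # made-up data

        continuations = Counter(line.split("= ")[1] for line in corpus)
        print(continuations.most_common(1)[0][0])  # "4": the popular answer, not a computed sum
        ```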

      • sping@lemmy.sdf.org · 6 days ago

        The issue is it’s an LLM. It puts words in an order that’s statistically plausible but has no reasoning power.