The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks—in effect, to suggest code that will solve the problem.

A second algorithm then checks and scores what Codey comes up with. The best suggestions—even if not yet correct—are saved and given back to Codey, which tries to complete the program again. “Many will be nonsensical, some will be sensible, and a few will be truly inspired,” says Kohli. “You take those truly inspired ones and you say, ‘Okay, take these ones and repeat.’”
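The generate-score-select loop described above resembles a simple evolutionary search. Here is a toy sketch of that loop in Python, with a random mutation function standing in for the Codey model and a numeric scorer standing in for FunSearch's evaluator; all names and the target task are illustrative, not DeepMind's actual code:

```python
import random

random.seed(0)  # for reproducibility of this toy run

def evaluate(program):
    # Toy scorer standing in for FunSearch's evaluator: the "program"
    # here is just a list of numbers, and the score rewards hitting a
    # target sum (a placeholder for "how well does the generated code
    # solve the problem?").
    return -abs(sum(program) - 42)

def mutate(program):
    # Stand-in for the LLM proposing a new completion seeded by a
    # promising earlier one.
    child = list(program)
    child[random.randrange(len(child))] += random.choice([-1, 1])
    return child

def evolve(pop_size=20, keep=5, rounds=200):
    # Start from random suggestions.
    population = [[random.randint(0, 10) for _ in range(5)]
                  for _ in range(pop_size)]
    for _ in range(rounds):
        # Score every suggestion and keep the best few,
        # even if none is correct yet.
        population.sort(key=evaluate, reverse=True)
        survivors = population[:keep]
        # Feed the survivors back as seeds for new suggestions.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - keep)]
    return max(population, key=evaluate)

best = evolve()
```

The real system swaps the random `mutate` for an LLM that completes a program skeleton, and `evaluate` for a checker that runs and scores the generated code.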

After a couple of million suggestions and a few dozen repetitions of the overall process—which took a few days—FunSearch was able to come up with code that produced a correct and previously unknown solution to the cap set problem, which involves finding the largest size of a certain type of set. Imagine plotting dots on graph paper. The cap set problem is like trying to figure out how many dots you can put down without three of them ever forming a straight line.
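More formally, the cap set problem lives in the vector space F_3^n: a cap set is a set of points no three of which are collinear, and over F_3 three distinct points lie on a line exactly when their componentwise sum is zero mod 3. A minimal checker (my own illustration, not code from the paper):

```python
from itertools import combinations

def is_cap_set(points):
    """True if no three distinct points of F_3^n in `points` are collinear.

    Over F_3, three distinct points form a line exactly when their
    componentwise sum is congruent to zero mod 3.
    """
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# (0,0), (0,1), (0,2) lie on a line in F_3^2, so this is not a cap set:
print(is_cap_set([(0, 0), (0, 1), (0, 2)]))              # False
# This 4-point set is a cap set (4 is the maximum size in F_3^2):
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))      # True
```

Finding the largest set that passes this check as the dimension n grows is the hard part; FunSearch's contribution was a program constructing larger cap sets than previously known in certain dimensions.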

  • yesman@lemmy.world (OP) · 1 year ago

    I thought this was interesting because it’s an instance where an LLM has done something undeniably novel and unique while expanding human understanding. It’s a chink in the armor of the idea that an LLM is a “stochastic parrot” that can only regurgitate and never create.

    I’ve been toying with the idea that LLMs are showing us that what we thought of as creativity, learning, and problem solving isn’t as rarefied as we thought. We know that AI isn’t conscious; maybe consciousness isn’t as much of a prerequisite for those behaviors and that kind of cognition as we thought.

    • jacksilver@lemmy.world · 1 year ago

      I’m not so sure; it feels a lot more like the https://en.wikipedia.org/wiki/Infinite_monkey_theorem, but with a model helping limit the outputs so they are mostly usable. As the article states, it took millions of runs over a couple of days to get the results. So it’s more like brute forcing with a slightly modified genetic algorithm than anything else.

      I didn’t see a link to the full article, so maybe something more creative is happening behind the scenes, but it seems unlikely.

      • thesmokingman@programming.dev · 1 year ago

        Your interpretation is correct. There’s no new logic here, just new special cases of a problem whose general solution is still unknown. I think it’s pretty cool, and it has a lot of value in places like design theory, where getting examples to play around with while testing general solution ideas is really tough. But all it did was creatively crunch numbers.

    • lemmyvore@feddit.nl · 1 year ago

      This approach sounds more like selective breeding to me.

      If you do this with cats, selecting in each generation until you obtain a particularly fluffy cat, the cat doesn’t get the credit. Nobody says “wow, how smart are cats for achieving this”; they praise the breeder instead.

      Which is as it should be. The people who seed and select these algorithms and can recognize a breakthrough deserve the credit, not the churning machine that blindly goes through millions of permutations.