A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • FaceDeer@fedia.io · 11 days ago

    And that score is matched by GPT-5. Humans are running out of “tricky” puzzles to retreat to.

    • realitista@lemmus.org · 10 days ago

      You’re getting downvoted, but it’s true. A lot of people are sticking their heads in the sand, and I don’t think it’s helping.

      • FaceDeer@fedia.io · 10 days ago

        Yeah, “AI is getting pretty good” is a very unpopular opinion in these parts. Popularity doesn’t change the results though.

          • MangoCats@feddit.it · 10 days ago

            It’s overhyped in many areas, but it is undeniably improving. The real question is: will it “snowball” by improving itself in a positive feedback loop? And if it does, how much snow-covered slope lies in front of it to roll down?

              • kescusay@lemmy.world · 10 days ago

                It’s already happening. GPT 5.2 is noticeably worse than previous versions.

                It’s called model collapse.

                • Zos_Kia@jlai.lu · 10 days ago

                  To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. It is not related in any way to what is happening at OpenAI.

                  OpenAI made a bunch of product-design choices that basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while?”

                  • MangoCats@feddit.it · 6 days ago

                    I feel that a lot of the improvement in the recent batch of model releases comes from better vetting of their training data - basically the opposite of model collapse.

                    Nothing requires an LLM to train on the entire internet.