OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think - they only generate statistically plausible patterns (a toy sketch of what that means is at the end of this post).

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they can, there is no objective, scientifically sound examination of whether AI models help anyone produce working software faster. Given the multi-billion-dollar investments, and that there has been more than enough time to run controlled experiments, this should set off loud alarm bells.
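
To make concrete what “generate statistically plausible patterns” means mechanically, here is a toy sketch (my own illustration, not from the article): a bigram model that picks each next word purely from co-occurrence counts, with no notion of what the words mean. Real LLMs are vastly larger transformer models, not bigram tables, but the autoregressive generation loop has the same shape.

```python
# Toy sketch only: next-word generation driven purely by co-occurrence
# statistics. The corpus, names, and sampling scheme are made up for
# illustration.
import random
from collections import Counter, defaultdict

corpus = ("the model reads the prompt and the model predicts "
          "the next word and the next word follows the prompt").split()

# Count which word follows which in the "training" text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(word: str) -> str:
    """Sample a continuation weighted by how often it followed `word`."""
    counts = follows[word]
    if not counts:                      # dead end in the tiny corpus
        return "the"
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Autoregressive generation: each step consults only the statistics,
# never any model of what the words mean.
output = ["the"]
for _ in range(10):
    output.append(sample_next(output[-1]))
print(" ".join(output))
```

The output is locally plausible but meaning-free; whether scaling this loop up by many orders of magnitude amounts to “thinking” is exactly what the thread below argues about.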

  • Saledovil@sh.itjust.works · 3 days ago

    That does not follow. I can’t speak for you, but I can tell if I’m involved in a conversation or not.

    • FizzyOrange@programming.dev · 2 days ago

      And how do you know LLMs can’t tell that they are involved in a conversation?

      Unless you think there is something non-computational in the human brain, you must accept that computers are - in theory - capable of thinking, given the right software and sufficiently powerful hardware.

      Given that truth (which I think you can only avoid through religion or quantum quackery), you can’t just say “it’s only maths; it can’t be thinking” because we know that maths can think.

      Do LLMs “think”? The definition of “think” is wooly enough and we understand them little enough that it’s quite an assertion to say that they definitely don’t.

      • Saledovil@sh.itjust.works · 2 days ago

        And how do you know LLMs can’t tell that they are involved in a conversation?

        It has no memory, for one. What makes you think that it does know it’s in a conversation?

        • FizzyOrange@programming.dev · 2 days ago

          It has no memory, for one.

          It has a very short-term memory in the form of its token context, especially with something like Meta’s Coconut (a toy sketch of how that context works is at the end of this comment).

          What makes you think that it does know it’s in a conversation?

          I don’t really. Yet. But I also don’t think that it is fundamentally impossible for LLMs to think, like you seem to. I also don’t think the definition of the word “think” is so narrow that it requires that level of self-awareness. Do you think a mouse is really aware it is a mouse? What about a spider?
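
          Roughly what I mean by “memory in the form of its token context”, as a toy sketch (nothing vendor-specific - generate(), count_tokens(), and the 4096 limit are made-up stand-ins, not any real API): the whole conversation is re-sent as tokens on every turn, and whatever no longer fits in the context window is simply gone.

          ```python
          # Toy sketch only: a hypothetical chat loop. generate() stands in for
          # a real model call; count_tokens() and CONTEXT_LIMIT are crude
          # stand-ins for a real tokenizer and context window.
          CONTEXT_LIMIT = 4096  # how many tokens the model can attend to at once

          def generate(prompt: str) -> str:
              """Placeholder for a real LLM call."""
              return "(model output)"

          def count_tokens(text: str) -> int:
              """Crude approximation: one word ~= one token."""
              return len(text.split())

          def chat_turn(history: list[str], user_message: str) -> str:
              history.append(f"User: {user_message}")

              # Rebuild the prompt from the whole history on every turn,
              # dropping the oldest lines once the limit is hit -- whatever
              # gets dropped is "forgotten".
              kept: list[str] = []
              used = 0
              for line in reversed(history):
                  used += count_tokens(line)
                  if used > CONTEXT_LIMIT:
                      break
                  kept.append(line)
              prompt = "\n".join(reversed(kept))

              reply = generate(prompt)
              history.append(f"Assistant: {reply}")
              return reply

          history: list[str] = []
          print(chat_turn(history, "Do you remember what I said earlier?"))
          ```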