OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think - they only generate statistically plausible patterns.
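
To make the “statistically plausible patterns” point concrete, here is a minimal sketch of the idea using a toy bigram model - the same pick-the-next-token-from-a-learned-distribution loop that LLMs run at vastly larger scale. The corpus and names are illustrative, not anything from the article:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break  # no continuation was ever observed for this token
        tokens, weights = zip(*counts.items())
        # Sample the next token in proportion to how often it followed the last one.
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# e.g. "the cat sat on the rug" - statistically plausible, but no reasoning involved
```

The output is fluent precisely because it mirrors the statistics of the training text, which is the crux of the dispute in the comments below: whether that kind of computation counts as “thinking” at all.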

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help produce working software any faster. Given the multi-billion-dollar investments, and that there has been more than enough time to run controlled experiments, this should set off loud alarm bells.

  • 6nk06@sh.itjust.works · 2 days ago

    Since LLMs run on CPUs with a lot of memory, do you agree that my calculator is thinking?

    • FizzyOrange@programming.dev · 1 day ago

      This argument makes no more sense than trying to say that a plant is thinking because brains are made of cells and so are plants.

    • Kuinox@lemmy.world · 2 days ago (edited)

      You think computation is thinking?
      I asked for your definition of thinking.
      The OP talked about belief, then made a statement using a word that is not precisely defined.
      If you think computation is thinking, then by your definition the LLM is thinking.
      But that’s your definition of thinking.