OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think; they only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: traps that have been exploited by psychics for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models can actually create working software faster. Given the multi-billion dollar investments, and that there has been more than enough time to run controlled experiments, this should raise loud alarm bells.

  • Kuinox@lemmy.world · 2 days ago

    What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

    And then you proceed to write a belief as a statement in the following paragraph.

    If you think LLMs don’t think (I won’t argue that they aren’t extremely dumb), please define what thinking is before continuing, and if your definition of thinking doesn’t apply to humans, we won’t be able to agree.

      • Kuinox@lemmy.world · 2 days ago

        I asked for your definition; I cannot prove something if we do not agree on a definition first.
        You also misread what I said: I did not say AI was thinking.
        The burden of proof is on the one who makes an affirmation.
        I’m not the one making an affirmation whose answer field experts don’t know.
        But depending on your definition of thinking, some of these questions can be answered.

        • technocrit@lemmy.dbzer0.com · 2 days ago

          I don’t think y’all are disagreeing, but maybe this sentence is somewhat confusing:

          If you think LLMs don’t think (I won’t argue that they aren’t extremely dumb), please define what thinking is,

          Maybe the “don’t” shouldn’t be there.

          • Kuinox@lemmy.world · 2 days ago

            No, it is there because that’s what they claim.
            Nobody yet knows how it works; we don’t know how LLMs process information.
            Anyone who claims it really thinks, or that it isn’t thinking, is stating a belief; this is not something the current ML field knows.

            • Saledovil@sh.itjust.works · 2 days ago

              Well, the neural network is given a prefix (a series of tokens) and a token, and it spits out how likely it is that the token follows the prefix. Text is generated by calculating this probability for all known tokens, then picking one at random, weighted by the calculated probabilities.
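              A minimal sketch of that loop, with a toy vocabulary and a random stand-in for the network’s scores (everything here is illustrative, not how any particular model is implemented):

              ```python
              import math
              import random

              VOCAB = ["the", "cat", "sat", "on", "mat", "."]  # toy vocabulary

              def scores(prefix):
                  # Stand-in for the trained network: one raw score per known token.
                  # A real LLM computes these from the prefix; here they are random.
                  return [random.uniform(-1.0, 1.0) for _ in VOCAB]

              def next_token(prefix):
                  # Softmax turns the raw scores into probabilities that sum to 1...
                  exps = [math.exp(s) for s in scores(prefix)]
                  probs = [e / sum(exps) for e in exps]
                  # ...and the next token is picked at random, weighted by those probabilities.
                  return random.choices(VOCAB, weights=probs, k=1)[0]

              prefix = ["the", "cat"]
              for _ in range(4):
                  prefix.append(next_token(prefix))
              print(" ".join(prefix))
              ```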

              • Kuinox@lemmy.world · 2 days ago

                And the brain is made out of neurons that send electric signals between them and operate muscles.
                That doesn’t explain how the brain thinks.

                • Saledovil@sh.itjust.works · 1 day ago

                  It allows us to conclude that an LLM doesn’t “think” about what it is saying. Based on the mechanics, the LLM doesn’t even know it’s a participant in the conversation.

    • WhirlpoolBrewer@lemmings.world · 2 days ago

      I don’t think the current common implementation of AI systems is “thinking”, and I’ll base my argument on Oxford’s definitions of words. Thinking is defined as “the process of using one’s mind to consider or reason about something”. I’ll ignore the word “mind” and focus on the word “reason”. I don’t think what AIs are doing counts as reasoning as defined by Oxford. Let’s go to that definition: “the power of the mind to think, understand, and form judgments by a process of logic”. I take issue with the assertion that they form judgments. For completeness (though I don’t think its definition is particularly relevant here), a judgment is: “the ability to make considered decisions or come to sensible conclusions”.

      I think when you ask an LLM how many 'r’s there are in Strawberry and questions along this line you can see they can’t form judgments. These basic but obscure questions are where you see that the ability to form judgments isn’t there. I would also add that if you “form judgments” you probably don’t need to be reminded you formed a judgment immediately after forming one. If I ask an LLM a question and it provides an answer, I can convince it that it was wrong whether I’m making junk up or not. I can tell it it made a mistake and it will blindly change its answer whether it made a mistake or not. That also doesn’t feel like it’s able to reason or make judgments.

      This is where all the hype falls flat for me. Sometimes it looks like a concrete wall, but occasionally that concrete wall turns out to be made of wet paper. You can see how impressive the tool is and how paper-thin it is at the same time. It’s cool, it’s useful, it’s fake, and that’s ok. Just be aware of what the tool is.

      • Kuinox@lemmy.world · 2 days ago

        I think when you ask an LLM how many 'r’s there are in Strawberry and questions along this line you can see they can’t form judgments.

        Like an LLM, you are making the wrong affirmation based on lacking knowledge.
        Current LLMs input and output tokens; they don’t ever see the individual letters, they see tokens. For strawberry, they see 3 tokens:

        They don’t have any information about what characters are in these tokens, so they come up with something. If you learned a language only by speaking, you’d be unable to write it down correctly (except in purely phonetic writing systems); instead you’d come up with what you think the word should look like written down.
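
        If you want to see the token view yourself, OpenAI’s tiktoken library exposes it (a rough sketch; the exact split depends on which encoding you load, so the number of pieces may differ):

        ```python
        import tiktoken  # pip install tiktoken

        enc = tiktoken.get_encoding("cl100k_base")
        ids = enc.encode("strawberry")
        pieces = [enc.decode([i]) for i in ids]
        print(ids)     # the integer token ids the model actually receives
        print(pieces)  # multi-character fragments, not individual letters
        ```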

        I would also add that if you “form judgments” you probably don’t need to be reminded you formed a judgment immediately after forming one.

        You come up with the judgment before you are aware of it: https://www.unsw.edu.au/newsroom/news/2019/03/our-brains-reveal-our-choices-before-were-even-aware-of-them--st

        can tell it it made a mistake and it will blindly change its answer whether it made a mistake or not. That also doesn’t feel like it’s able to reason or make judgments.

        That’s also how the brain can work: it comes up with a plausible explanation after already having the result.
        See the experiments discussed here: https://www.youtube.com/watch?v=wfYbgdo8e-8

        I have shown in humans some of the same behavior you observed in LLMs. Does this mean that, by your definition, humans don’t think?

        • WhirlpoolBrewer@lemmings.world · 2 days ago

          If the LLM could reason, shouldn’t it be able to say “my token training prevents me from understanding the question as asked. I don’t know how many 'r’s there are in Strawberry, and I don’t have a means of finding that answer”? Or at least something similar, right? If I asked you what some word in a language you didn’t know meant, you should be able to say “I don’t know that word or language”. You may be able to give me all sorts of reasons why you don’t know it, and that’s all fine. But you would be aware that you don’t know, and would be able to say “I don’t know”.

          If I understand you correctly, you’re saying the LLM gets it wrong because it doesn’t know or understand that words are built from letters because all it knows are tokens. I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that. I assert that it doesn’t know that it doesn’t know what letters are, because it is incapable of coming to that judgement about its own knowledge and limitations.

          Being able to say what you know and what you don’t know are critical to being able to solve logic problems. Knowing which information is missing and can be derived from known things, and which cannot be derived is key to problem solving based on reason. I still assert that LLMs cannot reason.

          • Kuinox@lemmy.world · 2 days ago

            I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that.

            That is of course a big problem. They try to guess too much, but that’s also why it kind of works. Symbolic AIs have the opposite problem: they are rarely useful because they can’t guess; they are rooted in hard logic and cannot come up with a reasonable guess.
            Humans also guess and sometimes get it wrong; it’s required in order to produce results from our thinking and not be stuck in a state where we don’t have enough data to do anything, like a symbolic AI.

            So this becomes a spectrum, with humans somewhere in the middle between LLMs and symbolic AIs.
            LLMs are not completely unable to say what they know and don’t know; they are just extremely bad at it from our point of view.

            The problem with “does it think” is that it doesn’t give any quantity or quality.

            • WhirlpoolBrewer@lemmings.world · 17 hours ago

              Is the argument that LLMs are thinking because they make guesses when they don’t know things, combined with there being no quantity or quality provided to describe thinking?

              If so, I would suggest that the word “guessing” is doing a lot of heavy lifting here. The real question would be “is statistics guessing?” I would say guessing and statistics are not the same thing, and Oxford would agree. An LLM just grabs the token that, based on its training data, is statistically most likely to come next. I don’t think grabbing the most likely next token counts as guessing. That feels very algorithmic and statistical to me. It is also possible I’m missing the argument still.

              • Kuinox@lemmy.world · 17 hours ago

                Is the argument that LLMs are thinking because they make guesses

                No, it’s that you can’t root the argument that they don’t think in the fact that they make stuff up, because humans do too. You could root it in the amount of things they guess wrong, but that’s extremely hard to measure.
                Again, I’m not claiming that they think, but that we don’t know until one or the other is proven.
                Right now, thinking that one or the other is true is belief.

                • WhirlpoolBrewer@lemmings.world · 17 hours ago

                  I think you can make a strong argument that they don’t think, rooted in the idea that words should mean something and that statistics and thinking don’t mean the same thing. To me, that feels like a fairly valid argument.

                  • Kuinox@lemmy.world · 16 hours ago

                    So you think you need words to be able to think? Are monkeys, birds, and human babies unable to think, then?

    • 6nk06@sh.itjust.works · 2 days ago

      Since LLMs run on CPUs with a lot of memory, do you agree that my calculator is thinking?

      • FizzyOrange@programming.dev · 1 day ago

        This argument makes no more sense than trying to say that a plant is thinking because brains are made of cells and so are plants.

      • Kuinox@lemmy.world · 2 days ago

        You think computation is thinking?
        I asked for your definition of thinking.
        The OP talked about belief, then made a statement using a word that is not precisely defined.
        If you think computation is thinking, then by your definition the LLM is thinking.
        But that’s your definition of thinking.

    • zogrewaste_@sh.itjust.works · 2 days ago

      “Please succinctly answer a question of philosophy that has plagued mankind for thousands of years. Can’t? <crosses arms with a superior smirk> I win.”

      • Kuinox@lemmy.world · 2 days ago

        Claiming LLMs can’t think with the current information available, and calling that not a belief, is claiming to have an answer to this philosophical question.
        The only sensible answer is saying you don’t know, or being aware of, and communicating, that your statement is a belief.