

I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that.
That is of course a big problem. They try to guess too much stuff, but it's also why it kinda works. Symbolic AI has the opposite problem: it's rarely useful because it can't guess at all. It's rooted in hard logic and cannot come up with a reasonable guess.
Humans also guess and sometimes get it wrong; that's required in order to produce results from our thinking and not get stuck in a state where we don't have enough data to do anything, like a symbolic AI.
So this becomes a spectrum, with humans somewhere in the middle between LLMs and symbolic AI.
LLMs are not completely unable to say what they know and don't know; they are just extremely bad at it from our point of view.
The problem with "does it think" is that the question doesn't give you any quantity or quality to measure.
And the brain is made of neurons that send electrical signals between them and operate muscles.
That doesn't explain how the brain thinks.