• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that.

    That is of course a big problem. They try to guess too much, but that’s also why it kinda works. Symbolic AIs have the opposite problem: they are rarely useful because they can’t guess; they are rooted in hard logic and cannot come up with a reasonable guess (a toy sketch of this contrast follows at the end of this comment).
    Now, humans also guess things and sometimes get them wrong; it’s required in order to produce results from our thinking and not get stuck in a state where we don’t have enough data to do anything, like a symbolic AI.

    Now, this is becoming a spectrum: humans are somewhere in the middle, between LLMs and symbolic AIs.
    LLMs are not completely unable to say what they know and don’t know, they are just extremely bad at it from our POV.

    The problem with “does it think” is that it doesn’t give any quantity or quality.
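    A toy sketch of that contrast, with entirely made-up facts and probabilities (not any real system): the symbolic side answers only from explicit facts and otherwise says “unknown”, while the LLM-like side always emits its most probable guess, however thin the evidence.

        # Hypothetical toy, not any real AI system: contrasts a fact-based
        # "symbolic" answerer with an always-guessing probabilistic one.
        FACTS = {"capital_of_france": "Paris"}  # explicit knowledge base

        # Made-up scores standing in for an LLM's output distribution.
        GUESS_SCORES = {"Paris": 0.7, "Lyon": 0.2, "Marseille": 0.1}

        def symbolic_answer(query: str) -> str:
            # Answers only from hard facts; admits ignorance otherwise.
            return FACTS.get(query, "unknown")

        def llm_like_answer(scores: dict[str, float]) -> str:
            # Always returns the single most probable option, however unsure.
            return max(scores, key=scores.get)

        print(symbolic_answer("capital_of_australia"))  # -> unknown (refuses to guess)
        print(llm_like_answer(GUESS_SCORES))            # -> Paris (always guesses)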


  • I think when you ask an LLM how many 'r’s there are in Strawberry and questions along this line you can see they can’t form judgments.

    Like an LLM, you are making a wrong assertion based on lacking knowledge.
    Current LLMs take tokens as input and produce tokens as output; they never see the individual letters. For “strawberry”, they see 3 tokens, not 10 letters (see the tokenizer sketch at the end of this comment).

    They don’t have any information on what characters are inside those tokens, so they come up with something. If you learned a language only by speaking it, you would be unable to write it down correctly (except in purely phonetic writing systems); instead you would come up with what you think the word should look like written down.

    I would also add that if you “form judgments” you probably don’t need to be reminded you formed a judgment immediately after forming one.

    You come up with the judgment before you are aware of it: https://www.unsw.edu.au/newsroom/news/2019/03/our-brains-reveal-our-choices-before-were-even-aware-of-them--st

    can tell it it made a mistake and it will blindly change it’s answer whether it made a mistake or not. That also doesn’t feel like it’s able to reason or make judgments.

    That’s also how the brain can work: it comes up with a plausible explanation after already having the result.
    See the experiments discussed here: https://www.youtube.com/watch?v=wfYbgdo8e-8

    I showed in humans the same behaviors you observed in LLMs. Does that mean that, by your definition, humans don’t think?
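    A minimal sketch of the token point above, assuming the tiktoken library (pip install tiktoken) and its cl100k_base encoding (used by several OpenAI models); the exact split depends on the tokenizer, but in every case the model only sees token IDs, never letters.

        # Shows how a BPE tokenizer splits "strawberry" into sub-word tokens.
        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
        tokens = enc.encode("strawberry")

        print(tokens)  # the integer IDs the model actually receives
        print([enc.decode_single_token_bytes(t) for t in tokens])  # the sub-word pieces
        print(len(tokens), "tokens for a 10-letter word")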


  • The video very clearly answers this. Like, multiple times.

    No, they made assertions; that’s not proof.
    For the first location, they say the loss of water pressure AND the sediment are due to the datacenter.
    But they are getting their water from a well, and if a well runs low, you get more sediment.
    Are these your “clear answers”?

    We know that it is a tremendous amount of water because we can estimate and we can see the data of towns literally going into extreme droughts right next to data centers.

    If this comes from your video again, I again doubt your statements.

    Datacenters don’t make water magically disappear; it has to go somewhere.
    Either you would see a discharge pipe, meaning the water is returned, or a vapor cloud, which should be very visible (see the rough estimate after this comment).
    But we don’t see any vapor cloud.
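    A back-of-the-envelope sketch of that scale, assuming pure evaporative cooling, water’s latent heat of vaporization of roughly 2.26 MJ/kg, and an arbitrary 10 MW heat load (not a measured figure for any real facility):

        # Rough estimate: water evaporated to reject a given amount of heat.
        LATENT_HEAT_MJ_PER_KG = 2.26      # energy carried away per kg of water evaporated
        heat_load_mw = 10                 # hypothetical 10 MW facility, fully evaporatively cooled

        kg_per_second = heat_load_mw / LATENT_HEAT_MJ_PER_KG   # MW = MJ/s
        m3_per_day = kg_per_second * 86_400 / 1_000            # 1 m^3 of water ~ 1,000 kg

        print(f"{kg_per_second:.1f} kg/s, about {m3_per_day:.0f} m^3 per day")
        # ~4.4 kg/s, roughly 380 m^3/day -- a lot of water, but it either leaves
        # as visible vapor or is returned via a discharge pipe; it does not vanish.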


  • AI is not the whole cloud; it’s a fraction of the cloud.
    The MIT Press article is from 2022, citing 2019 data. Datacenter tech and heat reuse have intensified a lot in the last few years, so this data is clearly out of date.

    Go explain to these people why “bigger DCs are actually better”:

    Tell me where there is any proof that this is Meta’s fault? Because they are near the datacenter? Do you have any idea how much water a datacenter consumes?