OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think - they only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries - and even very intelligent people can fall prey to them.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given that there are multi-billion-dollar investments at stake, and that there has been more than enough time to run controlled experiments, this should set off loud alarm bells.

  • daniskarma@lemmy.dbzer0.com
    1 day ago

    That’s why you need to know the caveats of the tool you are using.

    LLMs hallucinate. People who want to use them need to know where they are more prone to hallucinate: wherever the data about the topic you are asking about is fuzzier. If you ask for the capital of France, it is highly unlikely you will get a hallucination; if you ask for the hair color of the second spouse of the fourth president of the Third French Republic, you probably will.

    And you need to know what you are using it for. If it’s for roleplay or other non-critical matters, you may not care about hallucinations. If you use them for important things, you need to know that the output has to be human-reviewed before it is used. For some tasks that review is worth it, because it is still faster than writing from scratch; for others it is not, and then an LLM should not be used for that task.

    As an example, I was just writing an LSP library for an API and tried to have the LLM generate it from the source documentation. I had my doubts, as the source documentation is quite a bit bigger than my context size; I tried anyway, but I quickly saw that hallucinations were all over the place and hard to fix, so I gave up and have been writing it myself entirely. But before that I did ask the LLM how to even start writing such a thing, as it was the first time I’d done this, and the answer was quite on point, probably saving me several hours of searching online trying to figure out how to do it.
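
    For illustration, a rough sketch of that kind of context-size check, done up front: count the documentation’s tokens before pasting it into the model. This assumes the `tiktoken` tokenizer and a hypothetical 128k-token limit; the docs file name is made up, and the actual limit depends on which model you use.

    ```python
    # Minimal sketch: does the documentation even fit in the context window?
    # Assumptions: tiktoken's cl100k_base encoding, a 128k-token limit
    # (varies by model), and a hypothetical "api_docs.txt" file.
    import tiktoken

    CONTEXT_LIMIT = 128_000
    enc = tiktoken.get_encoding("cl100k_base")

    with open("api_docs.txt", encoding="utf-8") as f:
        docs = f.read()

    n_tokens = len(enc.encode(docs))
    print(f"documentation: {n_tokens:,} tokens (limit ~{CONTEXT_LIMIT:,})")
    if n_tokens > CONTEXT_LIMIT:
        print("docs exceed the context window - expect truncation and more hallucinations")
    ```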

    It’s all about knowing the tool you are using, the same as with anything else in this world.