OC below by @HaraldvonBlauzahn@feddit.org
What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.
Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can’t think - only generate statistically plausible patterns.
The author of the article explains that this creates the same psychological hazards as astrology or tarot cards, psychological traps that have been exploited by psychics for centuries - and even very intelligent people can fall prey to these.
Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given that there are multi-billion dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.
It takes an enormous amount of energy and processing power to create these shitty snapshots, so in many ways it spells doom, considering it will dramatically increase our energy usage.
I get it, you are an AI supporter, but you fail to critically analyze it or even understand it. What other tool would you use whose errors you can't correct and whose inner workings you can't determine? You are really operating on faith here that the black box you're getting an answer from is giving you the correct answer.
Perhaps a code snippet works, but after that is where it all falls apart. What if the snippet doesn't work, or causes a problem? The LLM has nothing to offer you here.
Not really.
I self-host my own LLM. Energy consumption for queries is lower than for gaming, according to my own measurements. And the models are not made that frequently (I still use models made last year). And once a model is done, it is infinitely reusable by anyone.
I get that you are starting from the axiom "AI is bad" and then creating the arguments needed to support that axiom, instead of going the other way around with an open mind.
I told you my own personal experience with it. Take it as you want. For me, my situation will be the same: I will keep using it the same as I use any other tool that works for me, and I will stop using it when there's something better, the same as I've done countless times. I'm not easy to peer-pressure into any particular stance, so I can form my own opinions based on what I test for myself. I really think a lot of arguments against AI boil down to some sort of political stance. AI hurt a number of small artists who had a very big voice in some spaces, and thus an anti-AI political movement was created. My own copyleft morals left me undisturbed by those original complaints about generative AI, and the rest of the arguments have been very unconvincing, straight-up fake, logical fallacies, or just didn't check out against the reality I was able to test for myself.
For instance, I saw another post today saying that 3 watt-hours per query was an absolute energy waste for a household. That's next to nothing compared to the 30,000 watt-hours (30 kWh) a typical household spends each day, even with quite a number of queries. Honestly, I spent the last few months with one of those energy-measuring devices attached to my computer, and AI energy usage was really underwhelming compared with what people told me it was going to be. AAA gaming is consistently more energy-hungry.
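To make that arithmetic explicit, here's a quick back-of-the-envelope sketch using the figures quoted above; the 30-queries-a-day usage pattern is my own assumption, in line with the numbers I mention later in this thread:

```python
# Back-of-the-envelope comparison using the figures quoted above.
WH_PER_QUERY = 3               # claimed energy cost per LLM query (Wh)
HOUSEHOLD_WH_PER_DAY = 30_000  # typical household daily usage (30 kWh)
QUERIES_PER_DAY = 30           # assumed fairly heavy personal usage

llm_wh_per_day = WH_PER_QUERY * QUERIES_PER_DAY  # 90 Wh/day
share = llm_wh_per_day / HOUSEHOLD_WH_PER_DAY    # 0.003

print(f"{llm_wh_per_day} Wh/day in queries = {share:.1%} of household usage")
# -> 90 Wh/day in queries = 0.3% of household usage
```

Under those assumptions, even heavy personal use stays well under one percent of a household's daily consumption.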
I know you are being willfully ignorant here, as AI data centers are projected to use more electricity than the entire nation of Japan by 2030.
Your own hosted LLM is not the problem, nor the issue we are even discussing, and quite frankly it's a little insulting that you bring it up.
I am no more anti-AI than I am anti any other tool whose accuracy you can't determine and whose errors you can't correct. LLMs have a long way to go before they are even a fraction of what they claim to be.
Another problem is they do not cite where they get their answers from. Without the ability to audit the answers you are given, you won't know how accurate they are.
I have listed several legitimate gripes about LLMs. I find your fanboyism misplaced, and I think you are just playing devil's advocate at this point. AI is a hype train and I am sick of it already.
I will just copy my other response about data center energy usage; ignore the parts not related to our conversation:
Google is not related to ChatGPT. ChatGPT's parent company is OpenAI, which is a competitor of Google.
A more rational explanation is that technology and digital services in general have been growing and are on the rise, both because more and more complex services are being offered and, more importantly, because more people are requesting those services. Whole continents that used not to be covered by digital services are now covered. Generative AI is just a very small part of all that.
The best approach to reducing CO2 emissions is to ask for a reduction in human population. From my point of view it's the only rational approach, as with a growing population there are only two outcomes: pollute until we die, or reduce quality of life until life is not worth living. Reducing population allows fewer people to live better lives without destroying the planet.
It also raises the question of why I am responsible if a big tech company decides to run an LLM query on every search or otherwise overuse the technology, when I am talking about a completely different usage of that technology: one that doesn't even reach 20-30 queries a day, which would have a power usage of less than a few hundred Wh at most. That is negligible in the scheme of global warming and of my total energy footprint.
How it’s being a fanboy saying that “It works for me in some particular cases and not others, it’s a tool that can be used”.
Please, read this conversation again and take a second guess at who the radical extremist is here.
In the case we were talking about, writing code, I am the auditor of the answers. I do not "vibe code": I read the code that's proposed, understand it, and if it's code that I would have written, I copy it; if not, I change it. "Vibe coding" is an example of bad usage of the tool that would lead to problems. All code not written by yourself and copied from another source should be reviewed. Once it passes my review, it's as good as my own code. If it fails, it fails the same as any other code written by me, as it's something that I was clearly unable to see.
For instance, a couple of months ago I wrote a small API service that worked fine at first and suddenly stopped working a few weeks into production. It was a stupid mistake I made, and I needed no LLM to make that mistake. The service was so simple that I didn't really even use an LLM there. But I made a mistake regardless. I could have used AI and gotten the same bad function that caused the issue. And the blame would still be mine for not seeing the problem.
Once again, it's a tool. If some jackass decides to vibe code an app and it's a shit app, that's a bad use of the tool. But other people can do proper reviews and analysis of the generated code and assume full responsibility for any failures of that code.