This is luddite behavior. AI can be a very valuable tool for learning unless you’re working with some truly exotic shit (in which case pray to RAG to save you)
Nah, AI can be extremely useful for learning technologies. You just need to be careful to verify it isn’t bullshitting you.
For example, try to find an explanation of PPM compression that is concrete and simple. As far as I can tell, one doesn’t exist.
But I asked ChatGPT and it told me how it (probably) works in just a few seconds. I haven’t verified yet (I’m at a BBQ) whether it’s the correct algorithm, but it’s certainly a plausible one that would work.
It told me that you typically use a trie of symbol prefixes to record the probability of the following symbols, so for example you know that for the prefix “Th” the probability of “e” is 90%. Then you encode each symbol with arithmetic coding using the modelled probabilities. Apparently the typical max context length is 4-6.
That would have taken me hours to find by reading code and ancient papers but I can verify it a lot quicker.
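A minimal sketch of that idea, as I understand it (my own illustration, not verified against a real PPM implementation; real PPM also adds escape probabilities so the coder can fall back to shorter contexts when a symbol is unseen):

```python
from collections import defaultdict

def build_model(text, order=2):
    # Count next-symbol frequencies for each length-`order` context.
    # A nested dict stands in for the trie of prefixes described above.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(text)):
        context = text[i - order:i]
        counts[context][text[i]] += 1
    return counts

def predict(counts, context):
    # Probability distribution over next symbols for a given context;
    # an arithmetic coder would consume these probabilities to emit bits.
    total = sum(counts[context].values())
    return {s: c / total for s, c in counts[context].items()}

model = build_model("the theme of the thesis", order=2)
print(predict(model, "th"))  # → {'e': 1.0}
```

In this toy corpus “th” is always followed by “e”, so the model assigns it probability 1; a high-probability symbol like that costs an arithmetic coder close to zero bits.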
Nailed it. I’ll ask for API docs references when I’m asking ChatGPT about a code snippet, so I can go in and maybe try functionality in a sandbox before it even touches my dev code.
"I regularly use AI to learn new technologies, discuss methods and techniques, review code, etc. "
Ew!
Using the term “discuss” is just creepy. It’s a piece of software. Do people actually think they’re conversing when they use an LLM?
What are you on about? It’s literally a chatbot.
They’re fanatics