- cross-posted to:
- linux@lemmy.ml
I need to catch up on training. I need an LLM that I can train on all my ebooks and digitized music, and that can answer questions like "what's that book where the girl goes to the thing and does that deed?"
You could probably use RAG (retrieval-augmented generation) for this instead of actually training a model.
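To make the suggestion concrete: the retrieval half of RAG just means indexing your books and pulling the best-matching passage for a question, then handing that passage to a model as context. Here's a minimal, hypothetical sketch using a toy bag-of-words similarity (a real setup would use embeddings and a local LLM; the catalog entries below are made up for illustration):

```python
# Toy retrieval step of RAG: rank book blurbs by word-overlap
# similarity to the question. A real pipeline would swap cosine()
# over word counts for embedding vectors, and feed the top hit
# to an LLM to phrase the answer.
from collections import Counter
import math

def tokens(text):
    return [w.strip('.,?!"\'').lower() for w in text.split()]

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def retrieve(query, library):
    # Score every blurb against the query; the best match is the
    # context you'd pass to the model.
    scored = [(cosine(tokens(query), tokens(blurb)), title)
              for title, blurb in library.items()]
    return max(scored)[1]

library = {  # hypothetical catalog entries
    "A Wizard of Earthsea": "a boy goes to a school for wizards and battles his shadow",
    "True Grit": "a girl hires a marshal to hunt down her father's killer",
}
print(retrieve("what's that book where the girl goes after the killer?", library))
# → True Grit
```

The point is that nothing here retrains the model: the books stay in an index, so adding a new ebook is just adding an entry, not a training run.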
Smart people, I beg of thee, explain! What can it do?
Edit: looks to be another text-based one, not image generation, right?
It's language only; hence the "LM".
To be fair, I didn't know whether that "language" included programming languages, so I thought image-based AI might still count as an LLM. Is there a different designation for the type of AI that does image generation?
I see all these graphs about how much better this LLM is than another, but do those graphs actually translate to real-world usefulness?
I have yet to see a 3B model that’s not dumb.
The problem is… how do we run it if ROCm is still a mess for most of their GPUs? CPU inference?
There are ROCm builds of llama.cpp, ollama, and kobold.cpp that work well, although they'll have to add support for this model before they can run it.
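For anyone wanting to try the llama.cpp route, the rough shape of a ROCm build looks like the sketch below. This is hedged: the CMake flag names and binary paths have changed between llama.cpp releases, so check the repo's build docs for your version before copying this.

```shell
# Hypothetical ROCm build sketch for llama.cpp — flag names vary
# by release; consult the project's docs for the current ones.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON       # enable the HIP/ROCm backend
cmake --build build --config Release
# -ngl offloads model layers to the GPU; raise it until VRAM runs out.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

If ROCm refuses to cooperate, the same binaries fall back to CPU inference — slow, but it works.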