Advice - Getting started with LLMs
(beehaw.org)
I managed to get ollama running through Docker easily. It's by far the least painful of the options I tried, and I just make requests to the API it exposes. You can also give it GPU resources through Docker if you want to, and there's a CLI tool for a quick chat interface if you want to play with that. I can get Llama 3 (8B) running on my 3070 without issues.
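For anyone who wants to try it, here's roughly what that setup looks like (assuming Docker is installed, and the NVIDIA Container Toolkit for the GPU variant):

```shell
# Run the ollama container (CPU only), persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Or give it GPU access instead (needs the NVIDIA Container Toolkit):
#   docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Quick chat via the bundled CLI (pulls the model on first run)
docker exec -it ollama ollama run llama3

# Or hit the HTTP API directly
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The named volume matters: without it, any models you pull disappear when the container is removed.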
Training an LLM is very difficult and expensive. I don't think it's a good place for anyone to start. Many of the popular models (Llama, GPT, etc.) are astronomically expensive to train and require an ungodly amount of resources.
yep, definitely agree with all of this.