The 3060 is a nice cheap one for running okay-sized models, but if you can find a way to stretch to a 3090 or a 7900 XTX, you'll be able to run these 33B models at decent quant levels.
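For a rough sense of why the 24 GB cards are the sweet spot for 33B models, here's a back-of-the-envelope VRAM estimate. This is only a sketch: the effective bits-per-weight figures and the flat 2 GB overhead are assumptions, and real usage varies with context length, KV cache size, and runtime.

```python
# Back-of-the-envelope VRAM estimate for a quantized 33B model.
# Assumptions: approximate effective bits-per-weight for common GGUF quants,
# plus a flat ~2 GB for KV cache / activations / runtime buffers.

def vram_estimate_gb(n_params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Weights at the given bit-width, converted to GiB, plus a flat overhead."""
    weight_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weight_gb + overhead_gb

if __name__ == "__main__":
    # Roughly: Q8_0 ~8.5 bpw, Q5_K_M ~5.7 bpw, Q4_K_M ~4.85 bpw (approximate).
    for label, bits in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.85)]:
        fits = "fits" if vram_estimate_gb(33, bits) <= 24 else "does not fit"
        print(f"33B at {label}: ~{vram_estimate_gb(33, bits):.1f} GB ({fits} in 24 GB)")
```

By this estimate a 33B model at Q4/Q5-style quants lands just under 24 GB, which is why a 3090 or 7900 XTX works where a 12 GB 3060 doesn't.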
I was hoping to avoid Nvidia's binary drivers, although I don't know what the driver/support status of dedicated AI accelerators is like on Linux.
I run my Nvidia stuff in containers so I don't have to deal with all the stupid shenanigans.
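A minimal sketch of checking that GPU passthrough into a container actually works, assuming Docker with the NVIDIA Container Toolkit installed; the CUDA image tag is just an example, not a required one.

```python
# Quick check that GPUs are visible inside a container (sketch, assumes
# Docker + NVIDIA Container Toolkit; the image tag is an arbitrary example).
import subprocess

def gpu_visible_in_container(image: str = "nvidia/cuda:12.2.0-base-ubuntu22.04") -> bool:
    """Launch a throwaway container with all GPUs passed through and see
    whether nvidia-smi can enumerate them."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--gpus", "all", image, "nvidia-smi", "-L"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    print("GPU visible in container:", gpu_visible_in_container())
```

If that lists your card, the host driver is the only Nvidia piece you actually have to maintain; CUDA and the rest live inside the image.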