this post was submitted on 31 Mar 2024
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
You're probably going to run into the problem that nobody anticipated your strategy: if you try to run a model on a GPU with far more memory than the host system, I'm not sure many execution frameworks can stream weights straight from disk to GPU RAM without first staging the whole model in host memory. Storage speed for loading the model might also be an issue on an SoC that boots off e.g. an SD card.
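To illustrate the staging concern: memory-mapping the weights file lets the OS page chunks in on demand, so a host with less RAM than the model could in principle copy the file to the GPU in slices instead of materializing it all at once (this is roughly what llama.cpp's mmap-based loading does). The sketch below only demonstrates the chunked-read side with the Python standard library; the file, sizes, and function name are made up for the demo, and the actual host-to-GPU copy is left as a comment.

```python
import mmap
import os
import tempfile

def stream_weights(path, chunk_size=1 << 20):
    """Yield successive chunks of a weights file without loading it whole.

    The mmap'd file stays on disk; the OS faults pages in as each slice is
    read, so peak host RAM usage is roughly one chunk, not the file size.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        for off in range(0, size, chunk_size):
            # In a real loader, each slice would be copied to GPU RAM here
            # (e.g. via a CUDA host-to-device transfer).
            yield mm[off:off + chunk_size]

# Demo with a small stand-in for a model file (4 MiB of random bytes).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * (1 << 20)))
    path = tmp.name

total = sum(len(chunk) for chunk in stream_weights(path))
print(total)  # 4194304 — all bytes visited in 1 MiB slices
os.remove(path)
```

Whether this helps in practice depends on the framework actually supporting mmap'd or chunked loading; a loader that deserializes the whole checkpoint into host RAM first will still hit the wall regardless.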
An eGPU dock should do CUDA just as well as an internal GPU, as far as I know. But you would need the drivers installed.