submitted 2 weeks ago by [email protected] to c/[email protected]

Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB of VRAM. I can run 7-13b models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7b Q6 at around 3 tokens/second.

I want to get to a point where I can run Mixtral 8x7b at Q4 quant at an acceptable token speed (5+/sec). I can run Mixtral Q3 quant at about 2 to 3 tokens per second. Q4 takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.
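For what it's worth, here's a rough back-of-envelope on why the Q4 quant spills out of VRAM (the sizes are approximations, not measurements):

```python
# Rough estimate: does Mixtral 8x7b Q4 fit in 16 GB of VRAM?
# All numbers are approximate guesses, not measured values.
total_params_b = 46.7        # Mixtral 8x7b total parameter count, in billions
bits_per_weight = 4.5        # roughly what a Q4_K_M quant averages per weight
vram_gb = 16

model_gb = total_params_b * bits_per_weight / 8    # bits -> bytes -> GB
spill_gb = max(0.0, model_gb - vram_gb)            # portion left in system RAM

print(f"model ~{model_gb:.1f} GB, ~{spill_gb:.1f} GB spills into system RAM")
# -> model ~26.3 GB, ~10.3 GB spills into system RAM
```

So a good chunk of the model ends up being served from system RAM, which lines up with the slow speeds I'm seeing.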

What's the easiest/cheapest way to get my system to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU?

As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. Not sure if ollama can split across different brand GPUs yet, but I know this capability is in llama.cpp now.
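If I do end up on llama.cpp, my understanding is the split would look something like this via llama-cpp-python (untested on my hardware; the filename, split ratio, and context size are placeholder guesses, and it assumes a build that can actually see both cards, e.g. the Vulkan backend):

```python
from llama_cpp import Llama

# Hypothetical two-GPU split (untested guess): tensor_split sets the fraction
# of the model assigned to each visible GPU, and n_gpu_layers controls how
# many layers get offloaded at all.
llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,           # offload as many layers as will fit
    tensor_split=[0.8, 0.2],   # ~80% to the 6800 XT, ~20% to the 970 (guess)
    n_ctx=4096,
)

out = llm("Explain mixture-of-experts in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```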

Thanks for any pointers!

top 5 comments
[-] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago)

I hate to bear the bad news, but as long as the model is too large to fit entirely in VRAM, getting 5 t/s on an 8x7b is going to be difficult. You can throw another 16 GB of RAM in the system, which could help with caching and context length, but since the model still has to juggle data in and out of VRAM, speeds will remain low.

I wouldn't upgrade the CPU personally; focus on adding a beefier GPU. And it's probably not worth adding the 970 to the mix: its 4 GB isn't much room, and it will likely just slow the 6800 XT down.

[-] [email protected] 5 points 2 weeks ago

Ollama doesn't currently support mixing CUDA & ROCm. https://github.com/ollama/ollama/issues/3723#issuecomment-2071134571

One thing to keep in mind about adding RAM: your speed could drop depending on how many slots you populate. For example, I have a 5700G, and with 2x16 GB it runs at 3200 MHz, but with 4x16 GB (the exact same product) it only runs at 1800 MHz. In my case, RAM speed has a huge effect on tokens/sec whenever a model has to use some system RAM.
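As a very hand-wavy illustration of why RAM speed matters once part of the model lives in system RAM (this assumes every generated token has to read the whole offloaded chunk once, which is a simplification):

```python
# Crude bandwidth-bound estimate of CPU-side token speed.
# Assumes each generated token reads the entire offloaded portion of the
# model from system RAM once; real behavior is more complicated.
def tokens_per_sec(offloaded_gb: float, ram_bandwidth_gb_s: float) -> float:
    return ram_bandwidth_gb_s / offloaded_gb

offloaded_gb = 10.0     # e.g. the part of Mixtral Q4 that doesn't fit in VRAM
ddr4_3200_dual = 51.2   # ~GB/s, DDR4-3200 in dual channel
ddr4_1800_dual = 28.8   # ~GB/s if the sticks drop to 1800 MT/s

print(tokens_per_sec(offloaded_gb, ddr4_3200_dual))  # ~5.1 t/s estimate
print(tokens_per_sec(offloaded_gb, ddr4_1800_dual))  # ~2.9 t/s estimate
```

So losing RAM speed when you fill all four slots can directly eat into tokens/sec.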

You can check AMD's spec page for your processor, but they don't really document a lot of this stuff.

[-] [email protected] 2 points 2 weeks ago

Good callout on not mixing CUDA and ROCm; I wasn't aware of this.

[-] [email protected] 1 points 2 weeks ago

Yep, I had been hoping for the same thing.

Also, to @[email protected], you might want to wait and see what gets announced at Computex next month. Hopefully they announce some new stuff and the current gen prices drop.

[-] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago)

I don't know how important the CPU is for these workloads, tbh, but my feeling is it's not that important, so you're probably fine leaving it as is.

I think AMD wanted to release a new GPU lineup (Radeon 8000 series) sometime this year or early next year. Maybe just wait for that, sell your old card on the used market, and buy a new one?

(And throw in the extra 16 GB of RAM as you said.)

this post was submitted on 16 May 2024
14 points (100.0% liked)
