22
submitted 1 year ago by [email protected] to c/[email protected]

Is it just memory bandwidth? Or is it that AMD isn't supported well enough by PyTorch for most products? Or some combination of those?

[-] [email protected] 8 points 1 year ago

The memory bandwidth stinks compared to a discrete GPU. That's the reason. It's still possible, just slow.
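The bandwidth argument can be sketched with a back-of-envelope calculation: during autoregressive decode, a dense model reads every weight roughly once per generated token, so memory bandwidth divided by model size gives an upper bound on tokens per second. This is a rough sketch, not a benchmark; the bandwidth and quantization numbers below are illustrative assumptions, and it ignores compute time and KV-cache traffic.

```python
# Back-of-envelope decode-speed estimate: tokens/s ≈ bandwidth / model size in bytes.
# All numbers are illustrative assumptions, not measured figures.

def est_tokens_per_sec(bandwidth_gbs: float, params_b: float, bytes_per_param: float) -> float:
    """Upper-bound decode speed, assuming every weight is read once per token."""
    model_bytes = params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / model_bytes

# 7B model quantized to ~4.5 bits/weight (~0.56 bytes/param):
print(est_tokens_per_sec(50, 7, 0.56))    # ~50 GB/s dual-channel system RAM: ≈ 12.8 tok/s
print(est_tokens_per_sec(1000, 7, 0.56))  # ~1000 GB/s high-end discrete GPU: ≈ 255 tok/s
```

The ratio between the two results is just the bandwidth ratio, which is why an iGPU sharing ordinary system RAM can't close the gap no matter how much compute it has.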

[-] [email protected] 1 points 1 year ago

The question is, though, would it be better than just a CPU with lots of RAM?

[-] [email protected] 3 points 1 year ago

Yes, it seems so according to this person’s testing: https://youtu.be/HPO7fu7Vyw4

this post was submitted on 25 Aug 2023