submitted 1 year ago by [email protected] to c/[email protected]

Is it just memory bandwidth? Or is it that AMD isn't supported well enough by PyTorch for most products? Or some combination of those?
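As a rough sanity check on the memory-bandwidth angle: generating one token requires streaming essentially every model weight through memory, so peak bandwidth puts a hard ceiling on decode speed. The numbers below (model size, bandwidth figures) are illustrative assumptions, not measurements:

```python
# Back-of-envelope: single-batch token generation must read every weight,
# so tokens/sec is bounded by (memory bandwidth) / (model size in bytes).
# All figures here are rough assumptions for illustration.

def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed for a memory-bandwidth-bound LLM."""
    return bandwidth_gb_s / model_size_gb

# A 7B-parameter model quantized to ~4 bits is roughly 3.5 GB of weights.
model_gb = 3.5

# Assumed peak bandwidths: dual-channel DDR4-3200 (~51.2 GB/s theoretical)
# vs. a high-end GPU's HBM/GDDR (hundreds of GB/s).
cpu_cap = max_tokens_per_sec(model_gb, 51.2)   # ~15 tok/s ceiling
gpu_cap = max_tokens_per_sec(model_gb, 900.0)  # ~257 tok/s ceiling
print(f"CPU ceiling: {cpu_cap:.1f} tok/s, GPU ceiling: {gpu_cap:.1f} tok/s")
```

Real throughput lands below these ceilings (compute, cache effects, attention KV reads all matter), but the gap between the two bounds tracks the gap people see in practice.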

[-] [email protected] 3 points 1 year ago

The ARM instruction support built into llama.cpp is weak compared to x86.

I don't know about you, but my M1 Pro is a hell of a lot faster than my 5800X in llama.cpp.

These CPUs benchmark similarly across a wide range of other tasks.
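One likely reason beyond instruction support, consistent with the bandwidth point in the original question: the M1 Pro's unified memory has far higher bandwidth than a desktop Ryzen's DDR4. The figures below are approximate published specs (~200 GB/s for the M1 Pro, ~51.2 GB/s theoretical peak for dual-channel DDR4-3200 on a 5800X), not benchmarks:

```python
# Approximate peak memory bandwidth (spec figures, assumptions for illustration).
m1_pro_gb_s = 200.0   # Apple M1 Pro unified memory, per Apple's spec
r5800x_gb_s = 51.2    # Ryzen 7 5800X, dual-channel DDR4-3200 theoretical peak

ratio = m1_pro_gb_s / r5800x_gb_s
print(f"M1 Pro has roughly {ratio:.1f}x the peak memory bandwidth")
```

Since llama.cpp decode is largely bandwidth-bound, a ~4x bandwidth gap alone could explain much of the speed difference even where general-purpose benchmarks are close.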

this post was submitted on 25 Aug 2023
22 points (100.0% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.
