
So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and want to run a 13B model, maybe SuperHOT, but I'm not sure my VRAM is enough for that. Until now I've always stuck with llama.cpp since it's quite easy to set up. Does anyone have any suggestions?
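For the VRAM question, a rough back-of-the-envelope check helps: weight memory is roughly parameter count times bits per weight. The bits-per-weight figures below are approximations for common GGML quant formats (an assumption, not official numbers), and they ignore the KV cache and runtime overhead, which add a few more GB depending on context length:

```python
# Rough VRAM estimate for quantized model weights.
# Bits-per-weight values are approximate (assumption, not official specs),
# and KV cache / runtime overhead is NOT included.

def model_size_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for name, bpw in [("q4_0", 4.5), ("q5_1", 6.0), ("q8_0", 8.5), ("f16", 16.0)]:
    print(f"13B {name}: ~{model_size_gib(13, bpw):.1f} GiB weights")
```

By this estimate a 4-bit 13B model is around 7 GiB of weights, so it should fit on a 12 GB card with room for context, while an 8-bit quant would be tight once overhead is counted.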

[-] saplingtree@kbin.social 1 points 1 year ago

Just pay Nvidia their ill-earned pound of flesh. I say this as a strong AMD advocate.

It's clear that AMD isn't serious about the AI market. They had years to provide a proper competitor to CUDA, or at the very least a 1:1 compatibility layer. Instead of doing either, AMD kept messing with half-assed projects like ROCm and the other one whose name I don't care to look up. AMD has the resources to build a CUDA-compatible API in under six months, but for some reason they don't. I don't know why, and at this point I don't really care.

Buy an AMD GPU for AI at your own risk.

this post was submitted on 02 Jul 2023
10 points (100.0% liked)

LocalLLaMA

2237 readers

Community for discussing LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago