I mean, obviously you need to run a lower-parameter model locally; that's not a fault of the model, it's just that you don't have the same computational power.
In both cases I was talking about local models: deepseek-r1 at 32B parameters vs. an equivalent uncensored model from Hugging Face.
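For anyone who wants to try this themselves, here's a minimal sketch of pulling a model like that down from Hugging Face and running it locally with the transformers library. The repo id (DeepSeek-R1-Distill-Qwen-32B, the 32B R1 distill), the prompt, and the generation settings are just illustrative assumptions, not anything specific from this thread:

```python
# Minimal sketch: running a ~32B model locally with Hugging Face transformers.
# Assumes transformers + accelerate are installed and you have enough GPU
# memory (or a lot of patience on CPU). The repo id below is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"  # 32B R1 distill on HF

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # spread layers across available GPUs/CPU
)

prompt = "Explain in one sentence why smaller local models trade quality for feasibility."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping in an uncensored fine-tune from Hugging Face is just a matter of changing `model_id`; the loading and generation code stays the same.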