[-] AtHeartEngineer@lemmy.world 6 points 6 months ago

I haven't seen a way to do that without wrecking the model

[-] Speculater@lemmy.world 6 points 6 months ago

KoboldCpp, Hugging Face, grab a model that fits your VRAM in GGUF format. I think it's about two clicks once it's downloaded.
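If you'd rather script it than click around, the same flow looks roughly like this (a minimal sketch assuming `huggingface_hub` and `llama-cpp-python` are installed; the repo and file names are placeholders, so pick a quantization that actually fits your VRAM):

```python
# Minimal sketch: pull a GGUF build of a model from Hugging Face and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; repo_id and filename below
# are placeholders, not a specific real upload.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="some-user/DeepSeek-R1-Distill-Qwen-14B-GGUF",  # placeholder repo
    filename="model-q4_k_m.gguf",                           # placeholder file
)

# Offload all layers to the GPU if it fits; drop n_gpu_layers if you're CPU-only.
llm = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=4096)
out = llm("Explain what a GGUF file is in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```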

[-] AtHeartEngineer@lemmy.world 8 points 6 months ago

I know how to download and run models. What I'm saying is that all the "uncensored" DeepSeek models are abliterated and perform worse.

[-] Shezzagrad@lemmy.ml 1 points 6 months ago

It's the same model, your pc just sucks lmfao

[-] AtHeartEngineer@lemmy.world 8 points 6 months ago

I'm not talking about the speed, I'm talking about the quality of the output. I don't think you understand how these models are transformed into "uncensored" models, but a lot of the time abliteration messes them up.
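For context, abliteration roughly means finding a "refusal direction" in the model's activations and editing the weights so nothing can be written along it, which is also why it can hurt output quality beyond just removing refusals. A conceptual sketch of that core step (illustrative names and shapes only, not any particular tool's implementation):

```python
# Conceptual sketch of the core step in "abliteration" (refusal-direction removal).
# `W_out` stands in for a weight matrix that writes into the residual stream;
# `refusal_dir` would normally come from contrasting mean activations on refused
# vs. answered prompts. Everything here is a toy illustration.
import numpy as np

def ablate_direction(W_out: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Project the refusal direction out of a weight matrix's output space."""
    r = refusal_dir / np.linalg.norm(refusal_dir)  # unit vector, shape (d_model,)
    # Remove the component of every output column that lies along r: (I - r r^T) W
    return W_out - np.outer(r, r) @ W_out

# Toy usage with random stand-ins for real weights and directions:
d_model, d_in = 8, 4
W = np.random.randn(d_model, d_in)
r = np.random.randn(d_model)
W_ablated = ablate_direction(W, r)

# After ablation, the outputs of W have no component along r (up to float error):
print(np.allclose(r @ W_ablated, 0.0))  # True
```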

[-] Shezzagrad@lemmy.ml 1 points 6 months ago

Buddy, I have it running and have been testing the 7B and 14B against the cloud DeepSeek. Any sources, any evidence to back up what you're saying? Or just removed and complaining?

[-] AtHeartEngineer@lemmy.world 3 points 6 months ago

I'm not talking about the cloud version at all. I'm talking about the 32B and 14B models vs the ones people have "uncensored".

I was hoping someone knew of an "uncensored" version of deepseek that was good, that could run locally, because I haven't seen one.

I don't know what you mean by "removed".

[-] 474D@lemmy.world 2 points 6 months ago

You can do it in LM Studio in like 5 clicks, I'm currently using it.
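Once a model is loaded, LM Studio can also expose an OpenAI-compatible local server, so you can script against it too. A minimal sketch, assuming that server is enabled on its default port and using a placeholder model name:

```python
# Minimal sketch for talking to a model loaded in LM Studio via its local
# OpenAI-compatible server (defaults to port 1234; the API key can be any string).
# The model name is a placeholder -- use whatever identifier LM Studio lists.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-14b",  # placeholder model identifier
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(resp.choices[0].message.content)
```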

[-] AtHeartEngineer@lemmy.world 4 points 6 months ago

Running an uncensored DeepSeek model that doesn't perform significantly worse than the regular DeepSeek models? I know how to download and run models; I haven't seen an uncensored DeepSeek model that performs as well as the baseline DeepSeek model.

[-] 474D@lemmy.world 1 points 6 months ago

I mean, obviously you need to run a lower-parameter model locally. That's not a fault of the model, it's just not having the same computational power.

[-] AtHeartEngineer@lemmy.world 2 points 6 months ago

In both cases I was talking about local models: DeepSeek-R1 32B vs an equivalent uncensored one from Hugging Face.
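The quality gap being argued about here is easy to eyeball yourself: run the same prompts through the baseline GGUF and the abliterated one and compare the answers. A rough sketch, assuming `llama-cpp-python` and placeholder file paths:

```python
# Rough way to compare the baseline and "uncensored" builds: run identical prompts
# through both GGUF files and read the outputs side by side. Paths are placeholders;
# assumes `pip install llama-cpp-python`.
from llama_cpp import Llama

prompts = [
    "Summarize the causes of the 2008 financial crisis in three sentences.",
    "Write a Python function that checks whether a string is a palindrome.",
]

models = [
    "./deepseek-r1-distill-32b-q4.gguf",              # placeholder: baseline build
    "./deepseek-r1-distill-32b-abliterated-q4.gguf",  # placeholder: abliterated build
]

for path in models:
    llm = Llama(model_path=path, n_gpu_layers=-1, n_ctx=4096, verbose=False)
    print(f"\n=== {path} ===")
    for p in prompts:
        out = llm(p, max_tokens=200)
        print(f"\n> {p}\n{out['choices'][0]['text'].strip()}")
```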
