Easy to set up locally hosted LLM with access to file system
(programming.dev)
What's your budget?
Zero; as I said, I'd prefer to self-host.
What hardware do you have available then?
Just a 1080, though it handles 7B models just fine; it could probably also work with a 14B.
In all honesty, I doubt a 7B model will give you very coherent or useful results. 14B won't either.
I can run DeepSeek 30B on a 4070 Ti Super and I am not at all impressed. I can run larger models, but they're too slow; 14B is the optimal speed/size balance.
I am used to Claude Opus, though, which is one of the best.
You are 100% allowed to prove me wrong. In fact, I hope you do and build something small and brilliant, but I'd personally recommend adjusting expectations and upgrading that card.
Do you think a 24 GB card like the 7900 XTX could run Mistral Small? TBH, that card is nowhere to be found right now.
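For a rough sanity check on whether Mistral Small fits in 24 GB, you can do the arithmetic: weights at about 4.5 bits per parameter (typical of a Q4 GGUF quant) plus a flat allowance for the KV cache and runtime buffers. A minimal back-of-envelope sketch — the ~22B parameter count for Mistral Small, the bits-per-parameter figure, and the 2 GB overhead are my assumptions, not from the thread:

```python
def estimate_vram_gb(params_billion: float, bits_per_param: float = 4.5,
                     overhead_gb: float = 2.0) -> float:
    """Rough footprint of a quantized model: weight bytes plus a flat
    allowance for KV cache and runtime buffers at a modest context length.
    All figures are ballpark assumptions, not measured numbers."""
    weight_gb = params_billion * 1e9 * bits_per_param / 8 / 1e9
    return weight_gb + overhead_gb

for name, size_b in [("7B", 7), ("14B", 14), ("Mistral Small (~22B)", 22)]:
    print(f"{name}: ~{estimate_vram_gb(size_b):.1f} GB")
```

By this estimate a Q4 quant of Mistral Small lands around 14–15 GB, comfortably inside 24 GB. It also suggests a 14B quant slightly exceeds a 1080's 8 GB, so it would need partial CPU offload — consistent with the speed complaints above.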