Genuinely appreciate this, thank you.
You're right that Lemmy is new territory for me; I'm still learning the culture and clearly stepped on some landmines along the way.
On LLMs I'm pretty firm in my position: they're useful only when you already know what you're doing, only to move faster, and absolutely not a replacement for understanding your own work. Even then they get things wrong constantly, which is exactly why you need to be able to read, write, build, and debug without them first.
Local models I'm fully on board with in principle. The environmental point is well taken. The problem I keep running into is that for actual coding tasks, the local options that are genuinely good enough still want a GPU setup that costs more than a full datacenter, especially now with the RAM shortage. If you have any recommendations on that front, though (models, setups, anything that punches above its weight), I'm all ears. Seriously.
Thanks for the comment and for sharing the link... BEEP BOOP. :D