Hello World (sh.itjust.works)

Hi, you've found this ~~subreddit~~ Community, welcome!

This Community is intended to be a replacement for r/LocalLLaMA, because I think we need to move beyond centralized Reddit in general (and, obviously, because of the API changes too).

I will moderate this Community for now, but if you want to help, you are very welcome; just contact me!

I will mirror or rewrite posts from r/LocalLLaMA here for now, but maybe we could eventually all move to this Community (or any Community on Lemmy; seriously, I don't care about being mod or "owning" it).

16 comments
scrollbars@lemmy.ml 5 points 1 year ago

Hello! This is the one community I was a bit worried about finding an equivalent of outside of Reddit. Hopefully more of us migrate over.

hendrik@lemmy.ml 3 points 1 year ago

Thank you for using a decent platform. I doubt more than 20 people will migrate from Reddit... but it makes the world a better place anyway.

dirac_field@lemmy.one 3 points 1 year ago

Late to the party, but thanks for setting this up! I suspect the overlap between people using local LLMs and people hungry for Reddit alternatives is higher than average.

mellery@lemmy.one 3 points 1 year ago

Hello! Thanks for setting this up

pax@sh.itjust.works 0 points 1 year ago

I could help with moderation, but I have a question: how do I set up LLaMA on my Mac? Any tips?

dtlnx@beehaw.org 1 point 1 year ago
pax@sh.itjust.works 1 point 1 year ago

GPT4All is dumb, it didn't even try to be smart.

SkySyrup@sh.itjust.works 1 point 1 year ago* (last edited 1 year ago)

Hi, sure, thank you so much for helping out! As for LLaMA, I would point you at llama.cpp (https://github.com/ggerganov/llama.cpp), which is the absolute bleeding edge, but also has pretty useful instructions on its page (https://github.com/ggerganov/llama.cpp#usage). You could also use Kobold.cpp, but I don't have any experience with it, so I can't help you if you have issues.
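
For reference, the quick-start from the llama.cpp README boils down to roughly this (a sketch as of mid-2023; the model path is a placeholder for a quantized model you've downloaded yourself, so follow the README if anything differs):

```bash
# Build llama.cpp from source (needs git and a C/C++ toolchain)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference with a quantized GGML model placed in ./models
# (placeholder filename; you supply your own model file)
./main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 128
```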

gh0stcassette@lemmy.world 2 points 1 year ago

Adding to this: text-generation-webui (https://github.com/oobabooga/text-generation-webui) works with the latest bleeding-edge llama.cpp via llama-cpp-python, and it has a nice graphical front-end. You do have to manually tell pip to install llama-cpp-python with the right compiler flags to get GPU acceleration working, but the llama-cpp-python and ooba GitHub pages explain how to do this.

You can even set up GPU acceleration through Metal on M1 Macs. I've seen some fucking INSANE performance numbers online for the higher-RAM MacBook Pros (20+ tokens/sec, I think with a 33b model, but it might have been 13b; either way, impressive).
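
The reinstall looks roughly like this (a sketch using the build flag names as of mid-2023; these change over time, so check the llama-cpp-python README):

```bash
# Rebuild llama-cpp-python with GPU support, inside the webui's environment.
# NVIDIA GPUs (cuBLAS):
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python

# Apple Silicon Macs (Metal):
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python
```

After that, raise the n-gpu-layers setting in the webui's Model tab so layers actually get offloaded to the GPU.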

pax@sh.itjust.works 0 points 1 year ago

llama.cpp is crashy on my computer; it didn't even compile.

SkySyrup@sh.itjust.works 0 points 1 year ago

Huh, that's interesting. If llama.cpp doesn't work, try https://github.com/oobabooga/text-generation-webui, which (tries to) provide a more user-friendly experience.
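
The manual setup is roughly this (a sketch assuming conda and the mid-2023 repo layout; the one-click installers on the README automate the same steps):

```bash
# Create an environment and install text-generation-webui
conda create -n textgen python=3.10
conda activate textgen
# (install a suitable PyTorch build first; see the project README)
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Launch the web UI, then open the printed local URL in your browser
python server.py
```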

pax@sh.itjust.works 0 points 1 year ago

It launches just fine, but when loading a model it says something like "successfully loaded none".

SkySyrup@sh.itjust.works 0 points 1 year ago

Have you put your model in the "models" folder in the "text-generation-webui" folder? If you have, then navigate over to the "Model" section (button for the menu should be at the top of the page) and select your model using the box below the menu.
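
If you don't have a model yet, the repo also ships a download helper you can run from the text-generation-webui folder (the model name below is just the README's example; any Hugging Face org/model id works):

```bash
# Downloads the model into ./models, ready to select in the Model tab
python download-model.py facebook/opt-1.3b
```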

pax@sh.itjust.works 0 points 1 year ago

I tried to download an example one, since I don't have any model, but it failed.

this post was submitted on 08 Jun 2023
23 points (100.0% liked)
