Code Llama 70b on text-generation-webui?
If you're using text-generation-webui, there's a bug where, if your max new tokens is equal to your prompt truncation length, it removes the entire input and therefore just generates nonsense, since no prompt gets through.
Reduce your max new tokens and your prompt should actually get passed to the backend. This is more noticeable with models that only have 4k context, since a lot of people default max new tokens to 4k.
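In other words, the prompt budget is whatever is left of the truncation length after reserving room for generation. Here's a minimal sketch of that arithmetic, assuming the webui reserves max new tokens out of the truncation length (the function is illustrative, not the webui's actual code):

```python
def usable_prompt_tokens(truncation_length: int, max_new_tokens: int) -> int:
    # Tokens of the prompt that survive truncation: the context window
    # minus the room reserved for the reply (never negative).
    return max(truncation_length - max_new_tokens, 0)

# A 4k-context model with max new tokens also set to 4096 keeps nothing:
print(usable_prompt_tokens(4096, 4096))  # 0 -> empty prompt, nonsense output
# Lowering max new tokens leaves room for the prompt:
print(usable_prompt_tokens(4096, 512))   # 3584 prompt tokens survive
```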
I’m just using Ollama with Ollama WebUI. You’ll have to use the right tag when pulling the model to make sure you get the 70b.
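For reference, assuming the standard Code Llama tags on the Ollama library (the exact tag isn't stated in this thread), that would be something like `ollama pull codellama:70b`; without the `:70b` tag you get the smaller default variant.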