What is your GPU? To be blunt, there is no Arc card with 20GB of VRAM, so that may actually be your IGP.
B580 + A750. They do work together.
Oh yeah, presumably through SYCL or Vulkan splitting.
I'd try Qwen3 30B, maybe with a custom quantization if it doesn't quite fit in your VRAM pool (it should be very close). It should be very fast and quite smart.
Qwen3 32B (a fully dense model) would fit too, but you would definitely need to tweak the settings for it not to be really slow.
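If you do end up needing a custom quant, the rough workflow with llama.cpp's `llama-quantize` tool looks like this. This is only a sketch: the file names are placeholders, and the quant type is just one example of something that should squeeze under a ~20GB pool.

```
# Sketch only: quantize a full-precision GGUF down to a smaller type.
# Input/output names are made up; pick whatever quant type fits your VRAM.
./llama-quantize Qwen3-30B-A3B-F16.gguf Qwen3-30B-A3B-Q4_K_S.gguf Q4_K_S
```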
Qwen3 also doesn't work because I'm using the ipex-llm docker container, which has ollama 0.5.8 or something. It doesn't matter now because I've taken the test I was practicing for since posting this. Playing with Qwen3 on CPU, it seems good, but the reasoning feels like most open reasoning models, where it gets the right answer and then goes "wait, that's not right..."
Yeah it does that, heh.
The Qwen team recommends a fairly high temperature, but I find it's better with modified sampling (lower temperature, 0.1 MinP, and a bit of repetition penalty or DRY). Then it tends not to "second guess" itself and take the lower-probability path of continuing to reason.
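If you want to pin those samplers down on the ollama side, a minimal sketch would be to bake them into a derived model with a Modelfile. The parameter names below are ollama's standard ones (DRY isn't exposed there, so only a plain repeat penalty is shown), the model tag is assumed to be the 30B MoE from the ollama library, and the exact values are just a starting point.

```
# Sketch: derive a model with lower temperature, MinP, and a mild repeat penalty
cat > Modelfile <<'EOF'
FROM qwen3:30b
PARAMETER temperature 0.3
PARAMETER min_p 0.1
PARAMETER repeat_penalty 1.05
EOF
ollama create qwen3-tweaked -f Modelfile
ollama run qwen3-tweaked
```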
If you're looking for alternatives, koboldcpp does support Vulkan. It may not be as fast as the (SYCL?) docker container, but it supports newer models and more features. It's also precompiled as a one-click exe: https://github.com/LostRuins/koboldcpp
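A minimal sketch of a koboldcpp launch on the Vulkan backend, for reference. Flag names are from memory of its CLI (check `--help`), and the model path and layer count are placeholders.

```
# Run a GGUF on Vulkan with (up to) all layers offloaded to the GPU
koboldcpp.exe --usevulkan --gpulayers 99 --contextsize 8192 --model Qwen3-30B-A3B-Q4_K_S.gguf
```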
I don't have any good recommendations. I just upload such one-off requests to AI Studio, ChatGPT, and the like. But keep in mind AI isn't perfect at math; they sure make a lot of mistakes with my assignments. I don't know what level your maths test was. AI does an acceptable job at elementary-school maths; with higher-level maths it'll give both correct and wrong results by chance. Might be good enough, I don't really know.
I'd recommend Wolfram Alpha. It's not local, nor is it AI, but it solves equations, calculates, transforms, and draws graphs with precision, and there isn't any guessing involved.
Assuming you're using ollama (is there another reason to use ollama.com?), you can use compatible files from Hugging Face directly in ollama. The model page will give you the instructions for the command to run; I always change `ollama run` to `ollama pull`, though. Instructions: https://huggingface.co/docs/hub/ollama
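In short, the pattern from that page looks like this; the pieces in braces are placeholders, not a specific recommendation.

```
# Generic form -- substitute a real GGUF repo and one of its quant tags
ollama pull hf.co/{username}/{repository}:{quantization}
```

Once it's downloaded you can `ollama run` the same name.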
You should be able to fit Qwen3 32B at `Q4_K_M` with an acceptable context, and it did very well on math benchmarks (with thinking enabled). You can disable thinking by including `/no_think` at the end of your prompt to speed up responses, but I'm not sure how well it handles math under those circumstances. I wouldn't even consider disabling thinking unless you were grading one question per prompt.
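For example, the same question with and without thinking would look roughly like this (model tag as pulled from the ollama library; the prompt is arbitrary):

```
# Thinking enabled (default)
ollama run qwen3:32b "Solve for x: 3x + 7 = 22"

# Thinking disabled -- /no_think goes at the end of the prompt
ollama run qwen3:32b "Solve for x: 3x + 7 = 22 /no_think"
```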
The ollama Qwen3 page is https://ollama.com/library/qwen3:32b and the default 32B quant is `Q4_K_M`. I personally am using the `Q6_K` quant by unsloth, and their quants have been great (when supported by ollama), often being the first to fix bugs impacting other quantizations. I'm not sure if `Q4_K_M` is the optimal quant style for Intel Arc, but the others that might be better are not supported by ollama anyway, as far as I know.
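If you want the unsloth quant instead of the library default, the hf.co shortcut above would be something like this; the repo name and tag are my guess at unsloth's GGUF upload, so double-check it on Hugging Face first.

```
# Assumed repo/tag -- verify on Hugging Face before pulling (it's a large download)
ollama pull hf.co/unsloth/Qwen3-32B-GGUF:Q6_K
```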
Qwen3's real-world knowledge is bad, so if there are questions that rely on it, you may need to include the relevant facts as part of the prompt or use an ollama frontend that supports web search.
Other options: this does seem like something Gemma3 27B would be good at, so it's too bad you can't use it. Older Gemmas may be good, but I'm not sure. Llama3.3 70B is also out, unless you have a decent amount of system RAM and are okay with offloading less than half of it to the GPU; I could see it outperforming my recommendation above, but I would be very surprised if the 8B version outperformed it. Older Qwen2.5 is decent at math, but it doesn't include thinking unless you grab QwQ.
Unfortunately I can't run Qwen3 with Intel either. I'm just doing gemma3:12b on CPU for now. I might try QwQ, as I think it runs on older ollama versions.
If you were down to use Hugging Face, DeepHermes is a reasoning model built on top of Mistral Small 24B. It'd fit decently well in 20GB.
Maybe the `ollama run hf.co/{username}/{repository}` command would make it easy enough for you.
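Roughly something like this; the repo name is my guess at where a GGUF of it lives, so check Hugging Face for the actual upload and pick a quant tag that fits your VRAM.

```
# Assumed repo name -- verify on Hugging Face before running
ollama run hf.co/NousResearch/DeepHermes-3-Mistral-24B-Preview-GGUF
```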
Reasoning models usually are better for math.