Surprising performance, thanks!
I get ~37 t/s in tg and ~60 t/s in pp (less than 2 GB of RAM and 7.2 GB of VRAM on startup) on my PC (5800X, 32 GB DDR4, RTX 3080 Ti 12 GB) with these settings:
llama-server \
--model NVIDIA-Nemotron-3-Nano-30B-A3B-MXFP4_MOE.gguf \
--ctx-size 524288 \
--batch-size 4096 \
--temp 1.0 \
--top-p 1.0 \
--top-k 20 \
--repeat-penalty 1.05 \
--gpu-layers 99 \
--threads 12 \
--threads-batch 16 \
--cpu-moe \
--flash-attn on \
$*
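If you want to sanity-check the t/s numbers yourself, llama-server's native /completion endpoint reports timings in its JSON response. A minimal sketch, assuming the server from the command above is listening on the default port 8080:

```shell
# Ask for a short completion and print the generation speed reported by
# llama-server itself (timings.predicted_per_second = tg tokens/sec).
curl -s http://localhost:8080/completion \
  -d '{"prompt": "Write a haiku about GPUs.", "n_predict": 64}' \
  | python3 -c 'import json,sys; print(round(json.load(sys.stdin)["timings"]["predicted_per_second"], 1), "t/s")'
```

The same response also carries prompt_per_second if you want to check pp speed.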
With --reasoning-budget 0 it's pretty fast for coding with opencode.
It's the fastest model of this size that I have used so far on my limited hardware.
Thank you very much for your effort, much appreciated!
Thanks for the kind words, but I just quantized the model, it's nothing special. All the credit goes to the model creators for making such a good model.
Also, do you disable thinking? Won't that make it worse for opencode?
Also, according to this guide: https://unsloth.ai/docs/models/nemotron-3
NVIDIA recommends these settings for inference:
General chat/instruction (default):
temperature = 1.0
top_p = 1.0
Tool calling use-cases:
temperature = 0.6
top_p = 0.95
Have you tried these settings for tools in opencode? Maybe it will work better.
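For reference, trying the tool-calling settings only means changing two flags in the launch command above; everything else stays the same:

```shell
# Tool-calling sampling per NVIDIA's recommendation; swap these in for
# the general-chat defaults (--temp 1.0 --top-p 1.0).
--temp 0.6 \
--top-p 0.95
```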
I use thinking for reasoning about the project and preparing documents, often with the llama.cpp web interface if I want something fast, or AnythingLLM or Jan (I'm still searching for better alternatives).
In opencode I just do "do this" / "do that", but not the planning/reasoning, because that often leads to very large sessions, and I see a decay in speed when I go past 150k tokens in one hour of session; disabling the reasoning reduces the context a lot (let's say 40-50k in one hour).
The alternative is starting a lot of new sessions.
As for the parameters, I took them from an example on the HF page of the NVIDIA model; I didn't experiment much with them since the quality of the responses is good.
I started doing this stuff about 10 days ago, so I'm still learning.
Wow, with 1M context and full offloading to the RTX 3090 it thinks and spits out decent code at 150 t/s.
llama-server \
-m models/NVIDIA-Nemotron-3-Nano-30B-A3B-MXFP4_MOE.gguf \
-ctk q8_0 -ctv q4_0 \
--ctx-size 1048576 \
--mlock -tb 1 --jinja \
--temp 0.6 --top-p 0.95 \
--fit on