LLAMA.CPP + ROCm + DFlash on 7900 XTX

#1
by flamme-demon - opened


Here are the tests I ran at home on a custom version of llama.cpp (https://github.com/spiritbuun/buun-llama-cpp)

Models: Qwen3.5-27B-Q4_K_M and Qwen3.6-27B-Q4_K_M

Thanks for the feedback! We will continue updating the model weights; training is still at an early stage.

When do you expect the model weights to be stable and as good as 3.5? I've already had good results with 3.6 since the update 4 hours ago: pretty good on math and on long JSON/code outputs, but rather bad on short outputs, especially general text, in comparison.

── Run 1/2 ──────────────────────────────────────
[Q&A] 256 tokens in 9.02s = 28.3 tok/s (prompt: 23)
[Code] 512 tokens in 15.80s = 32.4 tok/s (prompt: 30)
[JSON] 1024 tokens in 23.22s = 44.0 tok/s (prompt: 48)
[Math] 64 tokens in 1.34s = 47.7 tok/s (prompt: 29)
[LongCode] 2048 tokens in 50.32s = 40.6 tok/s (prompt: 37)

── Run 2/2 ──────────────────────────────────────
[Q&A] 256 tokens in 9.02s = 28.3 tok/s (prompt: 23)
[Code] 512 tokens in 15.78s = 32.4 tok/s (prompt: 30)
[JSON] 1024 tokens in 22.93s = 44.6 tok/s (prompt: 48)
[Math] 64 tokens in 1.34s = 47.7 tok/s (prompt: 29)
[LongCode] 2048 tokens in 50.40s = 40.6 tok/s (prompt: 37)
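As a sanity check, the tok/s figures above are just generated tokens divided by wall time; a quick sketch to reproduce them (the last decimal can differ slightly depending on exactly what the harness times):

```python
# Recompute the Run 1 throughput figures: tokens generated / elapsed seconds.
runs = {
    "Q&A":      (256, 9.02),
    "Code":     (512, 15.80),
    "JSON":     (1024, 23.22),
    "Math":     (64, 1.34),
    "LongCode": (2048, 50.32),
}
for name, (tokens, seconds) in runs.items():
    print(f"[{name}] {tokens / seconds:.1f} tok/s")
```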


Running on DGX Spark - Prismaquant 5.5bit in vLLM. Maybe I should try the FP8 version? Anyway, it's amazing to get this speed already on the DGX Spark, which has only 273 GB/s of memory bandwidth.
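That bandwidth figure explains the baseline nicely: single-stream decoding on a dense model is roughly memory-bandwidth-bound, since every generated token streams the full weight set once. A back-of-the-envelope roofline (assuming "5.5bit" really averages ~5.5 bits per weight across the 27B parameters, and ignoring KV-cache and activation traffic, which push the real number lower):

```python
# Rough memory-bandwidth roofline for single-stream decoding:
# each token reads all weights once, so tok/s <= bandwidth / model_bytes.
params = 27e9            # 27B parameters (assumed from the model name)
bits_per_weight = 5.5    # "Prismaquant 5.5bit" (assumed average)
bandwidth = 273e9        # DGX Spark memory bandwidth, bytes/s

model_bytes = params * bits_per_weight / 8
print(f"model size: {model_bytes / 1e9:.1f} GB")
print(f"roofline:   {bandwidth / model_bytes:.1f} tok/s")
```

That lands at roughly 15 tok/s, right at the top of the quoted 10-15 tok/s baseline, which is why speculative decoding is the main lever for going faster on this hardware.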

As a baseline I got 10-15 tok/s, depending on length, format, and context depth. So on math, JSON, and code that's basically a 2.5-3x speedup; on general text, ~2x.

Thank you for this. Really exciting.

RTX 4090 (24 GB) + 64 GB RAM, same custom version of llama.cpp (https://github.com/spiritbuun/buun-llama-cpp) with this command:

exec /home/lifei/code/buun-llama-cpp/build/bin/llama-server \
  --model /media/e/Models/LLM/Qwen3.6-27B-Q5_K_M.gguf \
  --alias "Qwen3.6-27B" \
  -md /media/e/Models/downloads/dflash-draft-3.6-q4_k_m.gguf \
  --no-mmap --no-warmup \
  --image-min-tokens 1024 \
  -np 1 -cd 256 -b 256 -ub 64 \
  --temp 0.6 \
  --top-p 0.95 \
  --top-k 20 \
  --min-p 0.00 \
  --kv-unified \
  --fit on \
  --reasoning off \
  --parallel 1 \
  --flash-attn on \
  --no-context-shift -ngl 99 \
  --jinja \
  -ctk turbo4 -ctv turbo4 \
  -c 65536 \
  --host 0.0.0.0 \
  --spec-type dflash \
  --draft-max 15 \
  --mlock \
  --prio 3

and got this result:

srv  update_slots:   verify ubatch: 16 tok, 121.8ms (7.61ms/tok)
slot print_timing: id  0 | task 56 |
prompt eval time =     213.87 ms /    17 tokens (   12.58 ms per token,    79.49 tokens per second)
       eval time =  152296.66 ms /   886 tokens (  171.89 ms per token,     5.82 tokens per second)
      total time =  152510.54 ms /   903 tokens
draft acceptance rate = 0.09722 (  525 accepted /  5400 generated)
statistics copyspec: #calls(b,g,a) = 2 404 0, #gen drafts = 0, #acc drafts = 0, #gen tokens = 0, #acc tokens = 0, dur(b,g,a) = 0.372, 0.222, 0.000 ms
statistics dflash: #calls(b,g,a) = 2 404 288, #gen drafts = 404, #acc drafts = 288, #gen tokens = 6060, #acc tokens = 581, dur(b,g,a) = 0.000, 99259.742, 0.037 ms
slot      release: id  0 | task 56 | stop processing: n_tokens = 902, truncated = 0
srv  update_slots: spec cycle (1 slots): draft=310.1ms verify=121.8ms accept=29.6ms other=0.0ms total=461.4ms
srv  update_slots: all slots are idle
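For context on reading that log (my interpretation of the stats lines, not official documentation): the acceptance rate is accepted draft tokens over generated draft tokens, and with `--draft-max 15` a ~10% rate means almost the entire draft is discarded every cycle, so the drafting time (~99 s of the ~152 s total here) is nearly pure overhead:

```python
# Reading the speculative-decoding stats from the log above.
accepted, generated = 525, 5400      # "draft acceptance rate" line
gen_drafts, acc_tokens = 404, 581    # "statistics dflash" line
draft_max = 15                       # --draft-max from the command

rate = accepted / generated
per_cycle = acc_tokens / gen_drafts  # extra tokens kept per draft cycle
print(f"acceptance rate: {rate:.5f}")
print(f"avg accepted per cycle: {per_cycle:.2f} of {draft_max} drafted")
```

With only ~1.4 of 15 drafted tokens kept per cycle, speculation costs more than it saves; the usual first things to try are a much smaller `--draft-max` or a draft model better matched to the target.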

What's wrong?
