Testing IQ3_KS

#3
by shewin - opened

W790E Sage + QYFS + 512G + RTX5090


```
Computed blk.78.attn_kv_b.weight as 512 x 28672 and stored in buffer CUDA0
=====================================
llama_init_from_model: f16
llama_init_from_model: n_ctx = 80128
llama_init_from_model: n_batch = 4096
llama_init_from_model: n_ubatch = 4096
llama_init_from_model: flash_attn = 1
llama_init_from_model: mla_attn = 3
llama_init_from_model: attn_max_b = 512
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 1
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 3647.83 MiB
llama_init_from_model: KV self size = 3647.79 MiB, c^KV (q8_0): 3647.79 MiB, kv^T: not used
llama_init_from_model: CUDA_Host output buffer size = 0.59 MiB
llama_init_from_model: CUDA0 compute buffer size = 8385.02 MiB
llama_init_from_model: CUDA_Host compute buffer size = 722.05 MiB
llama_init_from_model: graph nodes = 5102
llama_init_from_model: graph splits = 152
XXXXXXXXXXXXXXXXXXXXX Setting only active experts offload
```
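For context, the settings in the log above correspond roughly to an ik_llama.cpp `llama-sweep-bench` invocation like the sketch below. The model path and the `-ot` pattern are assumptions, not the poster's exact command, and the flag producing the "only active experts offload" message is omitted since its exact spelling is not shown in the log:

```
# Hypothetical reconstruction from the logged settings (not the actual command).
./llama-sweep-bench -m /path/to/model-IQ3_KS.gguf \
  -c 80128 -b 4096 -ub 4096 \
  -fa -mla 3 -amb 512 -fmoe \
  -ctk q8_0 -ngl 99 -t 101 \
  -ot exps=CPU
```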

```
main: n_kv_max = 80128, n_batch = 4096, n_ubatch = 4096, flash_attn = 1, n_gpu_layers = 99, n_threads = 101, n_threads_batch = 101
```

|   PP |   TG |  N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|-----:|-----:|------:|-------:|---------:|-------:|---------:|
| 4096 | 1024 |     0 | 41.114 |    99.63 | 67.358 |    15.20 |
| 4096 | 1024 |  4096 | 33.055 |   123.91 | 68.527 |    14.94 |
| 4096 | 1024 |  8192 | 33.924 |   120.74 | 82.819 |    12.36 |
| 4096 | 1024 | 12288 | 34.525 |   118.64 | 91.165 |    11.23 |
| 4096 | 1024 | 16384 | 35.190 |   116.40 | 91.311 |    11.21 |
| 4096 | 1024 | 20480 | 35.921 |   114.03 | 72.911 |    14.04 |
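As a sanity check on these numbers: the throughput columns are just the batch sizes divided by the wall-clock times (S_PP = PP / T_PP, S_TG = TG / T_TG). A minimal sketch, using the values from the table above:

```python
# Recompute prompt-processing and token-generation throughput (tokens/s)
# from the raw sweep-bench timings: S_PP = PP / T_PP, S_TG = TG / T_TG.
rows = [
    # (PP,   TG,    N_KV,  T_PP s, T_TG s)
    (4096, 1024,      0, 41.114, 67.358),
    (4096, 1024,   4096, 33.055, 68.527),
    (4096, 1024,   8192, 33.924, 82.819),
    (4096, 1024,  12288, 34.525, 91.165),
    (4096, 1024,  16384, 35.190, 91.311),
    (4096, 1024,  20480, 35.921, 72.911),
]
for pp, tg, n_kv, t_pp, t_tg in rows:
    print(f"N_KV={n_kv:6d}  S_PP={pp / t_pp:6.2f} t/s  S_TG={tg / t_tg:5.2f} t/s")
```

The recomputed values match the table, and make the trend explicit: generation speed drops from ~15.2 t/s at an empty cache to ~11.2 t/s around 16k tokens of context.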

2026-02-16_14-30
It produced a really playable Pac-Man; a very good model!
But it takes a very long time.

@shewin

Yes, my impression too. It seems very smart and capable with opencode etc., but without lightning indexer or MTP (nextn tensor) support it is slow, given the 40B active parameters (A40B).

Keep an eye on https://huggingface.co/Qwen/Qwen3.5-397B-A17B, but it still needs ik_llama.cpp support: https://github.com/ikawrakow/ik_llama.cpp/issues/1255
