Testing smol-IQ5_KS

#13
by shewin - opened

Tensor blk.47.ffn_down_exps.weight (size = 507.00 MiB) buffer type overriden to CUDA_Host

Allocating 71.86 GiB of pinned host memory, this may take a while.
Using pinned host memory improves PP performance by a significant margin.
But if it takes too long for your model and amount of patience, kill the process and run using

GGML_CUDA_NO_PINNED=1 your_command_goes_here
done allocating 71.86 GiB in 19893.7 ms
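If the pinned allocation stalls for too long, the log's suggested workaround looks like this in practice. The binary name and flags below are assumptions for illustration (this table format matches `llama-sweep-bench`); substitute whatever command actually produced the run above:

```shell
# Skip the slow pinned-host allocation entirely. PP throughput may drop,
# but startup is much faster. Command name and flags are illustrative only.
GGML_CUDA_NO_PINNED=1 ./llama-sweep-bench -m smol-IQ5_KS.gguf -c 250112 -b 8096 -ub 8096
```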

llm_load_tensors: offloading 48 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 49/49 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 73588.97 MiB
llm_load_tensors: CUDA0 buffer size = 5607.73 MiB
....................................................................................................
ggml_backend_cuda_context: have 0 graphs
llama_init_from_model: n_ctx = 250112
llama_init_from_model: n_batch = 8096
llama_init_from_model: n_ubatch = 8096
llama_init_from_model: flash_attn = 1
llama_init_from_model: attn_max_b = 8096
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 1
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: v_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 10000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 6011.06 MiB
llama_init_from_model: KV self size = 5862.00 MiB, K (f16): 2931.00 MiB, V (f16): 2931.00 MiB
llama_init_from_model: CUDA_Host output buffer size = 0.95 MiB
llama_init_from_model: CUDA0 compute buffer size = 7763.94 MiB
llama_init_from_model: CUDA_Host compute buffer size = 3957.29 MiB
llama_init_from_model: graph nodes = 3137
llama_init_from_model: graph splits = 98
llama_init_from_model: enabling only_active_experts scheduling

main: n_kv_max = 250112, n_batch = 8096, n_ubatch = 8096, flash_attn = 1, n_gpu_layers = 99, n_threads = 101, n_threads_batch = 101

| PP | TG | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|---:|---:|---:|---:|---:|---:|---:|
| 8096 | 2024 | 0 | 3.155 | 2565.69 | 41.277 | 49.04 |
| 8096 | 2024 | 8096 | 3.215 | 2518.47 | 42.045 | 48.14 |
| 8096 | 2024 | 16192 | 3.320 | 2438.25 | 42.695 | 47.41 |
| 8096 | 2024 | 24288 | 3.433 | 2358.19 | 44.400 | 45.59 |
| 8096 | 2024 | 32384 | 3.543 | 2285.07 | 44.800 | 45.18 |
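The S_PP and S_TG columns are just tokens divided by wall time. A quick sanity check against the first row of the table (values copied from above; small rounding differences versus the log are expected, since the log computes from more precise timings):

```shell
# S_PP = PP / T_PP, S_TG = TG / T_TG, using the first table row.
awk 'BEGIN {
  pp = 8096; t_pp = 3.155   # prompt tokens, prompt processing time (s)
  tg = 2024; t_tg = 41.277  # generated tokens, generation time (s)
  printf "S_PP = %.2f t/s, S_TG = %.2f t/s\n", pp/t_pp, tg/t_tg
}'
```

Reading down the table the same way shows PP throughput dropping roughly 11% between an empty cache and 32K tokens of context, while TG loses about 8%.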

Not top-tier, but fast and quite usable.

Yes, this quant is my "daily driver", fully offloaded onto 2x A6000 GPUs (the older ones with 48 GB VRAM each, similar to 3090s).

It is the first local model that, for me, was "good enough" and "fast enough" to experiment with opencode for local vibe coding, basic web stuff, etc. It is genuinely useful and saves me time over grep'ing through all the code myself for a quick explanation, etc.

I agree though, it is noticeably worse and makes more mistakes than big GLM-5.1, but as you mention, that one slows down quite a bit at long context and has no -sm graph support.

How about the 397B version?! I found it to be my daily driver since it fits very well on 8x3090 with minimal RAM spill. It's definitely not GLM-5.1, but not far off at all...

@dehnhaide

I don't have enough VRAM to fully offload a reasonable 397B quant! πŸ˜›

I'm now experimenting with opencode to set up two agents: one for fast stuff (grepping and researching), and one for slow but more important stuff (actually writing code out)...

The config still feels loosey-goosey though... too many files and options in different places. Maybe a `.opencode` directory alone, to keep it all in one place per project?

One night I'd love to leave a "ralph loop" running, e.g. `while true; do cat PROMPT.md | opencode run -; done`, haha... and see what monster is there in the morning, assuming I've got some kind of benchmark for it to optimize...
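A slightly tamer take on that loop, as a sketch: bound the number of iterations and bail out if the agent errors, instead of looping forever. The `ralph_loop` function name and the iteration cap are my own inventions for illustration; in practice the command argument would be `opencode run -`:

```shell
# Run `cat PROMPT.md | <command>` up to <max> times, stopping early if
# the command fails, then report how many passes completed.
ralph_loop() {
  prompt=$1; max=$2; shift 2
  i=0
  while [ "$i" -lt "$max" ]; do
    cat "$prompt" | "$@" || break   # stop the loop if the agent errors out
    i=$((i + 1))
  done
  echo "completed $i runs"
}

# Hypothetical overnight usage (command is an assumption, not tested here):
# ralph_loop PROMPT.md 50 opencode run -
```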
