Check out Thireus GGUF-Tool-Suite quants!

#13
by ubergarm - opened

Holy smokes, if I understand it correctly, @Thireus just finished ~660 KLD benchmark runs for the big Qwen3.5-397B-A17B?! (60 layers x 11 tensor types per layer).

https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models/Qwen3.5-397B-A17B#-how-does-it-compare-to-other-ggufs

I think the approach is to take a baseline KLD measurement using Q8_0, then do 660 runs to measure the relative sensitivity of every tensor/layer, and finally use the tool to download the shards and glue them together with a metadata-only first GGUF split file.
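If my reading is right, the run count is simply one KLD benchmark per (layer, tensor-type) pair:

```shell
# One sensitivity run per (layer, tensor type) pair, per my reading above
layers=60
tensor_types=11
runs=$((layers * tensor_types))
echo "$runs runs"   # -> 660 runs
```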

EDIT - UPDATED GRAPH BELOW
[image: updated comparison graph]

Looking at this graph, it seems to be working quite well!!

Amazing job, @Thireus

That's right :)
Been doing this for almost 1 year lol

The approach is described here: https://github.com/Thireus/GGUF-Tool-Suite/blob/main/docs/Benchmarking%20models%20-%20How.md

Mother of God! What an epic effort. I'm still reading in disbelief, but you make even the German spirit of "document everything" seem small and pitiful! Hut ab (hats off), @Thireus! 🙏

P.S. I'm still reading but I have a feeling some very geeky weeks are coming!

@ubergarm , you made me double-check the PPL results. There was a shift in the PPL assignments. Rest assured your quants are still on the same average curve! Sorry about that. 😅

PPL drift fixed

@Thireus

thanks for double checking and letting me know!

Your quants are still really good, and since you're quantizing more of the layers they're likely faster than mine. I'm leaving the non-routed experts at full q8_0 much of the time (those are the always-active weights).

Also, your tool allows a perfect fit for a given rig/hardware configuration, which is beyond my ability to do by hand.

Appreciate you and have a great weekend!

@ubergarm , well, you are producing GGUFs uber-quickly, while with my tool it takes days to benchmark everything. But indeed, once the benchmarking is done there is not much else to do: the quant-assign script automatically picks the correct quant combination and produces the recipe in seconds for the target size the user selects.

Thank you for all your work as always. Specifically with the imatrix calibration dataset and files!

hi @Thireus

If I want to download your GGUF without a browser, can I just download all the links and rename them to a shared prefix path, keeping the -SPECIAL_TENSOR-00001-of-01099.gguf style suffix?

https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-BF16-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01099.gguf
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_TENSOR-00002-of-01099.gguf
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_TENSOR-00003-of-01099.gguf
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-IQ3_K-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-IQ3_K-SPECIAL_TENSOR-00004-of-01099.gguf
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-IQ3_K-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-IQ3_K-SPECIAL_TENSOR-00005-of-01099.gguf
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_TENSOR-00006-of-01099.gguf

@baytrail , yes that's right. Make sure to rename all these .gguf files to the same prefix before the "-xxxxx-of-01099.gguf" part, for example "my_custom_gguf-xxxxx-of-01099.gguf". Then you can either point llama-server at the first one, or use the llama-gguf-split utility to merge them into one single GGUF.
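For a scripted download, a minimal sketch along these lines should work. The prefix my_custom_gguf is just an example name, and the script only prints the curl commands (a dry run); pipe its output to sh once you've checked the list:

```shell
#!/bin/sh
# Sketch only: emit curl commands that save each shard under a common prefix.
# "my_custom_gguf" is an example prefix; pipe the output to sh to download.
PREFIX="my_custom_gguf"
while IFS= read -r url; do
  # Everything after "SPECIAL_TENSOR" is the "-NNNNN-of-01099.gguf" suffix
  suffix="${url##*SPECIAL_TENSOR}"
  printf 'curl -L -o %s%s %s\n' "$PREFIX" "$suffix" "$url"
done <<'EOF'
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-BF16-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01099.gguf
https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_SPLIT/resolve/main/Qwen3.5-397B-A17B-THIREUS-IQ4_KT-SPECIAL_TENSOR-00002-of-01099.gguf
EOF
```

Running the emitted commands produces my_custom_gguf-00001-of-01099.gguf, my_custom_gguf-00002-of-01099.gguf, and so on, which share the prefix llama-server needs to load them as one split set.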

Alternatively, I recommend using the CLI utility quant_downloader.sh from https://github.com/Thireus/GGUF-Tool-Suite, as described in the README:

GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/Thireus/GGUF-Tool-Suite
cd GGUF-Tool-Suite
# Make sure to copy the relevant download.conf for the model before running quant_assign.py
rm -f download.conf
# Use the download.conf of the chosen model
cp -f models/DeepSeek-R1-0528/download.conf .
mkdir -p kitchen && cd kitchen
# Obtain a recipe example for the chosen model from ../recipe_examples/
../quant_downloader.sh ../recipe_examples/ik_harmonized_recipes/DeepSeek-R1-0528.THIREUS-1.9413bpw-4.3624ppl.151GB-GGUF_11GB-GPU_140GB-CPU.569b7f6_bb4f3c8.recipe

Most importantly, if you do not plan to merge the GGUF shards into one single file, make sure you have compiled ik_llama.cpp with -DGGML_MAX_CONTEXTS=2048 as described in the README, and raise the open-file limit with ulimit -n 9999; otherwise the binary will not be able to open that many .gguf files.
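A build sketch under those constraints (the repo path and cmake invocation here are assumptions; check the ik_llama.cpp README for the authoritative steps):

```shell
# Assumed build steps -- verify against the ik_llama.cpp README.
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
# Raise the compiled-in GGUF context limit so ~1099 shard files can be opened
cmake -B build -DGGML_MAX_CONTEXTS=2048 -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
# Raise the per-process open-file limit in the shell that launches llama-server
ulimit -n 9999
```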
