Mixed Precision GGUF layer quantization of Llama 3.1 8B Instruct by meta-llama
Original model: https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. All layer quants are K quants, avoiding the slow processing of IQ quants on CPUs and older GPUs. For this file, the Q6_K_H layer quants are as follows:
```
Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0  ffn_d = q8_0
Q6_K_L : attn_v = q8_0  attn_o = q8_0  ffn_d = q8_0
```
```shell
LAYER_TYPES='[
[0 ,"Q6_K_S"],[1 ,"Q6_K_S"],[2 ,"Q5_K_L"],[3 ,"Q5_K_L"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],[6 ,"Q5_K_M"],[7 ,"Q5_K_M"],
[8 ,"Q5_K_M"],[9 ,"Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],[12,"Q5_K_L"],[13,"Q5_K_L"],[14,"Q5_K_L"],[15,"Q5_K_L"],
[16,"Q6_K_S"],[17,"Q6_K_S"],[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_M"],[21,"Q6_K_M"],[22,"Q6_K_M"],[23,"Q6_K_M"],
[24,"Q6_K_L"],[25,"Q6_K_L"],[26,"Q6_K_L"],[27,"Q6_K_L"],[28,"Q8_0" ],[29,"Q8_0" ],[30,"Q8_0" ],[31,"Q8_0" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
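The `LAYER_TYPES` string above is plain JSON, so its layer coverage can be sanity-checked before quantizing. A minimal sketch (the Python variable simply mirrors the shell variable above; the 32-layer count is the standard Llama 3.1 8B block count):

```python
import json
from collections import Counter

# Per-layer quant assignments for the Q6_K_H file, copied from LAYER_TYPES above.
LAYER_TYPES = '''[
[0 ,"Q6_K_S"],[1 ,"Q6_K_S"],[2 ,"Q5_K_L"],[3 ,"Q5_K_L"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],[6 ,"Q5_K_M"],[7 ,"Q5_K_M"],
[8 ,"Q5_K_M"],[9 ,"Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],[12,"Q5_K_L"],[13,"Q5_K_L"],[14,"Q5_K_L"],[15,"Q5_K_L"],
[16,"Q6_K_S"],[17,"Q6_K_S"],[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_M"],[21,"Q6_K_M"],[22,"Q6_K_M"],[23,"Q6_K_M"],
[24,"Q6_K_L"],[25,"Q6_K_L"],[26,"Q6_K_L"],[27,"Q6_K_L"],[28,"Q8_0" ],[29,"Q8_0" ],[30,"Q8_0" ],[31,"Q8_0" ]
]'''

layers = json.loads(LAYER_TYPES)

# Every block of a 32-layer Llama 3.1 8B model must be assigned exactly once.
assert [idx for idx, _ in layers] == list(range(32))

# Tally how many layers use each quant type.
counts = Counter(q for _, q in layers)
print(dict(counts))
```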
This quant is sized at ~Q6_K efficiency.
A smaller Q4_K_H quant is also available:
```
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
Q6_K_S : Q6_K
```
```shell
LAYER_TYPES='[
[0 ,"Q5_K_L"],[1 ,"Q5_K_M"],[2 ,"Q5_K_S"],[3 ,"Q4_K_L"],[4 ,"Q4_K_M"],[5 ,"Q4_K_S"],[6 ,"Q4_K_S"],[7 ,"Q4_K_S"],
[8 ,"Q4_K_S"],[9 ,"Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],[12,"Q4_K_S"],[13,"Q4_K_S"],[14,"Q4_K_S"],[15,"Q4_K_S"],
[16,"Q4_K_M"],[17,"Q4_K_S"],[18,"Q4_K_M"],[19,"Q4_K_S"],[20,"Q4_K_M"],[21,"Q4_K_S"],[22,"Q4_K_M"],[23,"Q4_K_S"],
[24,"Q4_K_M"],[25,"Q4_K_M"],[26,"Q4_K_M"],[27,"Q4_K_L"],[28,"Q5_K_S"],[29,"Q5_K_M"],[30,"Q5_K_L"],[31,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```
This quant is sized at ~Q4_K_M efficiency.
Both layer quant recipes were updated on 12/19/2025, optimized for performance across a small set of curated test prompts.
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| Q4_K_M | 4.9e9 | 7.3 | - |
| Q4_K_H | 5.1e9 | 7.3 | Hybrid quant with Q4_K embedding Q6_K output |
| Q6_K | 6.6e9 | 7.2 | Q6_K with default embedding and output |
| Q6_K_H | 6.6e9 | 7.2 | Hybrid quant with Q6_K embedding Q6_K output |
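The size figures above can be read as effective bits per weight by dividing file size by parameter count. A quick sketch, assuming the commonly cited ~8.03B parameter count for Llama 3.1 8B (the parameter count is not stated in this card):

```python
# File sizes (bytes) from the comparison table above.
SIZES = {"Q4_K_H": 5.1e9, "Q6_K_H": 6.6e9}

# Assumption: ~8.03e9 parameters for Llama 3.1 8B.
N_PARAMS = 8.03e9

for name, size in SIZES.items():
    bpw = size * 8 / N_PARAMS  # bits per weight
    print(f"{name}: {bpw:.2f} bits/weight")
```

Both hybrids land near the nominal bit widths of their Q4_K_M / Q6_K counterparts, which is what the "~ size" comments in the tables refer to.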
Usage:
This model may be used together with fixie-ai ultravox-v0_5-llama-3_1-8b, enabling it to process audio (.mp3 and .wav files) and text inputs and generate text outputs. The mmproj file is available here: https://huggingface.co/steampunque/ultravox-v0_5-llama-3_1-8b-MP-GGUF. More information about running multimedia inputs can be found in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
The model can be speculatively decoded using Llama 3.2 1B Instruct as the draft model. Approximate performance on a 4070, with context and weights in VRAM, using a custom downstream greedy speculator with fixed speculation block length ND:
| Prompt | ND | Gen TPS | Comment |
|---|---|---|---|
| goldcoin | 0 | 68 | non code |
| goldcoin | 4 | 134 | non code |
| humaneval | 0 | 68 | code |
| humaneval | 8 | 162 | code |
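The ND=0 rows are the non-speculated baseline, so the table implies roughly a 2x generation speedup from speculation. A quick check of the ratios:

```python
# Generation throughput (tokens/s) from the table above; ND=0 is the baseline.
baseline = {"goldcoin": 68, "humaneval": 68}
speculated = {"goldcoin": 134, "humaneval": 162}

for prompt in baseline:
    speedup = speculated[prompt] / baseline[prompt]
    print(f"{prompt}: {speedup:.2f}x speedup")
```

Code prompts speculate better than prose here (2.38x vs 1.97x), which is the usual pattern: the small draft model predicts boilerplate-heavy code tokens more reliably.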
goldcoin:
I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river? Use step-by-step reasoning to solve this problem.
humaneval:
generate python code for the described function header:
```python
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
```
Benchmarks:
A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Download the files from the table below:
| Link | Type | Size/e9 B | Notes |
|---|---|---|---|
| Llama-3.1-8B-Instruct.Q4_K_H.gguf | Q4_K_H | 5.1e9 B | ~ Q4_K_M size |
| Llama-3.1-8B-Instruct.Q6_K_H.gguf | Q6_K_H | 6.6e9 B | ~ Q6_K size |
| ultravox-v0_5-llama-3_1-8b.mmproj.gguf | F16 | 1.38e9 B | multimedia projector |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository.