google / gemma-4-26B-A4B-it

exl3 8.00bpw

QUANTIZED BY: UnstableLlama
Information
⚠️ Requires ExLlamaV3 v0.0.29 (or the v0.0.28 `dev` branch)
exl3 quantization of gemma-4-26B-A4B-it via exllamav3.
Repo generated automatically with ezexl3.

Stay tuned for measurements and possible chat template / tokenizer updates as we learn more...
CLI Download

```shell
hf download UnstableLlama/gemma-4-26B-A4B-it-exl3-8.00bpw --local-dir ./gemma-4-26B-A4B-it-exl3-8.00bpw
```
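If you'd rather fetch the repo from Python, the same download can be done with `huggingface_hub.snapshot_download` (a sketch, not part of this card's tooling; the `local_dir` value is an arbitrary choice mirroring the CLI example above):

```python
def fetch_quant(local_dir: str = "./gemma-4-26B-A4B-it-exl3-8.00bpw") -> str:
    """Download every file in the quant repo (safetensors shards, config,
    tokenizer) and return the path to the local copy."""
    # Imported lazily so the function can be defined without the package present.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id="UnstableLlama/gemma-4-26B-A4B-it-exl3-8.00bpw",
        local_dir=local_dir,
    )

if __name__ == "__main__":
    print(fetch_quant())
```

`snapshot_download` resumes interrupted downloads and reuses the local Hugging Face cache, so re-running it is cheap.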
Safetensors
- Model size: 15B params
- Tensor types: BF16, F16, I16

Model tree for UnstableLlama/gemma-4-26B-A4B-it-exl3-8.00bpw

This model is one of 165 quantized variants of the base model.