Auto-Quantized GGUF Model
This repository contains automated GGUF quantization files for coder3101/gemma-4-E4B-it-heretic.
The calibration data for the imatrix is targeted at Chinese novels and role-playing (RP), while preserving logic and common sense.
Perplexity Evaluation
(Tested against the provided calibration dataset)
- Base (F16/BF16): PPL = 109.1430 +/- 1.51189
- Q5_K_M: PPL = 445.1681 +/- 7.15013
The Q5_K_M PPL is abnormally high relative to the base model; the cause is currently unknown.
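For context on what the numbers above mean: perplexity is the exponential of the mean per-token negative log-likelihood over the evaluation text, which is how tools such as llama.cpp's perplexity utility report it. The sketch below shows the formula with hypothetical per-token log-likelihoods (the values are illustrative, not taken from this model's evaluation):

```python
import math

def perplexity(log_likelihoods):
    """Perplexity = exp(mean negative log-likelihood), natural log."""
    nll = -sum(log_likelihoods) / len(log_likelihoods)
    return math.exp(nll)

# Hypothetical per-token log-likelihoods for illustration only:
lls = [-4.2, -5.1, -4.8, -4.5]
print(perplexity(lls))
```

A higher quantization-induced loss shows up as a larger mean NLL, and hence an exponentially larger PPL, which is why the jump from ~109 to ~445 above is a strong signal that something is off.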
Model tree for nuofang/gemma-4-E4B-it-heretic-GGUF
- Base model: google/gemma-4-E4B-it
- Finetuned: coder3101/gemma-4-E4B-it-heretic