{
  "quantization": "4bit_nf4",
  "base_model": "Content/MLModels/quant_fp16_spectrostream_decoder.onnx",
  "block_size": 64,
  "double_quant": true,
  "note": "4-bit quantization requires runtime dequantization"
}