Compressed Model: MilyaShams/Qwen3-1.7B-Pipe_Wanda24_PTQ_W8A16

This model was compressed using the llmcompressor framework.

Compression Details

  • Base Model: Qwen/Qwen3-1.7B
  • Experiment Name: Pipe_Wanda24_PTQ_W8A16
  • Recipe / Modifiers Applied (a reproduction sketch follows this list):
      ◦ WandaPruningModifier(index=None, group=None, start=None, end=None, update=None, initialized_=True, finalized_=True, started_=True, ended_=True, sparsity=0.5, sparsity_profile=None, mask_structure='2:4', owl_m=None, owl_lmbda=None, sequential_update=False, sequential_targets=['Qwen3DecoderLayer'], targets=['Linear'], ignore=[])
      ◦ QuantizationModifier(config_groups=None, targets=['Linear'], ignore=[], scheme='W8A16', kv_cache_scheme=None, weight_observer=None, input_observer=None, output_observer=None, observer=None, bypass_divisibility_checks=False, index=None, group=None, start=None, end=None, update=None, initialized_=True, finalized_=None, started_=True, ended_=True)
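
The sketch below shows how a recipe with these two modifiers could be expressed with llmcompressor's oneshot API. Only the modifier parameters come from the log above; the calibration dataset, sequence length, and sample count are assumptions, and exact argument names can vary between llmcompressor releases, so treat this as illustrative rather than the exact script that produced this checkpoint.

```python
# Hypothetical reconstruction of the compression run (not the exact script used).
# The dataset, sequence length, and calibration-sample count are assumptions;
# the modifier parameters mirror the recipe logged in this card.
from llmcompressor import oneshot
from llmcompressor.modifiers.pruning import WandaPruningModifier
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = [
    # 2:4 semi-structured Wanda pruning of every Linear layer,
    # applied sequentially over Qwen3DecoderLayer blocks
    WandaPruningModifier(
        sparsity=0.5,
        mask_structure="2:4",
        targets=["Linear"],
        sequential_targets=["Qwen3DecoderLayer"],
    ),
    # Post-training quantization: 8-bit weights, 16-bit activations
    QuantizationModifier(scheme="W8A16", targets=["Linear"]),
]

oneshot(
    model="Qwen/Qwen3-1.7B",
    dataset="open_platypus",          # assumed calibration dataset
    recipe=recipe,
    max_seq_length=2048,              # assumed
    num_calibration_samples=512,      # assumed
    output_dir="Qwen3-1.7B-Pipe_Wanda24_PTQ_W8A16",
)
```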

Note: This model card was automatically generated. All structural modifiers and parameters used during compression are logged above.
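
The checkpoint should load like any other Hugging Face model, provided the compressed-tensors package is installed to decode the quantized weights. A minimal usage sketch, assuming the standard transformers stack:

```python
# Minimal usage sketch (assumes transformers plus compressed-tensors are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MilyaShams/Qwen3-1.7B-Pipe_Wanda24_PTQ_W8A16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)

prompt = "Briefly explain 2:4 structured sparsity."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For serving, the same checkpoint can typically also be loaded with vLLM, which supports compressed-tensors checkpoints natively.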
