Compressed Model: MilyaShams/Qwen3-1.7B-Wanda_1_4

This model was compressed using the llmcompressor framework.

Compression Details

  • Base Model: Qwen/Qwen3-1.7B
  • Experiment Name: Wanda_1_4
  • Recipe / Modifiers Applied:
  • sparsity: 0.25
  • mask_structure: '1:4' (one weight zeroed in every group of four, i.e. 25% semi-structured sparsity)
  • targets: ['Linear']
  • sequential_targets: ['Qwen3DecoderLayer']
  • sequential_update: False
  • ignore: []
  • sparsity_profile / owl_m / owl_lmbda: None
  • lifecycle flags: initialized_=True, started_=True, ended_=True, finalized_=True (index, group, start, end, update: None)

Note: This model card was automatically generated. All structural modifiers and parameters used during compression are logged above.
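To illustrate what the 1:4 mask structure in the recipe means, below is a minimal sketch of Wanda-style semi-structured pruning: score each weight by |W| times the L2 norm of its input activations, then zero the lowest-scoring weight in every group of four along the input dimension. This is a hypothetical standalone helper for illustration only, not llmcompressor's actual implementation; the function name, random data, and activation norms are invented for the example.

```python
import numpy as np

def wanda_1_4_mask(weights, act_norms):
    """Sketch of Wanda 1:4 semi-structured pruning (illustrative only).

    weights:   (out_dim, in_dim) weight matrix
    act_norms: (in_dim,) per-input-channel L2 norms from calibration data
    Returns a 0/1 mask that zeroes the lowest-score weight in every
    group of 4 along the input dimension -> 25% sparsity (sparsity=0.25).
    """
    # Wanda importance score: |W| * ||X||_2, broadcast over output rows
    scores = np.abs(weights) * act_norms
    out_dim, in_dim = scores.shape
    groups = scores.reshape(out_dim, in_dim // 4, 4)
    mask = np.ones_like(groups)
    # Index of the weakest weight within each group of 4
    idx = groups.argmin(axis=-1)
    np.put_along_axis(mask, idx[..., None], 0.0, axis=-1)
    return mask.reshape(out_dim, in_dim)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))            # toy weight matrix
norms = rng.uniform(0.5, 2.0, size=16)  # toy calibration activation norms
mask = wanda_1_4_mask(W, norms)
print(mask.mean())  # → 0.75 (three of every four weights kept)
```

The pruned weights would then be `W * mask`; in the actual compressed model this masking was applied per Linear layer, one Qwen3DecoderLayer at a time.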

Model Details

  • Format: Safetensors
  • Model size: 2B params
  • Tensor type: F16
