# Compressed Model: MilyaShams/Qwen3-1.7B-AWQ_W8A8
This model was compressed using the [llmcompressor](https://github.com/vllm-project/llm-compressor) framework.
## Compression Details
- Base Model: Qwen/Qwen3-1.7B
- Experiment Name: AWQ_W8A8
- Recipe / Modifiers Applied (logged `AWQModifier` parameters; a reproduction sketch follows this list):

  ```
  config_groups=None
  targets=['Linear']
  ignore=[]
  scheme='W8A8'
  kv_cache_scheme=None
  weight_observer=None
  input_observer=None
  output_observer=None
  observer=None
  bypass_divisibility_checks=False
  index=None
  group=None
  start=None
  end=None
  update=None
  initialized_=True
  finalized_=True
  started_=True
  ended_=True
  sequential_targets=None
  mappings=[
    AWQMapping(smooth_layer='re:.*input_layernorm$', balance_layers=['re:.*q_proj$', 're:.*k_proj$', 're:.*v_proj$'], activation_hook_target=None),
    AWQMapping(smooth_layer='re:.*v_proj$', balance_layers=['re:.*o_proj$'], activation_hook_target=None),
    AWQMapping(smooth_layer='re:.*post_attention_layernorm$', balance_layers=['re:.*gate_proj$', 're:.*up_proj$'], activation_hook_target=None),
    AWQMapping(smooth_layer='re:.*up_proj$', balance_layers=['re:.*down_proj$'], activation_hook_target=None),
  ]
  offload_device='cpu'
  duo_scaling=True
  n_grid=20
  ```
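The dump above corresponds to an `AWQModifier` from llmcompressor. Below is a minimal sketch of how a compression run with these settings could look; the calibration dataset, sequence length, and sample count are illustrative assumptions, as the card does not record them.

```python
# Minimal reproduction sketch using llmcompressor's oneshot API.
# The AWQModifier settings mirror the logged parameters above; the
# dataset and calibration settings are assumptions, not the originals.
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

recipe = AWQModifier(
    targets=["Linear"],  # quantize every Linear layer, per the log above
    scheme="W8A8",       # 8-bit weights, 8-bit activations
    duo_scaling=True,
    n_grid=20,
)

oneshot(
    model="Qwen/Qwen3-1.7B",
    dataset="open_platypus",      # assumed calibration dataset
    recipe=recipe,
    max_seq_length=2048,          # assumed
    num_calibration_samples=256,  # assumed
    output_dir="Qwen3-1.7B-AWQ_W8A8",
)
```

The logged `mappings` appear to match llmcompressor's default AWQ layer mappings for this architecture, so they are not passed explicitly here.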
Note: This model card was automatically generated. All structural modifiers and parameters used during compression are logged above.
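Checkpoints produced by llmcompressor are saved in the compressed-tensors format, which vLLM can load directly. A minimal serving sketch, assuming a vLLM build with compressed-tensors support:

```python
# Minimal sketch: generate with the compressed checkpoint via vLLM,
# which picks up the quantization config stored in the model repo.
from vllm import LLM, SamplingParams

llm = LLM(model="MilyaShams/Qwen3-1.7B-AWQ_W8A8")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain activation-aware weight quantization."], params)
print(outputs[0].outputs[0].text)
```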