Update README.md

- When loading this model, you must set `trust_remote_code=True` so that the changes related to the MTP layer in `modeling_deepseek.py` take effect (see the loading sketch after this list).
- After loading this model with `transformers`, **evaluation should NOT be performed directly**: the forward function for the added MTP layer in `modeling_deepseek.py` is implemented only for calibration during quantization, so its computation is not guaranteed to match the original DeepSeek-R1-0528.
- Therefore, when quantizing with AMD-Quark, you **must add the `--skip_evaluation` option** so that the evaluation step is skipped and only quantization is performed.
- To skip quantization for the MTP layers, set `exclude_layers="lm_head *self_attn* *mlp.gate *eh_proj *shared_head.head model.layers.61.*"`.
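
For reference, here is a minimal loading sketch; the model path below is a placeholder, not the actual repository id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: substitute the local directory or repo id of this checkpoint.
model_id = "path/to/DeepSeek-R1-0528-MTP-checkpoint"

# trust_remote_code=True is required so that the MTP-layer changes in
# modeling_deepseek.py take effect instead of the stock DeepSeek implementation.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
)

# Do NOT evaluate this model directly: the MTP forward pass exists only to drive
# calibration during quantization, so pass --skip_evaluation when running AMD-Quark.
```
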
Below is an example of how to quantize this model: