Commit bdbf0fa

Duplicate from z-lab/Qwen3.6-35B-A3B-DFlash

Co-authored-by: Jian Chen <jianchen0311@users.noreply.huggingface.co>

Files changed:
- .gitattributes +36 -0
- README.md +179 -0
- assets/dflash_system.png +3 -0
- assets/speedup.png +0 -0
- config.json +62 -0
- dflash.py +188 -0
- model.safetensors +3 -0

.gitattributes
ADDED
@@ -0,0 +1,36 @@

```text
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
assets/dflash_system.png filter=lfs diff=lfs merge=lfs -text
```

README.md
ADDED
@@ -0,0 +1,179 @@

---
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
- dflash
- speculative-decoding
- block-diffusion
- draft-model
- efficiency
- qwen
- diffusion-language-model
---

# Qwen3.6-35B-A3B-DFlash

[**Paper**](https://arxiv.org/abs/2602.06036) | [**GitHub**](https://github.com/z-lab/dflash) | [**Blog**](https://z-lab.ai/projects/dflash/)

**DFlash** is a speculative decoding method that uses a lightweight **block diffusion** model to draft multiple tokens in parallel. This is the drafter model, which must be paired with the target model [Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B).

<div align="center">
<img src="assets/dflash_system.png" alt="DFlash Architecture" width="85%">
</div>
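
The drafter proposes an entire block of tokens per step and the target model verifies the block in a single forward pass. As a minimal sketch of that draft-and-verify loop (illustrative only: the function names and the greedy-verification rule here are assumptions, and the real integration lives inside vLLM/SGLang as configured below):

```python
from typing import Callable, List

def speculative_decode_block(
    draft_block: Callable[[List[int]], List[int]],    # drafter: context -> block of proposed tokens
    target_greedy: Callable[[List[int]], List[int]],  # target: sequence -> greedy next token at every position
    prompt: List[int],
    max_new_tokens: int,
    block_size: int = 16,
) -> List[int]:
    """Toy block-speculative loop: draft a block in parallel, verify it with
    one target forward, keep the longest matching prefix plus a bonus token."""
    tokens = list(prompt)
    produced = 0
    while produced < max_new_tokens:
        proposal = draft_block(tokens)[:block_size]
        # One target forward over context + proposal gives the target's greedy
        # prediction after every prefix; preds[i] follows (tokens + proposal)[:i + 1].
        preds = target_greedy(tokens + proposal)
        accepted = []
        for i, tok in enumerate(proposal):
            if preds[len(tokens) + i - 1] == tok:
                accepted.append(tok)
            else:
                break
        # Even on a mismatch, the verification pass yields one correct target token.
        bonus = preds[len(tokens) + len(accepted) - 1]
        tokens += accepted + [bonus]
        produced += len(accepted) + 1
    return tokens[: len(prompt) + max_new_tokens]
```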

## Quick Start

### Installation

vLLM:
```bash
# Stable release:
uv pip install vllm
# Or the nightly build:
uv pip install -U vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
```

SGLang:
```bash
uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/20547/head#subdirectory=python"
```

### Launch Server

vLLM:
```bash
vllm serve Qwen/Qwen3.6-35B-A3B \
    --speculative-config '{"method": "dflash", "model": "z-lab/Qwen3.6-35B-A3B-DFlash", "num_speculative_tokens": 15}' \
    --attention-backend flash_attn \
    --max-num-batched-tokens 32768
```

SGLang:
```bash
# Optional: enable schedule overlapping (experimental, may not be stable)
# export SGLANG_ENABLE_SPEC_V2=1
# export SGLANG_ENABLE_DFLASH_SPEC_V2=1
# export SGLANG_ENABLE_OVERLAP_PLAN_STREAM=1

python -m sglang.launch_server \
    --model-path Qwen/Qwen3.6-35B-A3B \
    --speculative-algorithm DFLASH \
    --speculative-draft-model-path z-lab/Qwen3.6-35B-A3B-DFlash \
    --speculative-num-draft-tokens 16 \
    --tp-size 1 \
    --attention-backend fa3 \
    --mem-fraction-static 0.75 \
    --mamba-scheduler-strategy extra_buffer \
    --trust-remote-code
```

> **Tip:** For long-context or agentic workloads, add `--speculative-dflash-draft-window-size WINDOW_SIZE` to enable sliding-window attention for the drafter.

### Usage

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3.6-35B-A3B",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=4096,
    temperature=0.0,
)
print(response.choices[0].message.content)
```
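
To sanity-check throughput on your own hardware, you can time a request through the same endpoint. A rough sketch, assuming the server launched above is running and reports token usage:

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

start = time.perf_counter()
response = client.chat.completions.create(
    model="Qwen/Qwen3.6-35B-A3B",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=4096,
    temperature=0.0,
)
elapsed = time.perf_counter() - start

# End-to-end decode rate, including prefill time (same convention as the tables below).
print(f"{response.usage.completion_tokens / elapsed:.1f} tokens/sec")
```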

## Benchmark Results

**Setup:** Single NVIDIA B200, SGLang, thinking enabled, max output length 4096. We report end-to-end throughput, including prefill time. See our [GitHub repository](https://github.com/z-lab/dflash) for reproduction scripts.

### Throughput and Speedup

DFlash achieves up to **2.9x** speedup at concurrency 1.

_Tokens/sec (speedup vs. autoregressive baseline)_

**Block Size = 16**

| Task | Concurrency | AR | **DFlash** |
|---|---:|---:|---:|
| Math500 | 1 | 234 | **682 (2.9x)** |
| | 8 | 1266 | **3138 (2.5x)** |
| | 16 | 1954 | **4813 (2.5x)** |
| | 32 | 2755 | **6520 (2.4x)** |
| GSM8K | 1 | 235 | **556 (2.4x)** |
| | 8 | 1236 | **2564 (2.1x)** |
| | 16 | 1886 | **3821 (2.0x)** |
| | 32 | 2699 | **5239 (1.9x)** |
| HumanEval | 1 | 238 | **603 (2.5x)** |
| | 8 | 1255 | **2800 (2.2x)** |
| | 16 | 1944 | **4208 (2.2x)** |
| | 32 | 2767 | **5782 (2.1x)** |
| MBPP | 1 | 235 | **559 (2.4x)** |
| | 8 | 1224 | **2538 (2.1x)** |
| | 16 | 1948 | **3816 (2.0x)** |
| | 32 | 2780 | **5378 (1.9x)** |
| MT-Bench | 1 | 233 | **442 (1.9x)** |
| | 8 | 1238 | **2028 (1.6x)** |
| | 16 | 1885 | **2997 (1.6x)** |
| | 32 | 2633 | **4034 (1.5x)** |
| Alpaca | 1 | 235 | **393 (1.7x)** |
| | 8 | 1221 | **1782 (1.5x)** |
| | 16 | 1844 | **2567 (1.4x)** |
| | 32 | 2579 | **3689 (1.4x)** |

**Block Size = 8**

| Task | Concurrency | AR | **DFlash** |
|---|---:|---:|---:|
| Math500 | 1 | 234 | **617 (2.6x)** |
| | 8 | 1266 | **2839 (2.2x)** |
| | 16 | 1954 | **4465 (2.3x)** |
| | 32 | 2755 | **6614 (2.4x)** |
| GSM8K | 1 | 235 | **540 (2.3x)** |
| | 8 | 1236 | **2466 (2.0x)** |
| | 16 | 1886 | **3899 (2.1x)** |
| | 32 | 2699 | **5713 (2.1x)** |
| HumanEval | 1 | 238 | **561 (2.4x)** |
| | 8 | 1255 | **2655 (2.1x)** |
| | 16 | 1944 | **4135 (2.1x)** |
| | 32 | 2767 | **6059 (2.2x)** |
| MBPP | 1 | 235 | **497 (2.1x)** |
| | 8 | 1224 | **2324 (1.9x)** |
| | 16 | 1948 | **3636 (1.9x)** |
| | 32 | 2780 | **4884 (1.8x)** |
| MT-Bench | 1 | 233 | **438 (1.9x)** |
| | 8 | 1238 | **2060 (1.7x)** |
| | 16 | 1885 | **3182 (1.7x)** |
| | 32 | 2633 | **4720 (1.8x)** |
| Alpaca | 1 | 235 | **407 (1.7x)** |
| | 8 | 1221 | **1880 (1.5x)** |
| | 16 | 1844 | **2903 (1.6x)** |
| | 32 | 2579 | **4115 (1.6x)** |

### Acceptance Length

| Task | Block Size = 8 | Block Size = 16 |
|---|---:|---:|
| Math500 | 5.56 | 7.35 |
| GSM8K | 5.21 | 6.73 |
| HumanEval | 5.09 | 6.44 |
| MBPP | 4.78 | 5.83 |
| MT-Bench | 4.20 | 5.14 |
| Alpaca | 3.94 | 4.62 |
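
For intuition (a back-of-envelope model, not from the paper): if each verification step costs one target forward plus some fraction of one for the drafter, and keeps `tau` tokens on average, the idealized speedup over one-token-per-forward decoding is:

```python
def implied_speedup(tau: float, draft_cost: float = 0.1) -> float:
    # AR decoding: 1 target forward per token. Speculative: (1 + draft_cost)
    # target-forward equivalents per step, `tau` tokens kept per step.
    # `draft_cost` is an assumed relative drafter cost, and all system
    # overheads (prefill, scheduling, kernels) are ignored.
    return tau / (1.0 + draft_cost)
```

Measured end-to-end speedups sit below this ceiling precisely because of the overheads the model ignores.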

## Acknowledgements

Special thanks to [David Wang](https://davidwa.ng/) for his outstanding engineering support on this project. We are also grateful to [Modal](https://modal.com/), [InnoMatrix](https://innomatrix.ai), and [Yotta Labs](https://www.yottalabs.ai/) for providing the compute resources used to train this draft model.

## Citation

If you find DFlash useful, please cite our work. To share feedback on DFlash or request new model support, please fill out this form: [DFlash Feedback](https://forms.gle/4YNwfqb4nJdqn6hq9).

```bibtex
@article{chen2026dflash,
  title   = {{DFlash: Block Diffusion for Flash Speculative Decoding}},
  author  = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
  journal = {arXiv preprint arXiv:2602.06036},
  year    = {2026}
}
```

assets/dflash_system.png
ADDED (Git LFS)

assets/speedup.png
ADDED

config.json
ADDED
@@ -0,0 +1,62 @@

```json
{
  "architectures": [
    "DFlashDraftModel"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoModel": "dflash.DFlashDraftModel"
  },
  "block_size": 16,
  "dflash_config": {
    "mask_token_id": 248070,
    "target_layer_ids": [
      1,
      10,
      19,
      28,
      37
    ]
  },
  "dtype": "bfloat16",
  "eos_token_id": 248046,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 6144,
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 262144,
  "max_window_layers": 8,
  "model_type": "qwen3",
  "num_attention_heads": 32,
  "num_hidden_layers": 8,
  "num_key_value_heads": 4,
  "num_target_layers": 40,
  "pad_token_id": 248044,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "beta_fast": 32.0,
    "beta_slow": 1.0,
    "factor": 64.0,
    "original_max_position_embeddings": 4096,
    "rope_type": "yarn",
    "type": "yarn"
  },
  "rope_theta": 10000000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "transformers_version": "4.57.1",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 248320
}
```
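
Reading the config: the drafter is an 8-layer Qwen3-style transformer that conditions on hidden states taken from five layers (`target_layer_ids`) of the 40-layer target model; `dflash.py` below concatenates those states and projects them back to the drafter width. A quick sanity check of the resulting projection shape (illustrative only):

```python
import json

cfg = json.load(open("config.json"))
n_cond = len(cfg["dflash_config"]["target_layer_ids"])  # 5 conditioning layers
fc_in = n_cond * cfg["hidden_size"]                     # 5 * 2048 = 10240
print(f"fc: Linear({fc_in} -> {cfg['hidden_size']}), drafts blocks of {cfg['block_size']} tokens")
```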

dflash.py
ADDED
@@ -0,0 +1,188 @@

```python
from typing import Optional, Callable
from typing_extensions import Unpack, Tuple

import torch
from torch import nn
from transformers.models.qwen3.modeling_qwen3 import (
    Qwen3RMSNorm,
    Qwen3RotaryEmbedding,
    Qwen3Config,
    Qwen3PreTrainedModel,
    Qwen3MLP,
    GradientCheckpointingLayer,
    FlashAttentionKwargs,
    rotate_half,
    eager_attention_forward,
    ALL_ATTENTION_FUNCTIONS,
)
from transformers.cache_utils import Cache


def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
    # Keys span the full context plus the draft block while queries span only
    # the draft block, so queries take the trailing slice of the rotary tables.
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    q_len = q.size(-2)
    q_embed = (q * cos[..., -q_len:, :]) + (rotate_half(q) * sin[..., -q_len:, :])
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed


class Qwen3DFlashAttention(nn.Module):
    """DFlash attention: queries come from the noised draft block, while keys
    and values span the projected target hidden states followed by the block
    itself, so every draft position can attend to the full context."""

    def __init__(self, config: Qwen3Config, layer_idx: int):
        super().__init__()
        self.config = config
        self.layer_idx = layer_idx
        self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
        self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
        self.scaling = self.head_dim**-0.5
        self.attention_dropout = config.attention_dropout
        self.is_causal = False  # draft tokens attend bidirectionally within the block
        self.q_proj = nn.Linear(
            config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.attention_bias
        )
        self.k_proj = nn.Linear(
            config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
        )
        self.v_proj = nn.Linear(
            config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
        )
        self.o_proj = nn.Linear(
            config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.attention_bias
        )
        self.q_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps)
        self.k_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps)
        self.sliding_window = config.sliding_window if config.layer_types[layer_idx] == "sliding_attention" else None

    def forward(
        self,
        hidden_states: torch.Tensor,
        target_hidden: torch.Tensor,
        position_embeddings: tuple[torch.Tensor, torch.Tensor],
        attention_mask: Optional[torch.Tensor],
        past_key_values: Optional[Cache] = None,
        cache_position: Optional[torch.LongTensor] = None,
        **kwargs: Unpack[FlashAttentionKwargs],
    ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
        bsz, q_len = hidden_states.shape[:-1]
        ctx_len = target_hidden.shape[1]
        q = self.q_proj(hidden_states)
        q = q.view(bsz, q_len, -1, self.head_dim)
        q = self.q_norm(q).transpose(1, 2)
        # Keys/values are computed for both the target context and the noised
        # block, then concatenated along the sequence dimension.
        k_ctx = self.k_proj(target_hidden)
        k_noise = self.k_proj(hidden_states)
        v_ctx = self.v_proj(target_hidden)
        v_noise = self.v_proj(hidden_states)
        k = torch.cat([k_ctx, k_noise], dim=1).view(bsz, ctx_len + q_len, -1, self.head_dim)
        v = torch.cat([v_ctx, v_noise], dim=1).view(bsz, ctx_len + q_len, -1, self.head_dim)
        k = self.k_norm(k).transpose(1, 2)
        v = v.transpose(1, 2)
        cos, sin = position_embeddings
        q, k = apply_rotary_pos_emb(q, k, cos, sin)
        if past_key_values is not None:
            cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
            k, v = past_key_values.update(k, v, self.layer_idx, cache_kwargs)
        attn_fn: Callable = eager_attention_forward
        if self.config._attn_implementation != "eager":
            attn_fn = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
        attn_output, attn_weights = attn_fn(
            self,
            q,
            k,
            v,
            attention_mask,
            dropout=0.0 if not self.training else self.attention_dropout,
            scaling=self.scaling,
            sliding_window=self.sliding_window,
            **kwargs,
        )
        attn_output = attn_output.reshape(bsz, q_len, -1)
        attn_output = self.o_proj(attn_output)
        return attn_output, attn_weights


class Qwen3DFlashDecoderLayer(GradientCheckpointingLayer):
    def __init__(self, config: Qwen3Config, layer_idx: int):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.self_attn = Qwen3DFlashAttention(config=config, layer_idx=layer_idx)
        self.mlp = Qwen3MLP(config)
        self.input_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)

    def forward(
        self,
        target_hidden: Optional[torch.Tensor] = None,
        hidden_states: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Cache] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
        cache_position: Optional[torch.LongTensor] = None,
        position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,  # necessary, but kept here for BC
        **kwargs: Unpack[FlashAttentionKwargs],
    ) -> torch.Tensor:
        # Standard pre-norm transformer block, conditioning on the target
        # hidden states inside self_attn.
        residual = hidden_states
        hidden_states = self.input_layernorm(hidden_states)
        hidden_states = self.self_attn(
            hidden_states=hidden_states,
            target_hidden=target_hidden,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
            cache_position=cache_position,
            position_embeddings=position_embeddings,
            **kwargs,
        )[0]
        hidden_states = residual + hidden_states
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        hidden_states = self.mlp(hidden_states)
        hidden_states = residual + hidden_states
        return hidden_states


class DFlashDraftModel(Qwen3PreTrainedModel):
    config_class = Qwen3Config
    _no_split_modules = ["Qwen3DFlashDecoderLayer"]

    def __init__(self, config) -> None:
        super().__init__(config)
        self.config = config
        self.layers = nn.ModuleList(
            [Qwen3DFlashDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
        )
        # Which target-model layers supply the conditioning hidden states.
        self.target_layer_ids = self.config.dflash_config.get("target_layer_ids", None)
        self.norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.rotary_emb = Qwen3RotaryEmbedding(config)
        # Projects the concatenated target hidden states down to the drafter width.
        self.fc = nn.Linear(len(self.target_layer_ids) * config.hidden_size, config.hidden_size, bias=False)
        self.hidden_norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.block_size = config.block_size
        self.mask_token_id = self.config.dflash_config.get("mask_token_id", None)
        self.post_init()

    def forward(
        self,
        position_ids: torch.LongTensor,
        attention_mask: Optional[torch.Tensor] = None,
        noise_embedding: Optional[torch.Tensor] = None,
        target_hidden: Optional[torch.Tensor] = None,
        past_key_values: Optional[Cache] = None,
        use_cache: bool = False,
        **kwargs,
    ) -> torch.Tensor:
        # `noise_embedding`: embeddings of the masked draft block;
        # `target_hidden`: concatenated hidden states from the target layers.
        hidden_states = noise_embedding
        target_hidden = self.hidden_norm(self.fc(target_hidden))
        position_embeddings = self.rotary_emb(hidden_states, position_ids)
        for layer in self.layers:
            hidden_states = layer(
                hidden_states=hidden_states,
                target_hidden=target_hidden,
                attention_mask=attention_mask,
                position_ids=position_ids,
                past_key_value=past_key_values,
                use_cache=use_cache,
                position_embeddings=position_embeddings,
                **kwargs,
            )
        return self.norm(hidden_states)
```
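
The drafter is not meant to be called directly; the serving engine builds the noise-block embeddings and collects the target layers' hidden states on each step. To merely inspect the module, a hedged loading sketch (the `forward` above still needs `noise_embedding`, `target_hidden`, and `position_ids` supplied by the engine):

```python
import torch
from transformers import AutoModel

# Loads DFlashDraftModel via the auto_map entry in config.json.
draft = AutoModel.from_pretrained(
    "z-lab/Qwen3.6-35B-A3B-DFlash",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
print(sum(p.numel() for p in draft.parameters()) / 1e6, "M parameters")
```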

model.safetensors
ADDED
@@ -0,0 +1,3 @@

```text
version https://git-lfs.github.com/spec/v1
oid sha256:6db5c712b4f3d924026162ad1aedf7fd1fef32437690451137f967d9b7160144
size 948000184
```