N2048M committed on
Commit
906ebd4
·
verified ·
1 Parent(s): dc8aaac

Add organization card with logo, model + dataset index, citation

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +82 -5
  3. logo.gif +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ logo.gif filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,10 +1,87 @@
  ---
- title: README
- emoji: 🦀
- colorFrom: purple
- colorTo: yellow
+ title: TIDE-dllm
+ emoji: 🌊
+ colorFrom: blue
+ colorTo: indigo
  sdk: static
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ <center>
+ <img src="logo.gif" width="320" />
+ </center>
+
+ <h1 align="center">Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models</h1>
+
+ <p align="center">
+ 🌊 The first cross-architecture distillation framework for diffusion LLMs, distilling 8B dense and 16B MoE teachers into a 0.6B student 🌊
+ </p>
+
+ <p align="center">
+ <a href="https://arxiv.org/abs/2604.26951">📄 arXiv 2604.26951</a> &nbsp;·&nbsp;
+ <a href="https://github.com/PKU-YuanGroup/TIDE">💻 Code (PKU-YuanGroup/TIDE)</a> &nbsp;·&nbsp;
+ <a href="https://pku-yuangroup.github.io/TIDE-Page/">🌐 Project page</a>
+ </p>
+
+ ---
+
+ This organization hosts the **distilled student checkpoints** and **pre-tokenized SFT datasets** released with TIDE. The framework consists of three modular components: **TIDAL** (dual-axis interpolation), **CompDemo** (complementary mask-split teacher inference), and **Reverse CALM** (cross-tokenizer chunk-level matching). It is evaluated across two heterogeneous distillation pipelines.
+
+ ## ✨ Highlights
+
+ - **+1.53 average gain** over the non-distilled BD3LM baseline across 8 benchmarks (34.20 vs. 32.67).
+ - **+16.48 on HumanEval** over the equivalent-size AR baseline (48.78 vs. 32.30); distilled dLLMs especially excel at code generation.
+ - **22× peak-memory reduction** vs. the 16B MoE LLaDA2 teacher (1.4 GB vs. 31.3 GB) and **5.2× faster inference** (6.25 s vs. 32.55 s for 256 tokens on H100), enabling commodity-hardware deployment.
+
+ ## 🤖 Released models
+
+ Six 0.6B distilled student checkpoints (three per pipeline). Each is initialized from [`dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1`](https://huggingface.co/dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1) and distilled from a larger dLLM teacher.
+
+ | Pipeline | Variant | Repo |
+ |---|---|---|
+ | A: Cross-Tokenizer (LLaDA2 teacher) | **TIDE-Cross** (native, paper-best) | [distill-LLaDA2-TIDE_Cross](https://huggingface.co/TIDE-dllm/distill-LLaDA2-TIDE_Cross) |
+ | A: Cross-Tokenizer (LLaDA2 teacher) | TIDE-Shared variant | [distill-LLaDA2-TIDE_Shared](https://huggingface.co/TIDE-dllm/distill-LLaDA2-TIDE_Shared) |
+ | A: Cross-Tokenizer (LLaDA2 teacher) | CALM baseline | [distill-LLaDA2-CALM](https://huggingface.co/TIDE-dllm/distill-LLaDA2-CALM) |
+ | B: Shared-Tokenizer (WeDLM teacher) | **TIDE-Shared** (native, paper-best) | [distill-WeDLM-TIDE_Shared](https://huggingface.co/TIDE-dllm/distill-WeDLM-TIDE_Shared) |
+ | B: Shared-Tokenizer (WeDLM teacher) | TIDE-Cross variant | [distill-WeDLM-TIDE_Cross](https://huggingface.co/TIDE-dllm/distill-WeDLM-TIDE_Cross) |
+ | B: Shared-Tokenizer (WeDLM teacher) | KL baseline | [distill-WeDLM-KL](https://huggingface.co/TIDE-dllm/distill-WeDLM-KL) |
+
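+ All six can also be enumerated programmatically. Below is a minimal sketch using the standard `huggingface_hub` listing API; nothing in it is TIDE-specific:
+
+ ```python
+ from huggingface_hub import list_models
+
+ # List every model repo published under the TIDE-dllm organization.
+ for m in list_models(author="TIDE-dllm"):
+     print(m.id)
+ ```
+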
+ ## 📚 Released datasets
+
+ Pre-tokenized SFT mixtures (`tulu-3-sft-mixture` + `smoltalk` + `opc-sft-stage1` + `opc-sft-stage2`) prepared for each teacher, so distillation jobs never have to re-tokenize at startup.
+
+ | Pipeline | Repo |
+ |---|---|
+ | A: for the LLaDA2 teacher | [distill_llada2_sft](https://huggingface.co/datasets/TIDE-dllm/distill_llada2_sft) |
+ | B: for the WeDLM teacher | [distill_wedlm_sft](https://huggingface.co/datasets/TIDE-dllm/distill_wedlm_sft) |
+
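+ A minimal loading sketch, assuming a `train` split (check the dataset card for the actual split and column names; the rows are pre-tokenized token ids, not raw text):
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the Pipeline-A mixture; swap in "TIDE-dllm/distill_wedlm_sft" for Pipeline B.
+ ds = load_dataset("TIDE-dllm/distill_llada2_sft", split="train", streaming=True)
+
+ # Peek at one example to discover the real schema.
+ example = next(iter(ds))
+ print(sorted(example.keys()))
+ ```
+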
+ ## 🚀 Quick start
+
+ ```python
+ import torch
+ from transformers import AutoModelForMaskedLM, AutoTokenizer
+
+ repo = "TIDE-dllm/distill-LLaDA2-TIDE_Cross"  # paper-best Pipeline-A checkpoint
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+
+ model = AutoModelForMaskedLM.from_pretrained(
+     repo, dtype=torch.bfloat16, trust_remote_code=True,
+ ).to(device).eval()
+ tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
+ ```
+
+ The same `generate()` routine published with [`dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1`](https://huggingface.co/dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1) works on every TIDE checkpoint; just swap the `repo` string.
+
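+ To make the decoding style concrete, here is an illustrative unmasking loop, not the published routine: it ignores BD3LM's block schedule and caching, and it assumes the tokenizer defines a mask token and the forward pass accepts plain `input_ids`. It reuses `model`, `tokenizer`, and `device` from the snippet above.
+
+ ```python
+ # Hypothetical sketch of iterative parallel unmasking (NOT the official generate()).
+ prompt = tokenizer("Write a haiku about the tide.", return_tensors="pt").input_ids.to(device)
+ gen_len, steps = 64, 16
+ mask_id = tokenizer.mask_token_id  # assumption: the checkpoint defines a mask token
+
+ # Start from a fully masked completion and reveal high-confidence tokens step by step.
+ x = torch.cat([prompt, torch.full((1, gen_len), mask_id, device=device)], dim=1)
+ for _ in range(steps):
+     masked = x == mask_id
+     if not masked.any():
+         break
+     with torch.no_grad():
+         logits = model(input_ids=x).logits
+     conf, ids = logits.softmax(-1).max(-1)
+     conf = conf.masked_fill(~masked, -1.0)  # rank only still-masked positions
+     k = max(int(masked.sum()) // 2, 1)      # unmask half of what remains each step
+     top = conf.topk(k, dim=-1).indices
+     x.scatter_(1, top, ids.gather(1, top))
+
+ print(tokenizer.decode(x[0, prompt.shape[1]:], skip_special_tokens=True))
+ ```
+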
+ ## 📝 Citation
+
+ ```bibtex
+ @misc{zhang2026turningtidecrossarchitecturedistillation,
+     title={Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models},
+     author={Gongbo Zhang and Wen Wang and Ye Tian and Li Yuan},
+     year={2026},
+     eprint={2604.26951},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL},
+     url={https://arxiv.org/abs/2604.26951},
+ }
+ ```
logo.gif ADDED

Git LFS Details

  • SHA256: be7bf6dd8cae18da51d0f500bf2deb43cdefa73edee233f9b6d9caf4cf24cda2
  • Pointer size: 131 Bytes
  • Size of remote file: 913 kB