
Improve model card: add pipeline tag, paper link, and usage

#1 opened by nielsr (HF Staff)
Files changed (1)
README.md +45 -13
README.md CHANGED
@@ -1,12 +1,14 @@
  ---
- license: apache-2.0
+ base_model:
+ - Qwen/Qwen3-1.7B-Base
  datasets:
  - HuggingFaceFW/fineweb-edu
  - HuggingFaceH4/ultrachat_200k
+ license: apache-2.0
  model_name: Qwen3_1.7B_LoopUS_SFT
- base_model:
- - Qwen/Qwen3-1.7B-Base
+ pipeline_tag: text-generation
  ---
+
  <div align="center">
  <h1>LoopUS: <br> Recasting Pretrained LLMs into Looped Latent Refinement Models</h1>
  </div>
@@ -17,32 +19,62 @@ base_model:
  </p>

  <p align="center">
- <a href="https://thrillcrazyer.github.io/" target="_blank"><strong>Taekhyun Park</strong></a><sup>1</sup>,
- <a href="https://yongzzai.com/" target="_blank"><strong>Yongjae Lee</strong></a><sup>1</sup>,
- <a href="https://aidoheekim.github.io/" target="_blank"><strong>Dohee Kim</strong></a><sup>2</sup>,
- <a href="https://pnubaelab.github.io/" target="_blank"><strong>Hyerim Bae</string></a><sup>1,&dagger;</sup>
+ <a href="https://thrillcrazyer.github.io/" target="_blank"><strong>Taekhyun Park</strong></a>,
+ <a href="https://yongzzai.com/" target="_blank"><strong>Yongjae Lee</strong></a>,
+ <a href="https://aidoheekim.github.io/" target="_blank"><strong>Dohee Kim</strong></a>,
+ <a href="https://pnubaelab.github.io/" target="_blank"><strong>Hyerim Bae</strong></a>
  </p>

  <p align="center">
- <a href="https://thrillcrazyer.github.io/LoopUS"><b>🌟 Github</b></a> |
- <a href="https://huggingface.co/Thrillcrazyer/Qwen3_1.7B_LoopUS"><b>📥 Download</b></a> |
+ <a href="https://github.com/Thrillcrazyer/LoopUS"><b>🌟 Github</b></a> |
+ <a href="https://thrillcrazyer.github.io/LoopUS"><b>🌐 Project Page</b></a> |
  <a href="https://arxiv.org/abs/2605.11011"><b>📄 Paper</b></a>
  </p>

- # Abstract
+ # Overview
+
+ **Looped Depth Up-Scaling** (LoopUS) is a post-training framework that converts a standard pretrained LLM into a looped architecture. It recasts a pretrained LLM into an encoder, a looped reasoning block, and a decoder to improve reasoning-oriented performance through stable latent looping without extending output traces or requiring recurrent training from scratch.
+
+ # Method

- Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. However, existing approaches typically require either training recurrent models from scratch or applying disruptive retrofits, which involve substantial computational costs and may compromise pretrained capabilities. To address these limitations, we introduce \textbf{Looped Depth Up-Scaling} (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. As a key technical contribution, LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through four core components: (1) block decomposition, guided by staged representation dynamics; (2) an input-dependent selective gate to mitigate hidden-state drift; (3) random deep supervision for memory-efficient learning over long recursive horizons; and (4) a confidence head for adaptive early exiting. Collectively, these mechanisms transform a standard non-looped model into a looped form while stabilizing it against both computational bottlenecks and representation collapse. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
+ LoopUS operationalizes its latent-refinement architecture through four core components:
+ 1. **Block Decomposition**: Recasts a pretrained transformer into an encoder, a looped reasoning block, and a decoder based on staged representation dynamics.
+ 2. **Input-Dependent Selective Gate**: Adaptively controls how refined hidden states are propagated to mitigate hidden-state drift.
+ 3. **Random Deep Supervision**: Applies supervision to sampled reasoning steps for memory-efficient training over long horizons.
+ 4. **Confidence Head**: Enables adaptive early exiting for compute-efficient inference.

  # QuickStart

+ To use this model, you need to use the implementation provided in the [official repository](https://github.com/Thrillcrazyer/LoopUS):
+
  ```bash
  git clone https://github.com/Thrillcrazyer/LoopUS.git
  cd LoopUS
- uv run chat.py
+ uv sync
+ ```
+
+ You can then run the chat mode with this checkpoint:
+
+ ```bash
+ uv run chat.py --model-name Thrillcrazyer/Qwen3_1.7B_LoopUS_SFT
  ```

  # Illustration of LoopUS

  <div align="center">
  <img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
- </div>
+ </div>
+
+ # Citation
+
+ ```bibtex
+ @misc{park2026loopus,
+ title={LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models},
+ author={Taekhyun Park and Yongjae Lee and Dohee Kim and Hyerim Bae},
+ year={2026},
+ eprint={2605.11011},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2605.11011}
+ }
+ ```
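
The Method section added above describes the looped latent-refinement pass only in prose. Below is a minimal PyTorch sketch of that control flow under stated assumptions: the class and member names (`LoopedRefiner`, `loop_block`, `gate`, `confidence_head`, `exit_threshold`) and the exact gating and exit formulas are illustrative placeholders, not the LoopUS implementation; the official repository has the actual code.

```python
# Illustrative sketch only; names and formulas are assumptions, not LoopUS code.
import torch
import torch.nn as nn


class LoopedRefiner(nn.Module):
    """Toy encoder -> looped block -> decoder model with a selective gate,
    a confidence head for early exit, and random-depth supervision."""

    def __init__(self, hidden_size: int, max_loops: int = 8, exit_threshold: float = 0.9):
        super().__init__()
        # Stand-ins for the three sub-networks produced by block decomposition
        # of a pretrained transformer (here just single linear layers).
        self.encoder = nn.Linear(hidden_size, hidden_size)
        self.loop_block = nn.Linear(hidden_size, hidden_size)
        self.decoder = nn.Linear(hidden_size, hidden_size)
        # Input-dependent selective gate: controls how much of the refined
        # state replaces the previous one, limiting hidden-state drift.
        self.gate = nn.Linear(hidden_size, hidden_size)
        # Confidence head: scalar score used for adaptive early exiting.
        self.confidence_head = nn.Linear(hidden_size, 1)
        self.max_loops = max_loops
        self.exit_threshold = exit_threshold

    def refine_once(self, h: torch.Tensor) -> torch.Tensor:
        refined = self.loop_block(h)
        g = torch.sigmoid(self.gate(h))      # per-feature gate in (0, 1)
        return g * refined + (1.0 - g) * h   # gated latent update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        for _ in range(self.max_loops):
            h = self.refine_once(h)
            conf = torch.sigmoid(self.confidence_head(h)).mean()
            if conf > self.exit_threshold:   # adaptive early exit
                break
        return self.decoder(h)

    def training_step(self, x, target, loss_fn):
        # Random deep supervision (sketch): supervise at a randomly sampled
        # loop depth so a step backpropagates through only a few recursions.
        depth = int(torch.randint(1, self.max_loops + 1, (1,)))
        h = self.encoder(x)
        for _ in range(depth):
            h = self.refine_once(h)
        return loss_fn(self.decoder(h), target)


model = LoopedRefiner(hidden_size=64)
out = model(torch.randn(2, 10, 64))   # (batch, seq, hidden)
print(out.shape)                      # torch.Size([2, 10, 64])
```

In the actual model the looped block is presumably a stack of transformer layers rather than a single linear layer, and the confidence check may be applied per token; the sketch only illustrates how the gate, early exit, and randomly sampled supervision depth fit together.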