Add pipeline tag and improve model card

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +50 -13
README.md CHANGED
@@ -1,14 +1,16 @@
  ---
- license: apache-2.0
+ base_model:
+ - microsoft/phi-4
  datasets:
  - HuggingFaceFW/fineweb-edu
+ license: apache-2.0
  model_name: Qwen3_1.7B_LoopUS_SFT
- base_model:
- - microsoft/phi-4
+ pipeline_tag: text-generation
  tags:
  - LoopUS
  - LoopedTransformers
  ---
+
  <div align="center">
  <h1>LoopUS: <br> Recasting Pretrained LLMs into Looped Latent Refinement Models</h1>
  </div>
@@ -26,26 +28,61 @@ tags:
  </p>

  <p align="center">
- <a href="https://thrillcrazyer.github.io/LoopUS"><b>🌟 Github</b></a> |
- <a href="https://huggingface.co/Thrillcrazyer/Qwen3_1.7B_LoopUS"><b>📥 Download</b></a> |
+ <a href="https://github.com/Thrillcrazyer/LoopUS"><b>🌟 GitHub</b></a> |
+ <a href="https://thrillcrazyer.github.io/LoopUS"><b>🌐 Project Page</b></a> |
  <a href="https://arxiv.org/abs/2605.11011"><b>📄 Paper</b></a>
  </p>

+ # Overview
+
+ **Looped Depth Up-Scaling** (LoopUS) is a post-training framework that converts a standard pretrained LLM into a looped architecture. LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through:
+ 1. **Block Decomposition:** Recasts a pretrained transformer into a reusable latent-refinement architecture.
+ 2. **Input-Dependent Selective Gate:** Adaptively controls hidden-state propagation to mitigate drift.
+ 3. **Random Deep Supervision:** Enables memory-efficient learning over long recursive horizons.
+ 4. **Confidence Head:** Allows for adaptive early exiting during inference.

- # Abstract
+ Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.

- Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. However, existing approaches typically require either training recurrent models from scratch or applying disruptive retrofits, which involve substantial computational costs and may compromise pretrained capabilities. To address these limitations, we introduce \textbf{Looped Depth Up-Scaling} (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. As a key technical contribution, LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through four core components: (1) block decomposition, guided by staged representation dynamics; (2) an input-dependent selective gate to mitigate hidden-state drift; (3) random deep supervision for memory-efficient learning over long recursive horizons; and (4) a confidence head for adaptive early exiting. Collectively, these mechanisms transform a standard non-looped model into a looped form while stabilizing it against both computational bottlenecks and representation collapse. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
+ # Illustration of LoopUS

- # QuickStart
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
+ </div>
+
+ # Quick Start
+
+ To use this model, please follow the installation instructions in the [official repository](https://github.com/Thrillcrazyer/LoopUS):

  ```bash
  git clone https://github.com/Thrillcrazyer/LoopUS.git
  cd LoopUS
- uv run chat.py
+ uv sync
  ```

- # Illustration of LoopUS
+ ### Chatting Mode
+ ```bash
+ uv run chat.py --model-name Thrillcrazyer/Qwen3_1.7B_LoopUS_SFT
+ ```

- <div align="center">
- <img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
- </div>
+ ### Qualitative Generation
+ ```bash
+ uv run LoopUS-generate \
+   --model-name microsoft/phi-4 \
+   --decomposed-model Thrillcrazyer/Qwen3_1.7B_LoopUS_SFT \
+   --prompt "The meaning of life is" \
+   --n-recursion 8
+ ```
+
+ # Citation
+
+ ```bibtex
+ @misc{park2024loopus,
+   title={LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models},
+   author={Taekhyun Park and Yongjae Lee and Dohee Kim and Hyerim Bae},
+   year={2024},
+   eprint={2605.11011},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2605.11011},
+ }
+ ```
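For reviewers skimming the card, the inference pattern the Overview describes (encode once, iterate a shared reasoning block through a drift-damping gate, exit early when a confidence signal saturates, then decode) can be sketched in plain Python. This is a hypothetical toy illustration only: the function names, the fixed-blend gate, and the convergence-based confidence below are stand-ins, not the repository's actual API or the paper's learned components.

```python
# Toy sketch of looped latent refinement with a selective gate and
# confidence-based early exit. All names here are illustrative stand-ins.

def encode(tokens):
    # Stand-in encoder: map token ids to a latent "hidden state".
    return [float(t) for t in tokens]

def looped_block(h):
    # Stand-in reasoning block: one refinement step over the latent state.
    return [0.5 * x + 1.0 for x in h]

def selective_gate(h_prev, h_new, alpha=0.8):
    # Gate between the old and refined state to damp hidden-state drift.
    # (The paper's gate is input-dependent; this fixed blend is a placeholder.)
    return [alpha * n + (1 - alpha) * p for p, n in zip(h_prev, h_new)]

def confidence(h_prev, h):
    # Placeholder confidence: high when the state has stopped changing.
    delta = max(abs(a - b) for a, b in zip(h, h_prev))
    return 1.0 - min(delta, 1.0)

def decode(h):
    # Stand-in decoder: read the refined latent state back out.
    return [round(x, 3) for x in h]

def loop_infer(tokens, n_recursion=8, exit_threshold=0.99):
    # Encode once, loop the shared block up to n_recursion times,
    # and exit early once confidence crosses the threshold.
    h = encode(tokens)
    for step in range(n_recursion):
        h_prev = h
        h = selective_gate(h_prev, looped_block(h_prev))
        if confidence(h_prev, h) >= exit_threshold:  # adaptive early exit
            break
    return decode(h), step + 1
```

With these stand-in maps the loop is a contraction, so the state converges to a fixed point and the early exit fires before the recursion budget is exhausted; in the real model, more recursion steps spend more test-time compute on latent refinement instead of longer generated traces.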