Add pipeline tag and improve model card

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +27 -7
README.md CHANGED
@@ -1,14 +1,16 @@
 ---
-license: apache-2.0
+base_model:
+- TinyLlama/TinyLlama_v1.1
 datasets:
 - HuggingFaceFW/fineweb-edu
+license: apache-2.0
 model_name: TinyLlama_v1.1_LoopUS
-base_model:
-- TinyLlama/TinyLlama_v1.1
 tags:
 - LoopUS
 - LoopedTransformers
+pipeline_tag: text-generation
 ---
+
 <div align="center">
 <h1>LoopUS: <br> Recasting Pretrained LLMs into Looped Latent Refinement Models</h1>
 </div>
@@ -26,18 +28,20 @@ tags:
 </p>
 
 <p align="center">
-<a href="https://thrillcrazyer.github.io/LoopUS"><b>🌟 Github</b></a> |
-<a href="https://huggingface.co/Thrillcrazyer/Qwen3_1.7B_LoopUS"><b>πŸ“₯ Download</b></a> |
+<a href="https://github.com/Thrillcrazyer/LoopUS"><b>🌟 Github</b></a> |
+<a href="https://thrillcrazyer.github.io/LoopUS"><b>🌐 Project Page</b></a> |
 <a href="https://arxiv.org/abs/2605.11011"><b>πŸ“„ Paper</b></a>
 </p>
 
 
 # Abstract
 
-Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. However, existing approaches typically require either training recurrent models from scratch or applying disruptive retrofits, which involve substantial computational costs and may compromise pretrained capabilities. To address these limitations, we introduce \textbf{Looped Depth Up-Scaling} (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. As a key technical contribution, LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through four core components: (1) block decomposition, guided by staged representation dynamics; (2) an input-dependent selective gate to mitigate hidden-state drift; (3) random deep supervision for memory-efficient learning over long recursive horizons; and (4) a confidence head for adaptive early exiting. Collectively, these mechanisms transform a standard non-looped model into a looped form while stabilizing it against both computational bottlenecks and representation collapse. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
+Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. We introduce **Looped Depth Up-Scaling** (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through block decomposition, an input-dependent selective gate, random deep supervision, and a confidence head for adaptive early exiting. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
 
 # QuickStart
 
+To use LoopUS, clone the repository and run the chat script:
+
 ```bash
 git clone https://github.com/Thrillcrazyer/LoopUS.git
 cd LoopUS
@@ -48,4 +52,20 @@ uv run chat.py
 
 <div align="center">
 <img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
-</div>
+</div>
+
+# Citation
+
+If you find this work useful, please cite:
+
+```bibtex
+@misc{park2026loopus,
+  title={LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models},
+  author={Taekhyun Park and Yongjae Lee and Dohee Kim and Hyerim Bae},
+  year={2026},
+  eprint={2605.11011},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2605.11011},
+}
+```
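
For context on the architecture the revised abstract describes, the components map onto a simple control flow: an encoder produces a latent, a looped block refines it under a selective gate, a confidence head decides when to stop, and a decoder emits logits. Below is a minimal PyTorch-style sketch of that flow. It is an illustration only, not the repository's implementation: every module name, shape, and threshold here is hypothetical, and the training side (including random deep supervision) is omitted.

```python
import torch
import torch.nn as nn


class LoopedRefiner(nn.Module):
    """Schematic LoopUS-style forward pass: encoder -> looped block -> decoder.

    Hypothetical sketch of the architecture sketched in the abstract; the real
    block decomposition, gate, and confidence head live in the LoopUS repo.
    """

    def __init__(self, encoder, loop_block, decoder, hidden_size,
                 max_loops=8, exit_threshold=0.9):
        super().__init__()
        self.encoder = encoder        # early layers of the pretrained LLM
        self.loop_block = loop_block  # middle layers, applied recurrently
        self.decoder = decoder        # late layers + LM head
        # Input-dependent selective gate, meant to damp hidden-state drift
        # across loop iterations (assumed form; per-feature sigmoid weights).
        self.gate = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Sigmoid())
        # Confidence head scoring whether the latent has finished refining.
        self.confidence = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())
        self.max_loops = max_loops
        self.exit_threshold = exit_threshold

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(input_ids)  # (batch, seq, hidden)
        for _ in range(self.max_loops):
            refined = self.loop_block(h)
            g = self.gate(h)
            # Gated update: blend the refined latent into the old one rather
            # than overwriting it, so repeated loops cannot drift unchecked.
            h = g * refined + (1.0 - g) * h
            # Adaptive early exit once the confidence head is satisfied
            # (here scored on the last token's hidden state).
            if self.confidence(h[:, -1]).mean() > self.exit_threshold:
                break
        return self.decoder(h)  # logits over the vocabulary
```

The gated blend is the part doing the stabilizing work: with `g` near zero the loop leaves the latent alone, and with `g` near one it accepts the refinement, which is the drift-mitigation role the abstract assigns to the selective gate.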