---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-edu
model_name: Qwen3_1.7B_LoopUS_SFT
base_model:
- microsoft/phi-4
---
<div align="center">
<h1>LoopUS: <br> Recasting Pretrained LLMs into Looped Latent Refinement Models</h1>
</div>

<p align="center">
<a href="https://pnubaelab.github.io/"><b>BAELAB</b></a>, Pusan National University, Busan, Korea <br>
<a href="https://aidoheekim.github.io/"><b>DOLAB</b></a>, Changwon National University, Changwon, Korea
</p>

<p align="center">
<a href="https://thrillcrazyer.github.io/" target="_blank"><strong>Taekhyun Park</strong></a><sup>1</sup>,
<a href="https://yongzzai.com/" target="_blank"><strong>Yongjae Lee</strong></a><sup>1</sup>,
<a href="https://aidoheekim.github.io/" target="_blank"><strong>Dohee Kim</strong></a><sup>2</sup>,
<a href="https://pnubaelab.github.io/" target="_blank"><strong>Hyerim Bae</strong></a><sup>1,&dagger;</sup>
</p>

<p align="center">
<a href="https://thrillcrazyer.github.io/LoopUS"><b>🌟 Github</b></a> |
<a href="https://huggingface.co/Thrillcrazyer/Qwen3_1.7B_LoopUS"><b>📥 Download</b></a>
</p>


# Abstract

Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. However, existing approaches typically require either training recurrent models from scratch or applying disruptive retrofits, which involve substantial computational costs and may compromise pretrained capabilities. To address these limitations, we introduce **Looped Depth Up-Scaling (LoopUS)**, a post-training framework that converts a standard pretrained LLM into a looped architecture. As a key technical contribution, LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through four core components: (1) block decomposition, guided by staged representation dynamics; (2) an input-dependent selective gate to mitigate hidden-state drift; (3) random deep supervision for memory-efficient learning over long recursive horizons; and (4) a confidence head for adaptive early exiting. Collectively, these mechanisms transform a standard non-looped model into a looped form while stabilizing it against both computational bottlenecks and representation collapse. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
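As a rough conceptual sketch, the latent-refinement loop above works like this: iterate the looped reasoning block, blend the new and old hidden states through an input-dependent gate, and stop as soon as the confidence head is sure enough. The code below is illustrative only (all names are invented here, and real LoopUS operates on transformer hidden states, not plain floats); it is not the released API.

```python
# Illustrative toy of looped latent refinement: states are lists of floats
# so the control flow (loop block, selective gate, early exit) is visible.

def looped_refinement(h, loop_step, gate, confidence, max_loops=8, tau=0.9):
    """Refine hidden state h with a looped block, an input-dependent
    selective gate against hidden-state drift, and a confidence-based
    adaptive early exit. Returns the refined state and loops used."""
    t = 0
    for t in range(1, max_loops + 1):
        h_new = loop_step(h)                  # looped reasoning block
        g = gate(h)                           # input-dependent gate in [0, 1]
        h = [g * n + (1 - g) * o for n, o in zip(h_new, h)]
        if confidence(h) >= tau:              # confidence head: exit early
            break
    return h, t
```

With a contractive toy `loop_step`, the state converges toward a fixed point and the loop exits as soon as the confidence threshold is met, rather than always running `max_loops` iterations.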
# QuickStart

```bash
git clone https://github.com/Thrillcrazyer/LoopUS.git
cd LoopUS
uv run chat.py
```
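For intuition on the training side, the abstract's random deep supervision component can be sketched as follows. This is a hypothetical illustration (function and parameter names are invented here, not the repository's training code): sample a random loop depth each step and supervise only the final iteration, so activation memory stays constant in the recursion depth.

```python
import random

def random_deep_supervision_step(h, loop_step, k_max, rng):
    """Run a randomly sampled number of loop iterations; all but the last
    are treated as detached (in a real framework they would run under
    no-grad), and only the final iteration receives supervision."""
    k = rng.randint(1, k_max)      # random recursion depth for this step
    for _ in range(k - 1):
        h = loop_step(h)           # detached: no gradients kept
    h = loop_step(h)               # supervised step: gradients flow here
    return h, k
```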

# Illustration of LoopUS

<div align="center">
<img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
</div>