---
base_model:
- TinyLlama/TinyLlama_v1.1
datasets:
- HuggingFaceFW/fineweb-edu
license: apache-2.0
model_name: TinyLlama_v1.1_LoopUS
tags:
- LoopUS
- LoopedTransformers
pipeline_tag: text-generation
---

<div align="center">
<h1>LoopUS: <br> Recasting Pretrained LLMs into Looped Latent Refinement Models</h1>
</div>

<p align="center">
  <a href="https://pnubaelab.github.io/"><b>BAELAB</b></a>, Pusan National University, Busan, Korea <br>
  <a href="https://aidoheekim.github.io/"><b>DOLAB</b></a>, Changwon National University, Changwon, Korea
</p>

<p align="center">
  <a href="https://thrillcrazyer.github.io/" target="_blank"><strong>Taekhyun Park</strong></a><sup>1</sup>,
  <a href="https://yongzzai.com/" target="_blank"><strong>Yongjae Lee</strong></a><sup>1</sup>,
  <a href="https://aidoheekim.github.io/" target="_blank"><strong>Dohee Kim</strong></a><sup>2</sup>,
  <a href="https://pnubaelab.github.io/" target="_blank"><strong>Hyerim Bae</string></a><sup>1,&dagger;</sup>
</p>

<p align="center">
  <a href="https://github.com/Thrillcrazyer/LoopUS"><b>๐ŸŒŸ Github</b></a> |
  <a href="https://thrillcrazyer.github.io/LoopUS"><b>๐ŸŒ Project Page</b></a> |
  <a href="https://arxiv.org/abs/2605.11011"><b>๐Ÿ“„ Paper</b></a> 
</p>


# Abstract

Looped computation shows promise for improving the reasoning performance of LLMs by scaling test-time compute. We introduce **Looped Depth Up-Scaling** (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. LoopUS recasts the pretrained model into an encoder, a looped reasoning block, and a decoder, and operationalizes this latent-refinement architecture through block decomposition, an input-dependent selective gate, random deep supervision, and a confidence head for adaptive early exiting. Through stable latent looping, LoopUS improves reasoning performance without lengthening the generated traces or requiring recurrent training from scratch.
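
For intuition, here is a minimal, self-contained PyTorch sketch of the looped latent-refinement idea described above. Everything in it is an illustrative assumption rather than the repository's actual implementation: toy linear layers stand in for the encoder, looped block, and decoder carved out of the base LLM, and the exact gating and confidence math is simplified.

```python
import torch
import torch.nn as nn

class LoopedLatentModel(nn.Module):
    """Toy sketch: encoder -> gated looped block with early exit -> decoder."""

    def __init__(self, dim=64, max_loops=8, exit_threshold=0.9):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)        # stands in for the front layers of the base LLM
        self.loop_block = nn.Linear(dim, dim)     # the block that is reused across loop iterations
        self.gate = nn.Linear(dim, dim)           # input-dependent selective gate (simplified)
        self.confidence_head = nn.Linear(dim, 1)  # predicts when the latents are "done"
        self.decoder = nn.Linear(dim, dim)        # stands in for the back layers of the base LLM
        self.max_loops = max_loops
        self.exit_threshold = exit_threshold

    def forward(self, x):
        h = self.encoder(x)
        for _ in range(self.max_loops):
            update = torch.tanh(self.loop_block(h))
            g = torch.sigmoid(self.gate(h))        # how much of the update to accept
            h = g * update + (1 - g) * h           # one gated latent-refinement step
            conf = torch.sigmoid(self.confidence_head(h)).mean()
            if conf > self.exit_threshold:         # adaptive early exit
                break
        return self.decoder(h)

model = LoopedLatentModel()
out = model(torch.randn(1, 16, 64))  # (batch, seq, dim)
print(out.shape)
```

The point of the sketch is the control flow: the middle block is iterated in latent space rather than generating longer token traces, and the confidence head lets easy inputs exit after fewer loops.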

# QuickStart

To use LoopUS, clone the repository and run the chat script:

```bash
git clone https://github.com/Thrillcrazyer/LoopUS.git
cd LoopUS
uv run chat.py
```
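
If the checkpoint is published on the Hugging Face Hub, it should also be usable through `transformers`. The sketch below is hedged: the repo id is a placeholder (the card does not state the full Hub path), and `trust_remote_code=True` is assumed because the looped architecture is non-standard.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual Hub path of the checkpoint.
model_id = "your-org/TinyLlama_v1.1_LoopUS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is an assumption for the custom looped architecture;
# drop it if the checkpoint loads with the stock model classes.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The capital of Korea is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```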

# Illustration of LoopUS

<div align="center">
<img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
</div>

# Citation

If you find this work useful, please cite:

```bibtex
@misc{park2026loopus,
      title={LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models}, 
      author={Taekhyun Park and Yongjae Lee and Dohee Kim and Hyerim Bae},
      year={2026},
      eprint={2605.11011},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2605.11011}, 
}
```