---
base_model:
- TinyLlama/TinyLlama_v1.1
datasets:
- HuggingFaceFW/fineweb-edu
license: apache-2.0
model_name: TinyLlama_v1.1_LoopUS
tags:
- LoopUS
- LoopedTransformers
pipeline_tag: text-generation
---
<div align="center">
<h1>LoopUS: <br> Recasting Pretrained LLMs into Looped Latent Refinement Models</h1>
</div>
<p align="center">
<a href="https://pnubaelab.github.io/"><b>BAELAB</b></a>, Pusan National University, Busan, Korea <br>
<a href="https://aidoheekim.github.io/"><b>DOLAB</b></a>, Changwon National University, Changwon, Korea
</p>
<p align="center">
<a href="https://thrillcrazyer.github.io/" target="_blank"><strong>Taekhyun Park</strong></a><sup>1</sup>,
<a href="https://yongzzai.com/" target="_blank"><strong>Yongjae Lee</strong></a><sup>1</sup>,
<a href="https://aidoheekim.github.io/" target="_blank"><strong>Dohee Kim</strong></a><sup>2</sup>,
<a href="https://pnubaelab.github.io/" target="_blank"><strong>Hyerim Bae</strong></a><sup>1,&dagger;</sup>
</p>
<p align="center">
<a href="https://github.com/Thrillcrazyer/LoopUS"><b>🌟 Github</b></a> |
<a href="https://thrillcrazyer.github.io/LoopUS"><b>🌐 Project Page</b></a> |
<a href="https://arxiv.org/abs/2605.11011"><b>πŸ“„ Paper</b></a>
</p>
# Abstract
Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. We introduce **Looped Depth Up-Scaling** (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through block decomposition, an input-dependent selective gate, random deep supervision, and a confidence head for adaptive early exiting. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
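The control flow described above can be sketched in a few lines. The snippet below is only an illustration of the looped latent-refinement idea, not the actual LoopUS implementation: real LoopUS reuses pretrained transformer layers, whereas here the encoder, looped block, decoder, selective gate, and confidence head are stand-in random linear maps (all names and shapes are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # illustrative hidden size

# Stand-ins for the blocks LoopUS carves out of a pretrained LLM.
W_enc = rng.normal(size=(d, d)) / np.sqrt(d)   # encoder
W_loop = rng.normal(size=(d, d)) / np.sqrt(d)  # looped reasoning block
W_dec = rng.normal(size=(d, d)) / np.sqrt(d)   # decoder
w_gate = rng.normal(size=d) / np.sqrt(d)       # input-dependent selective gate
w_conf = rng.normal(size=d) / np.sqrt(d)       # confidence head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loopus_forward(x, max_loops=8, tau=0.9):
    """Encode once, refine the latent in a loop, exit early when confident."""
    h = np.tanh(W_enc @ x)                          # encoder pass
    steps = 0
    for _ in range(max_loops):
        g = sigmoid(w_gate @ h)                     # selective gate in [0, 1]
        h = (1.0 - g) * h + g * np.tanh(W_loop @ h) # gated refinement step
        steps += 1
        if sigmoid(w_conf @ h) > tau:               # adaptive early exit
            break
    return W_dec @ h, steps                         # decoder pass

y, steps = loopus_forward(rng.normal(size=d))
```

Because the exit is confidence-gated, `steps` varies with the input: easy inputs can stop after one or two loops while harder ones use the full budget, which is how test-time compute is scaled without lengthening the generated token trace.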
# QuickStart
To use LoopUS, clone the repository and run the chat script:
```bash
git clone https://github.com/Thrillcrazyer/LoopUS.git
cd LoopUS
uv run chat.py
```
# Illustration of LoopUS
<div align="center">
<img src="https://raw.githubusercontent.com/Thrillcrazyer/LoopUS/main/assets/Framework.png" width="800"/>
</div>
# Citation
If you find this work useful, please cite:
```bibtex
@misc{park2026loopus,
      title={LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models},
      author={Taekhyun Park and Yongjae Lee and Dohee Kim and Hyerim Bae},
      year={2026},
      eprint={2605.11011},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2605.11011},
}
```