---
base_model:
  - TinyLlama/TinyLlama_v1.1
datasets:
  - HuggingFaceFW/fineweb-edu
license: apache-2.0
model_name: TinyLlama_v1.1_LoopUS
tags:
  - LoopUS
  - LoopedTransformers
pipeline_tag: text-generation
---

LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models

Taekhyun Park¹, Yongjae Lee¹, Dohee Kim², Hyerim Bae¹,†

¹BAELAB, Pusan National University, Busan, Korea
²DOLAB, Changwon National University, Changwon, Korea

🌟 GitHub | 🌐 Project Page | 📄 Paper

Abstract

Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. We introduce Looped Depth Up-Scaling (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through block decomposition, an input-dependent selective gate, random deep supervision, and a confidence head for adaptive early exiting. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch.
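At a high level, the recasting described above can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the released implementation: the split into encoder, loop_block, and decoder, the linear gate and confidence_head, the gating formula, and the exit threshold are all assumptions made for exposition.

import torch
import torch.nn as nn

class LoopedLatentLM(nn.Module):
    """Illustrative sketch of a LoopUS-style looped model (module names and shapes assumed)."""

    def __init__(self, encoder, loop_block, decoder, hidden_size, max_loops=8, exit_threshold=0.9):
        super().__init__()
        self.encoder = encoder          # early pretrained layers (assumed split point)
        self.loop_block = loop_block    # middle layers reused across loop iterations
        self.decoder = decoder          # late layers plus LM head
        # input-dependent selective gate: mixes the refined state with the previous one
        self.gate = nn.Linear(hidden_size, 1)
        # confidence head used for adaptive early exiting
        self.confidence_head = nn.Linear(hidden_size, 1)
        self.max_loops = max_loops
        self.exit_threshold = exit_threshold

    def forward(self, input_ids):
        h = self.encoder(input_ids)                      # encode the input once
        for _ in range(self.max_loops):                  # latent refinement loop
            h_new = self.loop_block(h)
            g = torch.sigmoid(self.gate(h_new))          # input-dependent gate in [0, 1]
            h = g * h_new + (1.0 - g) * h                # selective update of the latent state
            conf = torch.sigmoid(self.confidence_head(h)).mean()
            if conf > self.exit_threshold:               # adaptive early exit
                break
        return self.decoder(h)                           # decode refined latents into logits

Under this reading, random deep supervision would correspond to also applying the language-modeling loss at a randomly sampled loop depth during post-training rather than only at the final iteration; see the paper and repository for the actual training recipe.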

QuickStart

To use LoopUS, clone the repository and run the chat script:

git clone https://github.com/Thrillcrazyer/LoopUS.git
cd LoopUS
uv run chat.py
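The checkpoint may also be loadable directly through the Hugging Face transformers API. The following is a hedged sketch: the repository id and the need for trust_remote_code are assumptions, and the repository's chat.py remains the reference entry point.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repository id; replace with the actual id of this model card.
model_id = "Thrillcrazyer/TinyLlama_v1.1_LoopUS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code is assumed here because the looped architecture is custom;
# if the checkpoint loads as a plain causal LM, the flag can be dropped.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Q: What is 17 * 24?\nA:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))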

Illustration of LoopUS

Citation

If you find this work useful, please cite:

@misc{park2026loopus,
      title={LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models}, 
      author={Taekhyun Park and Yongjae Lee and Dohee Kim and Hyerim Bae},
      year={2026},
      eprint={2605.11011},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2605.11011}, 
}