---
license: mit
language:
- en
tags:
- image-restoration
- super-resolution
- visual-autoregressive
- pytorch
---

# VARestorer: One-Step VAR Distillation for Real-World Image Super-Resolution (ICLR 2026)
VARestorer Logo

[📄 Paper](https://openreview.net/forum?id=T2Oihh7zN8)   [📝 arXiv](http://arxiv.org/abs/2604.21450)   [🏠 Project Page](https://eternalevan.github.io/VARestorer-proj/)   [💻 Code](https://github.com/EternalEvan/VARestorer)

**[Yixuan Zhu\*](https://eternalevan.github.io/), [Shilin Ma\*](https://github.com/cyp336/), [Haolin Wang](https://howlin-wang.github.io/), [Ao Li](https://rammusleo.github.io/), Yanzhe Jing, [Yansong Tang†](https://andytang15.github.io/), [Lei Chen](https://scholar.google.com/citations?user=8bMh-FQAAAAJ&hl=zh-CN&oi=sra), [Jiwen Lu](http://ivg.au.tsinghua.edu.cn/Jiwen_Lu/), [Jie Zhou](https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en)**

(\* Equal contribution, † Corresponding author)

Tsinghua University
VARestorer is the official Hugging Face model repository for the ICLR 2026 paper **"VARestorer: One-Step VAR Distillation for Real-World Image Super-Resolution."** It distills a pre-trained text-to-image visual autoregressive (VAR) model into a **single-step** real-world image super-resolution system. This Hugging Face repository includes the released checkpoint assets together with a runnable snapshot of the official codebase for convenient download and inference. The primary development home remains the official [GitHub repository](https://github.com/EternalEvan/VARestorer).

## Real-World Restoration at a Glance

| ![](./assets/teaser_car.webp) | ![](./assets/teaser_field.webp) | ![](./assets/teaser_corgi.webp) |
|:---:|:---:|:---:|
| Street Scene | Landscape | Corgi Portrait |

Left half: real degraded input | Right half: VARestorer one-step output.
Want to drag the divider yourself? → Try the interactive slider on the [project page](https://eternalevan.github.io/VARestorer-proj/).

| **1 step** | **0.23 s** | **~10× faster** | **27.3 M params** |
| :---: | :---: | :---: | :---: |
| one-pass inference | per 512×512 image | than VAR baseline | trainable (1.2% of total) |

## Pipeline

![](./assets/pipeline.png)

## Download and Use

You can either clone this Hugging Face repository directly or use the primary GitHub repository. The commands below assume the usual GitHub workflow, but the same directory layout is now mirrored here as well.

1. Clone the repository and install the dependencies:

   ```bash
   # Option A: clone the primary GitHub repository
   git clone https://github.com/EternalEvan/VARestorer.git
   cd VARestorer
   pip install -r requirements.txt
   pip install --no-build-isolation git+https://github.com/cloneofsimo/lora.git
   pip install --no-build-isolation flash_attn==2.8.3
   ```

   If you prefer to clone the Hugging Face mirror instead, use:

   ```bash
   # Option B: clone the Hugging Face mirror
   git clone https://huggingface.co/EternalEvan/VARestorer
   cd VARestorer
   pip install -r requirements.txt
   pip install --no-build-isolation git+https://github.com/cloneofsimo/lora.git
   pip install --no-build-isolation flash_attn==2.8.3
   ```

2. Download the main checkpoint from this repository ([weights](https://huggingface.co/EvanEternal/VARestorer/tree/main/weights)) or from [Google Drive](https://drive.google.com/file/d/1NkwlvNfr7nOkN45VWmO-PXbJZ8Nkt2_l/view?usp=drive_link):

   ```bash
   huggingface-cli download EvanEternal/VARestorer varestorer.pth --local-dir ./weights
   ```

3. Download the additional dependencies required by the official release:

   - [`google/flan-t5-xl`](https://huggingface.co/google/flan-t5-xl) into `./weights/flan-t5-xl`
   - [`lxq007/DiffBIR`](https://huggingface.co/lxq007/DiffBIR/blob/main/general_swinir_v1.ckpt) as `./weights/general_swinir_v1.ckpt`
   - [`FoundationVision/Infinity`](https://huggingface.co/FoundationVision/Infinity/blob/main/infinity_vae_d32reg.pth) as `./weights/infinity_vae_d32reg.pth`
4. Run inference:

   ```bash
   bash scripts/infer.sh
   ```

For the latest updates, issue tracking, and future development, please refer to the [official GitHub repository](https://github.com/EternalEvan/VARestorer).

## Links

- [Paper (OpenReview)](https://openreview.net/forum?id=T2Oihh7zN8)
- [arXiv](http://arxiv.org/abs/2604.21450)
- [Project Page](https://eternalevan.github.io/VARestorer-proj/)
- [Code Repository](https://github.com/EternalEvan/VARestorer)

## Citation

```bibtex
@inproceedings{zhu2026varestorer,
  title     = {VARestorer: One-Step VAR Distillation for Real-World Image Super-Resolution},
  author    = {Zhu, Yixuan and Ma, Shilin and Wang, Haolin and Li, Ao and Jing, Yanzhe and Tang, Yansong and Chen, Lei and Lu, Jiwen and Zhou, Jie},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026},
  url       = {https://openreview.net/forum?id=T2Oihh7zN8}
}
```
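The download steps above leave four assets under `./weights`. Before launching `scripts/infer.sh`, you can sanity-check that everything is in place with a short stdlib script. This is a minimal sketch: the `check_weights` helper and its file list are reconstructed from this README, not part of the official release.

```python
from pathlib import Path

# Assets expected by "Download and Use" (paths relative to the repo root;
# this list is reconstructed from the README, not read from the codebase).
REQUIRED = [
    "weights/varestorer.pth",           # main checkpoint (step 2)
    "weights/flan-t5-xl",               # google/flan-t5-xl directory (step 3)
    "weights/general_swinir_v1.ckpt",   # SwinIR checkpoint from lxq007/DiffBIR (step 3)
    "weights/infinity_vae_d32reg.pth",  # Infinity VAE checkpoint (step 3)
]

def check_weights(root: str = ".") -> list[str]:
    """Return the required asset paths that are missing under `root`."""
    base = Path(root)
    return [p for p in REQUIRED if not (base / p).exists()]

if __name__ == "__main__":
    missing = check_weights()
    if missing:
        print("Missing assets:\n  " + "\n  ".join(missing))
    else:
        print("All assets in place; run: bash scripts/infer.sh")
```

If anything is listed as missing, repeat the corresponding download step before running inference.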