Overview
Introduction
We propose OmniDoc-TokenBench, introduced alongside Qwen-Image-VAE-2.0: a curated benchmark specifically designed to evaluate VAE reconstruction on text-rich document images. It contains ~3K samples spanning nine categories (book, slides, color textbook, exam paper, academic paper, magazine, financial report, newspaper, note) in both English and Chinese, alongside an evaluation toolkit supporting PSNR, SSIM, LPIPS, FID, and OCR-based NED metrics.
We develop OmniDoc-TokenBench based on OmniDocBench. We crop each sample from a text block and resize it to 256×256, then filter for a character count range ([200, 600] for Chinese, [300, 600] for English) to ensure a reference font size of approximately 16px and 10px, respectively. We deduplicate via n-gram overlap and manually inspect for quality.
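The n-gram deduplication step could look like the following minimal sketch. The n-gram size and the Jaccard-overlap threshold are illustrative assumptions, not values from the report, and the function names are hypothetical:

```python
def char_ngrams(text: str, n: int = 5) -> set[str]:
    """Set of character n-grams of `text` (whole text if shorter than n)."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def is_duplicate(text: str, seen_ngram_sets: list[set[str]],
                 n: int = 5, threshold: float = 0.8) -> bool:
    """Flag `text` as a duplicate if its character n-gram set has high
    Jaccard overlap with any previously kept sample.
    NOTE: n and threshold are illustrative; the report does not specify them."""
    grams = char_ngrams(text, n)
    for other in seen_ngram_sets:
        inter = len(grams & other)
        union = len(grams | other)
        if union and inter / union >= threshold:
            return True
    return False
```

A new sample would be kept only if `is_duplicate` returns `False`, after which its n-gram set is appended to `seen_ngram_sets`.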
Evaluation Metric
Beyond traditional metrics (PSNR, SSIM, LPIPS, FID), we use NED (Normalized Edit Distance) as the primary text-fidelity metric. NED directly measures text preservation by comparing the character sequences recognized by OCR from the original and reconstructed images, normalizing their Levenshtein distance by the longer sequence length.
NED is sensitive to semantic corruption such as character substitutions, making it a necessary complement when traditional pixel- and perception-level metrics alone cannot distinguish legible from corrupted text.
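The metric above can be sketched as follows. This is a minimal illustration, assuming the two strings are OCR outputs for the original and reconstructed images, and using the similarity convention (higher is better, matching the scores reported below); the function names are illustrative, not the toolkit's API:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def ned_similarity(ref: str, hyp: str) -> float:
    """1 - (edit distance / max length): 1.0 means exact text preservation."""
    if not ref and not hyp:
        return 1.0
    return 1.0 - levenshtein(ref, hyp) / max(len(ref), len(hyp))
```

For example, a single substituted character in a four-character string yields a score of 0.75, while a perfect reconstruction scores 1.0.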
Performance
We conduct a comprehensive evaluation on OmniDoc-TokenBench (~3K text-rich images, 256×256 resolution). Models are grouped by spatial compression factor and sorted by NED within each group.
Our Qwen-Image-VAE-2.0 achieves state-of-the-art reconstruction across all compression ratios. The f16c128 variant attains SSIM 0.9706 and PSNR 30.45 dB, surpassing the best f8 baseline (FLUX.1-dev at 0.9364 / 26.24 dB) despite 2× higher spatial compression. In terms of text fidelity (NED), f16c128 reaches 0.9617, exceeding all evaluated VAEs. Even under extreme f32 compression, our f32c192 achieves NED 0.8555, surpassing multiple f16 baselines.
Evaluation
For evaluation scripts and usage instructions, see our GitHub Repository.
Citation
If you use OmniDoc-TokenBench or this evaluation toolkit in your research, please cite:
```bibtex
@misc{zhang2026qwenimagevae20technicalreport,
  title={Qwen-Image-VAE-2.0 Technical Report},
  author={Zekai Zhang and Deqing Li and Kuan Cao and Yujia Wu and Chenfei Wu and Yu Wu and Liang Peng and Hao Meng and Jiahao Li and Jie Zhang and Kaiyuan Gao and Kun Yan and Lihan Jiang and Ningyuan Tang and Shengming Yin and Tianhe Wu and Xiao Xu and Xiaoyue Chen and Yan Shu and Yanran Zhang and Yilei Chen and Yixian Xu and Yuxiang Chen and Zhendong Wang and Zihao Liu and Zikai Zhou and Yiliang Gu and Yi Wang and Xiaoxiao Xu and Lin Qu},
  year={2026},
  eprint={2605.13565},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2605.13565},
}
```
Acknowledgements
OmniDoc-TokenBench is a derivative dataset based on OmniDocBench. We thank the OmniDocBench authors for their great work.
License
This dataset is developed by the Qwen Team at Alibaba Group, and licensed under the Apache License 2.0.