Add model card for BARD-VL

#4
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +46 -0
README.md ADDED
@@ -0,0 +1,46 @@
---
pipeline_tag: image-text-to-text
---

# BARD-VL

BARD (Bridging AutoRegressive and Diffusion) is a framework that converts a pretrained autoregressive vision-language model (VLM) into a decoding-efficient diffusion VLM (dVLM). Through progressive block merging and stage-wise distillation, BARD-VL achieves up to a 3× decoding-throughput speedup over the source autoregressive model while maintaining strong performance on multimodal tasks.

- **Paper:** [BARD: Bridging AutoRegressive and Diffusion Vision-Language Models Via Highly Efficient Progressive Block Merging and Stage-Wise Distillation](https://huggingface.co/papers/2604.16514)
- **Repository:** [https://github.com/fudan-generative-vision/Bard-VL](https://github.com/fudan-generative-vision/Bard-VL)
- **Project Page:** [https://fudan-generative-vision.github.io/Bard-VL](https://fudan-generative-vision.github.io/Bard-VL)

## Method Overview

The BARD framework introduces two main stages to bridge the autoregressive and diffusion paradigms:
1. **Progressive Block Merging (PBM):** gradually enlarges the decoding block size, easing the transition from token-by-token autoregressive decoding to block-wise diffusion decoding.
2. **Stage-Wise Distillation (SWD):** intra-dVLM distillation from a fixed small-block diffusion anchor to recover the performance lost at larger block sizes.

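The interplay of the two stages can be sketched numerically: at each stage of a growing block-size schedule, the student's output distribution is pulled toward a fixed anchor via a KL objective. Everything below (logit shapes, KL direction, the doubling schedule, the learning rate) is an illustrative assumption for intuition only, not the paper's actual objective or training procedure:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    """KL(p || q) for two probability vectors."""
    return float((p * (np.log(p) - np.log(q))).sum())

def swd_distill_step(student_logits, anchor_logits, lr=0.5):
    """One toy Stage-Wise Distillation step: nudge the large-block student's
    logits toward the fixed small-block anchor by descending the KL gradient."""
    p = softmax(anchor_logits)   # fixed anchor distribution (teacher)
    q = softmax(student_logits)  # current student distribution
    grad = q - p                 # gradient of KL(p||q) w.r.t. student logits
    return student_logits - lr * grad

def pbm_swd(anchor_logits, student_logits, block_sizes=(2, 4, 8), steps=50):
    """Toy PBM schedule: repeat the distillation at each (growing) block size."""
    for _ in block_sizes:        # one stage per block size (toy: same data per stage)
        for _ in range(steps):
            student_logits = swd_distill_step(student_logits, anchor_logits)
    return student_logits
```

Under this toy objective, each stage shrinks the student-anchor KL gap before the block size is enlarged again, which is the intuition behind recovering quality stage by stage rather than jumping directly to a large block.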
## Usage

To use BARD-VL, please clone the [official repository](https://github.com/fudan-generative-vision/Bard-VL) and follow the installation instructions. You can then run inference for image and video understanding using the provided `inference.py` script:

```bash
python3 inference.py \
    --model_id <path_to_model_checkpoint> \
    --block_size 4 \
    --denoising_steps 4 \
    --confidence_threshold 0.6
```
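The flags above map onto a confidence-based block-diffusion decoding loop: each block of `block_size` masked tokens is denoised in parallel for up to `denoising_steps` iterations, and positions whose predicted confidence exceeds `confidence_threshold` are committed early. The sketch below is a hypothetical illustration of that idea, not the repository's actual implementation; `predict` is a stand-in for the dVLM:

```python
MASK = -1  # sentinel for a not-yet-decoded token position

def decode_block(predict, prompt, block_size=4, denoising_steps=4, threshold=0.6):
    """Fill one block of masked tokens via confidence-based parallel denoising.

    `predict(tokens)` stands in for the dVLM: it returns one (token_id,
    confidence) pair per block position, conditioned on the current
    partially-masked sequence.
    """
    block = [MASK] * block_size
    for _ in range(denoising_steps):
        preds = predict(prompt + block)
        for i, (tok, conf) in enumerate(preds):
            # Commit only positions that are still masked and confidently predicted.
            if block[i] == MASK and conf >= threshold:
                block[i] = tok
        if MASK not in block:
            break  # whole block decoded early
    # Fallback: commit any remaining masked positions with their latest predictions.
    preds = predict(prompt + block)
    for i in range(block_size):
        if block[i] == MASK:
            block[i] = preds[i][0]
    return block
```

A larger `block_size` decodes more tokens per forward pass (higher throughput), while a higher `confidence_threshold` trades some of that parallelism for more conservative, higher-confidence commitments.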

## Citation

If you find BARD-VL useful in your research, please cite the following paper:

```bibtex
@article{chen2026bard,
  title={BARD: Bridging AutoRegressive and Diffusion Vision-Language Models Via Highly Efficient Progressive Block Merging and Stage-Wise Distillation},
  author={Chen, Baoyou and Xia, Hanchen and Tu, Peng and Shi, Haojun and Mu, Shan and Yuan, Weihao and Zhu, Siyu},
  journal={arXiv preprint arXiv:2604.16514},
  year={2026}
}
```

## Acknowledgements

This project builds on top of [NVIDIA NeMo AutoModel](https://github.com/NVIDIA-NeMo/Automodel).