Add model card for BARD-VL

#3
by nielsr
Files changed (1)
  1. README.md +44 -0
README.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ pipeline_tag: image-text-to-text
+ ---
+
+ # BARD-VL
+
+ [BARD](https://huggingface.co/papers/2604.16514) is a simple and effective bridging framework that converts a pretrained autoregressive vision-language model (VLM) into a same-architecture, decoding-efficient large-block diffusion VLM (dVLM).
+
+ - **Paper:** [BARD: Bridging AutoRegressive and Diffusion Vision-Language Models Via Highly Efficient Progressive Block Merging and Stage-Wise Distillation](https://huggingface.co/papers/2604.16514)
+ - **Project Page:** [https://fudan-generative-vision.github.io/Bard-VL](https://fudan-generative-vision.github.io/Bard-VL)
+ - **Repository:** [https://github.com/fudan-generative-vision/Bard-VL](https://github.com/fudan-generative-vision/Bard-VL)
+
+ ## Method Overview
+
+ BARD combines progressive supervised block merging, which gradually enlarges the decoding block size, with stage-wise intra-dVLM distillation from a fixed small-block diffusion anchor to recover the performance lost at larger block sizes. BARD-VL establishes a new state of the art among open dVLMs of comparable scale while achieving up to a 3$\times$ decoding-throughput speedup over the source autoregressive model.
+
+ ## Inference
+
+ To use the model for inference, please follow the installation instructions in the [official repository](https://github.com/fudan-generative-vision/Bard-VL). You can then run the provided `inference.py` script for image and video understanding:
+
+ ```bash
+ python3 inference.py \
+     --model_id fudan-generative-ai/Bard-VL-B4-Mask-4B-Instruct \
+     --block_size 4 \
+     --denoising_steps 4 \
+     --confidence_threshold 0.6
+ ```
+
+ ## Citation
+
+ If you find BARD-VL useful in your research, please cite the following paper:
+
+ ```bibtex
+ @article{chen2026bard,
+   title={BARD: Bridging AutoRegressive and Diffusion Vision-Language Models Via Highly Efficient Progressive Block Merging and Stage-Wise Distillation},
+   author={Chen, Baoyou and Xia, Hanchen and Tu, Peng and Shi, Haojun and Mu, Shan and Yuan, Weihao and Zhu, Siyu},
+   journal={arXiv preprint arXiv:2604.16514},
+   year={2026}
+ }
+ ```
+
+ ## Acknowledgements
+
+ This repository builds on top of [NVIDIA NeMo AutoModel](https://github.com/NVIDIA-NeMo/Automodel).