Add model card for BARD-VL

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +46 -0
README.md ADDED
@@ -0,0 +1,46 @@
---
pipeline_tag: image-text-to-text
---

# BARD-VL

BARD (Bridging AutoRegressive and Diffusion) is a framework for converting pretrained autoregressive vision-language models (VLMs) into decoding-efficient diffusion VLMs (dVLMs). By combining progressive supervised block merging with stage-wise intra-dVLM distillation, BARD-VL maintains the strong multimodal capabilities of models like Qwen3-VL while achieving up to a 3× decoding throughput speedup.

- **Paper:** [BARD: Bridging AutoRegressive and Diffusion Vision-Language Models Via Highly Efficient Progressive Block Merging and Stage-Wise Distillation](https://huggingface.co/papers/2604.16514)
- **Repository:** [https://github.com/fudan-generative-vision/Bard-VL](https://github.com/fudan-generative-vision/Bard-VL)
- **Project Page:** [https://fudan-generative-vision.github.io/Bard-VL](https://fudan-generative-vision.github.io/Bard-VL)

## Model Description

Autoregressive VLMs offer strong multimodal capability, but their token-by-token decoding imposes an inference bottleneck. BARD addresses this by converting these models into same-architecture, large-block diffusion VLMs. Key components include:
- **Progressive Supervised Block Merging (PBM):** gradually enlarges the decoding block size during training.
- **Stage-Wise Distillation (SWD):** recovers performance lost at larger block sizes by distilling from a fixed small-block diffusion anchor.

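The interplay of the two components can be sketched as a toy schedule (illustrative Python only; the block sizes, quality numbers, and function names below are invented for this sketch, not taken from the paper or repository):

```python
# Toy sketch: PBM enlarges the decoding block stage by stage, and SWD
# measures how far each enlarged-block student drifts from a fixed
# small-block diffusion anchor. All numbers are invented placeholders.

def pbm_block_sizes(start=1, target=8):
    """Progressive block merging: grow the block size across stages."""
    sizes, b = [], start
    while b <= target:
        sizes.append(b)
        b *= 2  # e.g. double the block size at each merging stage
    return sizes

def swd_gap(student_quality, anchor_quality):
    """Stage-wise distillation closes the gap to the fixed anchor."""
    return max(0.0, anchor_quality - student_quality)

anchor = 1.00                                    # fixed small-block anchor
quality = {1: 1.00, 2: 0.97, 4: 0.92, 8: 0.85}   # placeholder qualities

for b in pbm_block_sizes():
    print(f"block={b}: gap to anchor = {swd_gap(quality[b], anchor):.2f}")
```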
Experimental results show that BARD-VL sets a new state of the art among open dVLMs of comparable scale at the 4B and 8B sizes.

## Usage

To use this model, follow the installation and setup instructions in the [official GitHub repository](https://github.com/fudan-generative-vision/Bard-VL). You can run inference for image or video understanding with the provided `inference.py` script:

```bash
python3 inference.py \
    --model_id fudan-generative-ai/Bard-VL-B4-Mask-4B-Instruct \
    --block_size 4 \
    --denoising_steps 4 \
    --confidence_threshold 0.6
```
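The `--block_size`, `--denoising_steps`, and `--confidence_threshold` flags map onto block-diffusion decoding: each block of masked tokens is denoised in parallel for a few steps, and a position is committed once its confidence clears the threshold. A minimal toy sketch of that loop (the scoring function is a random stand-in, not BARD-VL's actual decoder):

```python
import random

MASK = "<mask>"

def toy_scores(block):
    # Stand-in scorer: proposes a token and a confidence per masked slot.
    return [("tok%d" % i, random.random()) if t == MASK else (t, 1.0)
            for i, t in enumerate(block)]

def decode_block(block_size=4, denoising_steps=4, confidence_threshold=0.6):
    block = [MASK] * block_size
    for _ in range(denoising_steps):
        proposals = toy_scores(block)
        # Commit every masked position whose confidence clears the
        # threshold; always commit at least the most confident masked
        # position so each denoising step makes progress.
        best = max(range(block_size),
                   key=lambda i: proposals[i][1] if block[i] == MASK else -1.0)
        for i, (tok, conf) in enumerate(proposals):
            if block[i] == MASK and (conf >= confidence_threshold or i == best):
                block[i] = tok
        if MASK not in block:
            break
    return block

print(decode_block())  # → ['tok0', 'tok1', 'tok2', 'tok3']
```

A higher threshold commits fewer positions per step (more steps, typically better quality), while a larger block size decodes more tokens in parallel.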

## Citation

```bibtex
@article{chen2026bard,
  title={BARD: Bridging AutoRegressive and Diffusion Vision-Language Models Via Highly Efficient Progressive Block Merging and Stage-Wise Distillation},
  author={Chen, Baoyou and Xia, Hanchen and Tu, Peng and Shi, Haojun and Mu, Shan and Yuan, Weihao and Zhu, Siyu},
  journal={arXiv preprint arXiv:2604.16514},
  year={2026}
}
```

## Acknowledgements

This repository builds on top of [NVIDIA NeMo AutoModel](https://github.com/NVIDIA-NeMo/Automodel).