# ColQwen2.5 XTR Pruned

Trained with the XTR methodology, LoRA fine-tuning, and token pruning.
## Training Info
- Final m_i: 0.5811
- Epoch: 1
- Batch: 1092
- LoRA rank: 32
- Pruning ratio: 0.5
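To illustrate what a pruning ratio of 0.5 means in a multi-vector model like this one, here is a minimal, hypothetical sketch (not the training code): token embeddings are ranked by L2 norm and only the strongest half are kept for retrieval.

```python
def prune_tokens(embeddings, pruning_ratio=0.5):
    """Keep the highest-norm token embeddings; drop `pruning_ratio` of them.
    Illustrative only -- the actual pruning criterion used in training
    may differ."""
    norms = [sum(x * x for x in vec) ** 0.5 for vec in embeddings]
    keep = max(1, round(len(embeddings) * (1 - pruning_ratio)))
    # Indices of the `keep` largest-norm tokens, original order preserved.
    ranked = sorted(range(len(embeddings)), key=lambda i: norms[i], reverse=True)
    kept = sorted(ranked[:keep])
    return [embeddings[i] for i in kept]

tokens = [[0.1, 0.0], [3.0, 4.0], [0.5, 0.5], [2.0, 0.0]]
pruned = prune_tokens(tokens, pruning_ratio=0.5)
# With ratio 0.5, two of the four tokens survive (the two largest norms).
```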
## Usage
```python
from peft import PeftModel
from colpali_engine.models import ColQwen2, ColQwen2Processor

# Load the base model, then attach the pruned LoRA adapter on top of it.
base_model = ColQwen2.from_pretrained("vidore/colqwen2-v0.1")
model = PeftModel.from_pretrained(base_model, "DungND1107/colqwen2_5_xtr_pruned")
processor = ColQwen2Processor.from_pretrained("DungND1107/colqwen2_5_xtr_pruned")
```
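ColQwen models score query/document pairs with ColBERT-style late interaction (MaxSim): each query token embedding is matched against its best document token, and the maxima are summed. A minimal sketch of that scoring step, shown here with plain Python lists rather than the engine's tensor implementation:

```python
def maxsim_score(query_embs, doc_embs):
    """Late-interaction (MaxSim) score: for each query token embedding,
    take its maximum dot product over all document token embeddings,
    then sum those maxima over the query tokens."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

query = [[1.0, 0.0], [0.0, 1.0]]
doc = [[0.9, 0.1], [0.2, 0.8]]
score = maxsim_score(query, doc)  # 0.9 + 0.8 = 1.7
```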