Instructions for using mi1k7/Qwen2-VL-7B-SRT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mi1k7/Qwen2-VL-7B-SRT with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="mi1k7/Qwen2-VL-7B-SRT")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("mi1k7/Qwen2-VL-7B-SRT")
model = AutoModel.from_pretrained("mi1k7/Qwen2-VL-7B-SRT")
```

- Notebooks
- Google Colab
- Kaggle
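The autogenerated snippets above load the checkpoint generically (a feature-extraction pipeline and `AutoModel`). Qwen2-VL is a vision-language chat model, so image-conditioned generation usually goes through `Qwen2VLForConditionalGeneration` and a chat-style message list. The sketch below is a hedged example of that flow: the image path, prompt, and `describe_image` helper are illustrative, and the message schema and class name follow the upstream Qwen2-VL convention, which this fine-tune is assumed to share.

```python
def build_messages(image, prompt):
    """Build the chat-style message list the Qwen2-VL processor expects.

    `image` can be a PIL image or a path/URL reference, depending on how
    the processor inputs are prepared downstream.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": prompt},
            ],
        }
    ]


def describe_image(pil_image, prompt):
    """One generation round trip (downloads the checkpoint on first call).

    Hypothetical helper: assumes this fine-tune keeps the upstream
    Qwen2-VL architecture and chat template.
    """
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

    processor = AutoProcessor.from_pretrained("mi1k7/Qwen2-VL-7B-SRT")
    model = Qwen2VLForConditionalGeneration.from_pretrained("mi1k7/Qwen2-VL-7B-SRT")

    messages = build_messages(pil_image, prompt)
    # Render the chat template to text, then tokenize text + image together.
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = processor(text=[text], images=[pil_image], return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    # Strip the prompt tokens before decoding the reply.
    reply_ids = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(reply_ids, skip_special_tokens=True)[0]
```

`build_messages` runs without downloading anything, so the message structure can be checked locally before committing to the full model download that `describe_image` triggers.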