Authors

  • Hengyu Shi
  • Boynn

Fine-tuned CLIP-ViT-bigG-14 Model

This model is a fine-tuned version of laion/CLIP-ViT-bigG-14-laion2B-39B-b160k.

Usage

from transformers import CLIPTextModelWithProjection

base_model = CLIPTextModelWithProjection.from_pretrained("Boynn/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft")
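A fuller usage sketch follows, showing how the fine-tuned text encoder can produce projected text embeddings. It assumes the tokenizer is loaded from the original laion base checkpoint (the fine-tuned repository may not ship its own tokenizer); adjust if this repository includes one.

```python
from transformers import AutoTokenizer, CLIPTextModelWithProjection

# Fine-tuned text encoder (this repository)
model = CLIPTextModelWithProjection.from_pretrained(
    "Boynn/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft"
)
# Tokenizer assumed to come from the base checkpoint
tokenizer = AutoTokenizer.from_pretrained(
    "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"
)

inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
outputs = model(**inputs)

# Projected text embeddings, shape (batch_size, projection_dim)
text_embeds = outputs.text_embeds
print(text_embeds.shape)
```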

Model details

  • Format: Safetensors
  • Model size: 0.7B params
  • Tensor type: F32