Use badge links and add model section
README.md CHANGED

@@ -2,7 +2,11 @@
 
 CropVLM is a CLIP-based zero-shot image classifier adapted for crop and fruit recognition. It compares one image embedding against text embeddings for candidate class names, then returns the class with the highest cosine similarity.
 
-
+<p align="center">
+  <a href="https://arxiv.org/abs/XXXX.XXXXX"><img src="https://img.shields.io/badge/arXiv-Paper-b31b1b.svg" alt="Paper"></a>
+  <a href="https://github.com/boudiafA/CropVLM"><img src="https://img.shields.io/badge/GitHub-Repository-181717.svg" alt="GitHub"></a>
+  <a href="https://huggingface.co/boudiafA/CropVLM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Model-Hugging%20Face-FFD21E" alt="Model"></a>
+</p>
 
 
 
@@ -55,9 +59,11 @@ pip install --index-url https://download.pytorch.org/whl/cu121 torch torchvision
 pip install -r requirements.txt
 ```
 
-##
+## Model
 
-This Hugging Face repository includes the CropVLM
+This Hugging Face repository includes the CropVLM model weights: [boudiafA/CropVLM](https://huggingface.co/boudiafA/CropVLM).
+
+The checkpoint is stored at:
 
 ```text
 models/CropVLM.pth
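The zero-shot decision rule the README describes (compare one image embedding against per-class text embeddings, return the class with the highest cosine similarity) can be sketched as below. Mock random embeddings stand in for CLIP's image and text encoders, and the 512-dimensional size is an illustrative assumption, not CropVLM's actual API:

```python
import numpy as np

def zero_shot_classify(image_embedding, text_embeddings, class_names):
    # Normalize, then take dot products: cosine similarity between the one
    # image embedding and each candidate class's text embedding.
    img = image_embedding / np.linalg.norm(image_embedding)
    txt = text_embeddings / np.linalg.norm(text_embeddings, axis=-1, keepdims=True)
    scores = txt @ img
    return class_names[int(np.argmax(scores))], scores

# Mock embeddings (hypothetical values) in place of CLIP encoder outputs.
rng = np.random.default_rng(0)
class_names = ["apple", "banana", "wheat"]
text_embeddings = rng.normal(size=(len(class_names), 512))
# Simulate an image whose embedding lies close to the "banana" text embedding.
image_embedding = text_embeddings[1] + 0.1 * rng.normal(size=512)

predicted, scores = zero_shot_classify(image_embedding, text_embeddings, class_names)
print(predicted)  # "banana" with this seed
```

In a real pipeline the two embedding arrays would come from the CLIP image and text towers (with the checkpoint above loaded on top); only the argmax-over-cosine step is shown here.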