
Cotton Disease Detection - Qwen2.5-VL Models

Project Overview

Vision-language model for cotton disease detection on drones (Jetson Orin 64GB). Binary classification: healthy vs. diseased.


Results

| Model   | Accuracy | Inference | VRAM    | Status                     |
|---------|----------|-----------|---------|----------------------------|
| 7B FP16 | 98%      | 291 ms    | 15.4 GB | Tested                     |
| 7B INT4 | 98%      | 336 ms    | 5.5 GB  | Tested                     |
| 3B FP16 | 98%      | 264 ms    | 7.13 GB | PRODUCTION                 |
| 3B INT4 | -        | -         | -       | Not supported on CUDA 13.1 |

Recommendation: 3B FP16 for Jetson deployment
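The binary labels above come from free-text model output, so the generated answer has to be mapped to one of the two classes. A minimal sketch of such a post-processing helper (an assumption; the actual test scripts may parse differently):

```python
def parse_label(generated_text: str) -> str:
    """Map free-text VLM output to a binary class label.

    Assumed post-processing helper, not taken from the training scripts.
    """
    text = generated_text.lower()
    # Check for disease first so answers like "not healthy, diseased"
    # are not misread; default to "diseased" to fail safe in the field.
    if "disease" in text:
        return "diseased"
    if "healthy" in text:
        return "healthy"
    return "diseased"

print(parse_label("The leaf looks healthy."))    # healthy
print(parse_label("Signs of disease visible."))  # diseased
```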


Directory Structure

/workspace/cotton-disease-detection/

  • data/raw/Cotton Disease/train/ - training data
  • data/raw/Cotton Disease/val/ - validation/test data
  • scripts/train_qwen25vl_lora_v2.py - 7B training script
  • scripts/train_qwen25vl_3b_simple.py - 3B training script (RECOMMENDED)
  • scripts/test_3b_fp16.py - 3B FP16 test script
  • outputs/qwen25vl_merged_v2/ - 7B merged model (~16 GB)
  • outputs/qwen25vl_3b_merged/ - 3B merged model (~7 GB), PRODUCTION
  • outputs/qwen25vl_q4_k_m.gguf - 7B GGUF INT4 (4.46 GB)
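The merged models are queried with a standard Qwen2.5-VL chat prompt combining an image and a question. A sketch of how such a message could be assembled (the prompt wording and helper name are assumptions; the scripts' exact prompt is not shown here):

```python
def build_messages(image_path: str) -> list:
    """Assemble a Qwen2.5-VL style chat message for binary classification.

    The question text is an assumption; the training scripts may use
    different phrasing.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {
                    "type": "text",
                    "text": "Is this cotton plant healthy or diseased? "
                            "Answer with one word.",
                },
            ],
        }
    ]

msgs = build_messages("leaf_001.jpg")
```

This list is what Qwen2.5-VL processors expect via their chat template before tokenization.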

Training Commands

7B Model:

python3 /workspace/cotton-disease-detection/scripts/train_qwen25vl_lora_v2.py

  • Base: Qwen/Qwen2.5-VL-7B-Instruct
  • Time: ~55 min (A100)
  • Batch: 4

3B Model (Recommended):

python3 /workspace/cotton-disease-detection/scripts/train_qwen25vl_3b_simple.py

  • Base: Qwen/Qwen2.5-VL-3B-Instruct
  • Time: ~157 min (A100)
  • Batch: 8

Testing

python3 /workspace/cotton-disease-detection/scripts/test_3b_fp16.py
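The test script reports accuracy over the validation split; a minimal sketch of that computation (assumed structure, not the actual script):

```python
def accuracy(predictions: list, labels: list) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Toy example standing in for real per-image predictions:
preds  = ["healthy", "diseased", "diseased", "healthy"]
labels = ["healthy", "diseased", "healthy", "healthy"]
print(f"accuracy: {accuracy(preds, labels):.0%}")  # accuracy: 75%
```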


Jetson Deployment

Copy to the Jetson Orin 64GB:

  • 3B FP16: outputs/qwen25vl_3b_merged/ (~7GB)
  • Alternative: outputs/qwen25vl_q4_k_m.gguf (4.46 GB, for llama.cpp)

Expected Jetson performance:

  • 3B FP16 PyTorch: ~400-500 ms
  • 3B FP16 TensorRT: ~100-150 ms (estimated)
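Per-image latencies like those above can be measured with a simple wall-clock loop. A sketch of such a timing harness, where the callable is a placeholder for the actual inference call:

```python
import time

def mean_latency_ms(fn, n_warmup: int = 3, n_runs: int = 10) -> float:
    """Average wall-clock latency of fn() in milliseconds.

    fn stands in for the real model call; warmup runs absorb one-off
    costs (CUDA context creation, caches) before timing starts.
    """
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs * 1000.0

# Dummy workload standing in for model inference:
print(f"{mean_latency_ms(lambda: sum(range(10000))):.2f} ms")
```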

LoRA Config

r=16, alpha=32, dropout=0.05; target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
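Expressed as the keyword arguments one would pass to a PEFT `LoraConfig` (a sketch; the field names assume the Hugging Face `peft` library, which the training scripts are presumed to use):

```python
# LoRA hyperparameters from this card, as a plain dict; the keys map
# onto peft.LoraConfig keyword arguments (an assumption about the setup).
lora_kwargs = dict(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```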


Changelog

  • 2025-01-29: 3B model (98% accuracy, 264 ms, 7.13 GB)
  • 2025-01-29: 7B model v2 (98% accuracy)
