A binary image classifier that distinguishes between:

- `transparent_alone`: a transparent tube by itself
- `transparent_with_blue`: a transparent tube together with a blue tube
| Property | Value |
|---|---|
| Base Model | facebook/dinov2-base (ViT-B/14, 86.6M params) |
| Training Method | Linear probe (frozen backbone + trained classifier head) |
| Training Dataset | Siddanna/transparent-tube-dataset |
| Accuracy | 100% on test set |
| Loss | 0.0014 |
| Image Size | 256×256 (DINOv2 default) |
| License | Apache 2.0 |
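The "linear probe" row above (frozen backbone + trained classifier head) can be sketched in plain PyTorch. This is an illustrative reconstruction, not the actual training code: a toy MLP stands in for the 86.6M-parameter DINOv2 backbone, and everything except the two label names and the 1e-3 learning rate is an assumption.

```python
import torch
import torch.nn as nn

# Toy stand-in for the DINOv2 backbone (the real one has 86.6M params).
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
for param in backbone.parameters():
    param.requires_grad = False  # frozen: receives no gradient updates

# The only trainable part: a linear head over backbone features.
head = nn.Linear(16, 2)  # 2 classes: transparent_alone / transparent_with_blue

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)             # dummy batch of 8 inputs
labels = torch.randint(0, 2, (8,))

with torch.no_grad():              # backbone runs inference-only
    feats = backbone(x)
loss = loss_fn(head(feats), labels)
loss.backward()                    # gradients flow into the head only
optimizer.step()

trainable = sum(p.numel() for p in head.parameters())
print(f"Trainable parameters: {trainable}")  # 16*2 weights + 2 biases = 34
```

Because only the 34-parameter head is updated, a linear probe trains in minutes and needs far fewer labeled images than full fine-tuning.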
Quick start with the pipeline API:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Siddanna/transparent-tube-classifier")
result = classifier("your_tube_image.jpg")
print(result)
# [{'label': 'transparent_with_blue', 'score': 0.99}, {'label': 'transparent_alone', 'score': 0.01}]
```
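The pipeline returns a list of `{label, score}` dicts per image. A small helper can turn that into a thresholded decision, which is useful if you want to flag low-confidence images for manual review; the 0.90 threshold is an assumption, not part of the model:

```python
def decide(predictions, threshold=0.90):
    """Pick the top label from image-classification pipeline output,
    or return "uncertain" when the best score is below the threshold."""
    top = max(predictions, key=lambda p: p["score"])
    if top["score"] < threshold:
        return "uncertain", top["score"]
    return top["label"], top["score"]

# Example using the output format shown above:
preds = [{"label": "transparent_with_blue", "score": 0.99},
         {"label": "transparent_alone", "score": 0.01}]
print(decide(preds))  # ('transparent_with_blue', 0.99)
```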
For more control, load the model directly:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Load model and processor
model = AutoModelForImageClassification.from_pretrained("Siddanna/transparent-tube-classifier")
processor = AutoImageProcessor.from_pretrained("Siddanna/transparent-tube-classifier")

# Load and classify an image
image = Image.open("your_tube_image.jpg")
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
label = model.config.id2label[predicted_class]
confidence = torch.softmax(logits, dim=-1)[0][predicted_class].item()
print(f"Prediction: {label} (confidence: {confidence:.2%})")
```
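The same argmax/softmax post-processing extends to a whole batch of images. The sketch below uses dummy logits in place of `model(**inputs).logits` so it runs standalone; the `id2label` mapping matches the model card's two classes:

```python
import torch

# Dummy logits standing in for model(**inputs).logits on a batch of 3 images.
logits = torch.tensor([[2.0, -1.0],
                       [-0.5, 1.5],
                       [3.0, 2.9]])
id2label = {0: "transparent_alone", 1: "transparent_with_blue"}  # as in model.config

probs = torch.softmax(logits, dim=-1)  # per-image class probabilities
pred_ids = logits.argmax(dim=-1)       # winning class index per image
for i, idx in enumerate(pred_ids.tolist()):
    print(f"image {i}: {id2label[idx]} ({probs[i, idx].item():.2%})")
```

Note the third row: even when the logits are nearly tied (3.0 vs 2.9), argmax still picks a class, which is why reporting the softmax probability alongside the label is worthwhile.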
Learning rate: 1e-3 (with cosine schedule)

| Epoch | Train Loss | Eval Loss | Eval Accuracy |
|---|---|---|---|
| 1 | 0.032 | 0.019 | 100% |
| 2 | 0.011 | 0.002 | 100% |
| 3 | 0.002 | 0.001 | 100% |
| 4 | 0.004 | 0.010 | 99.5% |
The model is currently trained on synthetic data. For best results with your actual tubes:
Take 50-100+ photos per class of your actual tubes and organize them as:
```
data/
├── train/
│   ├── transparent_alone/        # Photos of transparent tube alone
│   └── transparent_with_blue/    # Photos of transparent + blue tube
└── test/
    ├── transparent_alone/
    └── transparent_with_blue/
```
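A short script can build this layout from a flat folder of labeled photos. This is a convenience sketch, not part of the repository: the `photos/<class>/` source layout, the 80/20 split, and the `.jpg` extension are all assumptions.

```python
import random
import shutil
from pathlib import Path

def make_split(src: Path, dst: Path, test_frac: float = 0.2, seed: int = 0):
    """Copy src/<class>/*.jpg into dst/train|test/<class>/,
    holding out roughly test_frac of each class for the test set."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    for class_dir in sorted(p for p in src.iterdir() if p.is_dir()):
        images = sorted(class_dir.glob("*.jpg"))
        rng.shuffle(images)
        n_test = max(1, int(len(images) * test_frac))
        for i, img in enumerate(images):
            split = "test" if i < n_test else "train"
            out = dst / split / class_dir.name
            out.mkdir(parents=True, exist_ok=True)
            shutil.copy2(img, out / img.name)

# Usage (hypothetical source folder):
# make_split(Path("photos"), Path("data"))
```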
```bash
# Clone the training script

# Option A: Linear probe (fast, good with 50+ images/class)
python train.py --data_dir ./data --freeze_backbone --hub_model_id your-username/tube-classifier

# Option B: Full fine-tune (better with 200+ images/class)
python train.py --data_dir ./data --learning_rate 5e-5 --hub_model_id your-username/tube-classifier
```
Try the model: Transparent Tube Classifier Demo
```bibtex
@misc{transparent-tube-classifier,
  title={Transparent Tube Classifier},
  author={Siddanna},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/Siddanna/transparent-tube-classifier}
}
```