# YOLOv8n Handwritten Japanese Ingredients Detection
This model is a fine-tuned version of YOLOv8n specifically trained to detect handwritten Japanese ingredient regions (text blocks) from images.
## Model Description
- Task: Object Detection
- Base Model: YOLOv8n (nano)
- Target Class: `text` (handwritten text regions)
- Category: OCR Pre-processing / Vision
## Intended Use
This model is designed to be the "vision" component of a mobile application that calculates nutritional information from handwritten ingredient lists. It identifies text regions before they are passed to an OCR engine.
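In such a pipeline, the detector's bounding boxes are used to crop the text regions before OCR. A minimal sketch of that cropping step (the helper `crop_text_regions` is hypothetical, not part of this model's API; with a real detection you would pass the pixel coordinates from the model's output):

```python
import numpy as np

def crop_text_regions(image: np.ndarray, boxes_xyxy) -> list:
    """Crop detected text regions (x1, y1, x2, y2 pixel coords) out of an
    image so each crop can be handed to an OCR engine."""
    crops = []
    h, w = image.shape[:2]
    for x1, y1, x2, y2 in boxes_xyxy:
        # Clamp to image bounds and round to integer pixel indices.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(round(x2))), min(h, int(round(y2)))
        if x2 > x1 and y2 > y1:
            crops.append(image[y1:y2, x1:x2])
    return crops

# Example with a dummy image and one detection box.
img = np.zeros((100, 200, 3), dtype=np.uint8)
crops = crop_text_regions(img, [(10.0, 20.0, 110.0, 60.0)])
```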
## Training Data & Environment
- Dataset: 30 high-quality handwritten images (20 Train / 5 Val / 5 Test).
- Hardware: MacBook Air (M4) using MPS (Metal Performance Shaders).
- Training Epochs: 100
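A run like the one above can be reproduced with a short `ultralytics` training call. The dataset YAML name below is an assumed placeholder, not a file shipped with this card:

```python
from ultralytics import YOLO

# Start from the pretrained nano checkpoint and fine-tune on the custom data.
model = YOLO("yolov8n.pt")
model.train(
    data="ingredients.yaml",  # assumed path to the 20/5/5 train/val/test split
    epochs=100,               # as reported in this card
    device="mps",             # Apple Silicon Metal backend
)
```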
## Performance
Despite being fine-tuned on only 20 training images, the model achieved a substantial improvement over the stock YOLOv8n baseline:
| Metric | Value |
|---|---|
| mAP50 | 0.978 |
| Precision | 0.966 |
| Recall | 0.981 |
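For context, precision and recall here are box-level detection metrics. A minimal sketch with hypothetical counts (not the actual evaluation tallies) shows how such values arise:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Box-level precision and recall from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp)  # fraction of predicted boxes that are correct
    recall = tp / (tp + fn)     # fraction of ground-truth boxes that were found
    return precision, recall

# Hypothetical counts for illustration only.
p, r = precision_recall(tp=57, fp=2, fn=1)
```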
## Usage
You can use this model with the `ultralytics` library:

```python
from ultralytics import YOLO

# Load the fine-tuned model
model = YOLO("satoyutaka/yolov8n-handwritten-japanese-ingredients")

# Run inference on an image
results = model.predict("path/to/your/image.jpg")

# Visualize the detected text regions
results[0].show()
```
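Beyond visualizing, you will usually want the raw boxes and confidences as plain Python values. The sketch below uses a tiny stand-in class so it runs without the library; with a real model you would pass `results[0].boxes` instead, and the `min_conf` threshold of 0.25 is an assumed default, not a value from this card:

```python
class FakeBoxes:
    """Minimal stand-in for ultralytics' Boxes (xyxy coords + confidences)."""
    def __init__(self, xyxy, conf):
        self.xyxy, self.conf = xyxy, conf

def detections_to_list(boxes, min_conf: float = 0.25) -> list:
    """Turn per-box coordinates and scores into (x1, y1, x2, y2, conf)
    tuples, dropping low-confidence detections."""
    out = []
    for (x1, y1, x2, y2), c in zip(boxes.xyxy, boxes.conf):
        if c >= min_conf:
            out.append((float(x1), float(y1), float(x2), float(y2), float(c)))
    return out

# Two fake detections: one confident text region, one low-confidence box.
boxes = FakeBoxes(xyxy=[(12, 30, 220, 80), (5, 5, 9, 9)], conf=[0.91, 0.12])
dets = detections_to_list(boxes)
```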
Created by satoyutaka.
## Evaluation Results
- mAP50 on Handwritten Japanese Ingredients List (Private): 0.978 (self-reported)