---
language: en
license: apache-2.0
tags:
- image-classification
- medical
- dermatology
- skin-disease
- ensemble
datasets:
- merolavtechnology/dermnet-skin40-cleaned-dataset
metrics:
- accuracy
- f1
---

# DermNet-Skin23 — ConvNeXt-V1-XL @ 384

ConvNeXt-V1-XL fine-tuned on a 23-class consolidation of DermNet + Skin40, paired with [iamcode6/dermnet-skin23-eva02](https://huggingface.co/iamcode6/dermnet-skin23-eva02) for cross-architecture ensembling.

## Results

Single best (EMA): 80.47% acc / 0.7843 macro F1.

5-model cross-architecture ensemble (2× EVA-02-L + 3× ConvNeXt-V1-XL) with 4-aug TTA: **82.86% acc / 0.8113 macro F1**.
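
The ensembling code isn't included in this card; as a rough sketch, cross-architecture ensembling with TTA typically averages the softmax probabilities over every (model, augmented view) pair. The function names and toy logits below are illustrative only:

```python
import math

def softmax(logits):
    """Convert one model's logit vector to class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_tta_probs(logits_per_view):
    """Average softmax probabilities over all (model, TTA-view) pairs.

    `logits_per_view` holds one logit vector per model/augmentation
    combination (here, 5 models x 4 TTA views = 20 vectors of length 23).
    """
    probs = [softmax(l) for l in logits_per_view]
    n_classes = len(probs[0])
    return [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]

# Toy example: 3 classes, two "views" of the same image.
views = [[2.0, 1.0, 0.1], [1.5, 1.2, 0.3]]
avg = ensemble_tta_probs(views)
pred = max(range(len(avg)), key=avg.__getitem__)
```

In practice the 20 logit vectors would come from forward passes at 384×384; the argmax of the averaged probabilities is the ensemble prediction.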

## Dataset

Source: merolavtechnology/dermnet-skin40-cleaned-dataset on Kaggle. The 40 fine-grained Skin40 categories were consolidated into 23 broader DermNet buckets.
Final split: 17,557 train / 3,856 test images.
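
The 40 → 23 consolidation is just a many-to-one label mapping. The class names below are hypothetical placeholders; the real mapping lives in the dataset preprocessing, not in this card:

```python
# Hypothetical label names for illustration only; the actual Skin40 -> DermNet
# bucket mapping is defined during dataset preparation.
CONSOLIDATE = {
    "acne-vulgaris": "acne",
    "acne-rosacea": "acne",
    "psoriasis-plaque": "psoriasis",
    "psoriasis-guttate": "psoriasis",
}

def consolidate(fine_label):
    """Map a fine-grained label to its broader bucket; labels without
    an entry are already at bucket granularity and pass through."""
    return CONSOLIDATE.get(fine_label, fine_label)
```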

## Training

- Hardware: AMD Instinct MI300X (192 GB HBM3), ROCm 7.0
- Backbone: convnext_xlarge.fb_in22k_ft_in1k_384 (~350M params)
- 25 epochs, batch 64, AdamW, cosine LR with 10% warmup, peak LR=1.1e-4
- Mixup α=0.1 + Cutmix α=0.5 at prob=0.5; off in the last 20% of epochs
- WeightedRandomSampler with effective-number weights
- EMA decay=0.999, SWA over the last 20% of epochs, bf16 autocast
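
"Effective-number weights" refers to class-balanced weighting in the style of Cui et al. (2019), where each class is weighted by the inverse of its effective sample number. A minimal sketch, assuming a typical β=0.9999 (the β actually used here is not stated in this card):

```python
def effective_number_weights(counts, beta=0.9999):
    """Class weights w_c = (1 - beta) / (1 - beta**n_c), rescaled so the
    weights sum to the number of classes (a common convention)."""
    raw = [(1.0 - beta) / (1.0 - beta ** n) for n in counts]
    scale = len(counts) / sum(raw)
    return [w * scale for w in raw]

# Toy imbalanced counts: rarer classes receive larger weights.
counts = [5000, 500, 50]
w = effective_number_weights(counts)
```

Each training sample would then carry its class's weight when constructing the WeightedRandomSampler, so rare classes are oversampled.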

## Notes

ConvNeXt V2-Huge was tried first but proved bf16-unstable on long runs (GRN issue), so V1-XL is the reliable choice. EMA decay of 0.9999 was too slow for a 25-epoch fine-tune from a fresh head; 0.999 fixes it.
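
The decay choice can be sanity-checked with the rule of thumb that an EMA effectively averages over roughly 1/(1 − decay) recent steps. The step counts below are derived from this card's numbers (17,557 train images, batch 64, 25 epochs):

```python
def ema_horizon(decay):
    """Approximate number of recent steps an EMA averages over."""
    return 1.0 / (1.0 - decay)

steps_per_epoch = 17_557 // 64          # ~274 optimizer steps per epoch
total_steps = 25 * steps_per_epoch      # ~6,850 steps for the whole run
```

With decay=0.9999 the ~10,000-step horizon exceeds the entire run, so the EMA weights never catch up to the fine-tuned head; 0.999 gives a ~1,000-step horizon that tracks training while still smoothing it.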