File size: 11,132 Bytes
e0fc64c
94a12ca
 
 
e0fc64c
28a3a0c
 
 
 
 
94a12ca
e0f8da5
94a12ca
28a3a0c
94a12ca
 
 
28a3a0c
 
 
94a12ca
 
 
 
 
 
 
28a3a0c
 
94a12ca
 
 
 
 
 
28a3a0c
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
e0fc64c
 
28a3a0c
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
e0f8da5
 
28a3a0c
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
e0f8da5
28a3a0c
 
 
 
 
 
 
 
 
 
 
 
 
 
 
e0f8da5
 
28a3a0c
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swinv2-base-patch4-window8-256
tags:
- image-classification
- medical-imaging
- thyroid
- ultrasound
- swinv2
- generated_from_trainer
- ml-intern
datasets:
- sadib2026/roi-dataset-tn5000
metrics:
- accuracy
- f1
- auc_roc
- sensitivity
- specificity
model-index:
- name: TN5000_model
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: TN5000 ROI Dataset
      type: image-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.872
    - name: F1
      type: f1
      value: 0.908
    - name: AUC-ROC
      type: auc_roc
      value: 0.937
    - name: Sensitivity
      type: sensitivity
      value: 0.875
    - name: Specificity
      type: specificity
      value: 0.865
---

# TN5000 Thyroid Ultrasound Classifier

**Fine-tuned SwinV2-Base for Benign vs Malignant Thyroid Nodule Classification**

[![Model](https://img.shields.io/badge/Model-HuggingFace-yellow)](https://huggingface.co/Johnyquest7/TN5000_model)
[![Demo](https://img.shields.io/badge/Demo-Gradio-green)](https://huggingface.co/spaces/Johnyquest7/tn5000-thyroid-demo)
[![Dataset](https://img.shields.io/badge/Dataset-Kaggle-blue)](https://www.kaggle.com/datasets/sadib2026/roi-dataset-tn5000)

---

## 📋 Table of Contents

1. [Overview](#overview)
2. [Model Architecture](#model-architecture)
3. [Dataset](#dataset)
4. [Training Methodology](#training-methodology)
5. [Results](#results)
6. [External Validation](#external-validation)
7. [How to Use](#how-to-use)
8. [Limitations & Disclaimers](#limitations--disclaimers)
9. [Citation](#citation)

---

## Overview

This model classifies thyroid ultrasound images as **benign** or **malignant**, designed to assist in the early detection of thyroid cancer. It was fine-tuned from Microsoft's SwinV2-Base vision transformer on the TN5000 ROI dataset from Kaggle.

**Key Design Decisions:**
- **Optimized for sensitivity** (87.5%) to minimize missed malignancies — critical in cancer screening
- **AUC-ROC of 0.94** indicates excellent discriminative ability
- **Focal loss with class weighting** handles the benign/malignant class imbalance
- **Early stopping** prevents overfitting on the small medical dataset

---

## Model Architecture

| Property | Value |
|----------|-------|
| **Base Model** | [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) |
| **Architecture** | Swin Transformer V2 |
| **Parameters** | 86.9M |
| **Input Size** | 256 × 256 |
| **Patch Size** | 4 × 4 |
| **Window Size** | 8 × 8 |
| **Number of Classes** | 2 (benign, malignant) |
| **License** | Apache 2.0 |

**Why SwinV2?**
Swin Transformers use hierarchical feature maps and shifted window attention, making them particularly effective for medical imaging where local texture patterns (echogenicity, microcalcifications, irregular margins) are diagnostically important. SwinV2 improves training stability with a cosine attention mechanism and larger model capacity.

---

## Dataset

### Primary Training Dataset: TN5000 ROI

- **Source:** [Kaggle - ROI Dataset TN5000](https://www.kaggle.com/datasets/sadib2026/roi-dataset-tn5000)
- **Type:** Thyroid ultrasound Region-of-Interest (ROI) patches
- **Total Images:** 4,250

| Split | Images | Benign | Malignant |
|-------|--------|--------|-----------|
| Train (80%) | 2,800 | ~1,600 | ~1,200 |
| Validation (20%) | 700 | ~400 | ~300 |
| Test (held-out) | 750 | ~400 | ~350 |

**Class Distribution:** The dataset is moderately imbalanced with more benign cases. We used **balanced class weights** (benign: 1.75, malignant: 0.70) and **focal loss** (γ=2.0) to prioritize malignant case detection.

### External Validation Dataset

- **Source:** [Johnyquest7/thyroid-cancer-classification-ultrasound-dataset](https://huggingface.co/datasets/Johnyquest7/thyroid-cancer-classification-ultrasound-dataset)
- **Images:** 3,115 total (train + test splits)
- **Purpose:** Independent validation on unseen data from a different source

---

## Training Methodology

### Data Preprocessing

| Transform | Training | Validation/Test |
|-----------|----------|-----------------|
| Resize | RandomResizedCrop(256) | Resize(256) + CenterCrop(256) |
| Horizontal Flip | 50% probability | No |
| Rotation | ±10° | No |
| Color Jitter | brightness=0.2, contrast=0.2 | No |
| Normalization | ImageNet mean/std | ImageNet mean/std |

### Training Configuration

```python
learning_rate: 2e-5
batch_size: 16 (per device)
gradient_accumulation_steps: 2
effective_batch_size: 32
epochs: 30 (early stopping patience: 5)
warmup_ratio: 0.1
optimizer: AdamW (β1=0.9, β2=0.999)
scheduler: Linear with warmup
mixed_precision: bf16
seed: 42
```

### Loss Function: Focal Loss

Standard cross-entropy treats all misclassifications equally. In thyroid screening, **missing a malignant case (false negative) is far more costly** than a false alarm. We used focal loss:

```
FL(pt) = −(1 − pt)^γ · log(pt)
```

With γ=2.0, the model focuses learning on hard-to-classify malignant cases. Class weights further upweight the minority malignant class.

### Model Selection Criterion

The best model was selected by **validation AUC-ROC** (not accuracy), ensuring optimal discrimination between benign and malignant cases across all thresholds.

---

## Results

### Validation Set (700 images)

| Metric | Value |
|--------|-------|
| **Accuracy** | 87.9% |
| **F1-Score** | 91.3% |
| **Sensitivity (Recall)** | 88.8% |
| **Specificity** | 85.5% |
| **PPV (Precision)** | 93.9% |
| **NPV** | 75.3% |
| **AUC-ROC** | **0.940** |

**Confusion Matrix:**
```
              Predicted
           Benign  Malignant
Actual Benign    171       29
      Malignant   56      444
```

### Test Set (750 images — held out)

| Metric | Value |
|--------|-------|
| **Accuracy** | 87.2% |
| **F1-Score** | 90.8% |
| **Sensitivity (Recall)** | 87.5% |
| **Specificity** | 86.5% |
| **PPV (Precision)** | 94.4% |
| **NPV** | 72.6% |
| **AUC-ROC** | **0.937** |

**Confusion Matrix:**
```
              Predicted
           Benign  Malignant
Actual Benign    180       28
      Malignant   68      474
```

### Training Curves

The model converged around epoch 18-22 with validation AUC-ROC peaking at 0.940. Early stopping triggered at epoch 27, loading the best checkpoint.

| Epoch | Train Loss | Val AUC-ROC | Val Sensitivity | Val Specificity |
|-------|-----------|-------------|-----------------|-----------------|
| 1 | 0.356 | 0.713 | 0.714 | 0.590 |
| 5 | 0.229 | 0.912 | 0.940 | 0.715 |
| 10 | 0.187 | 0.922 | 0.858 | 0.835 |
| 15 | 0.148 | 0.934 | 0.928 | 0.805 |
| 18 | 0.125 | **0.939** | 0.846 | 0.885 |
| 22 | 0.143 | **0.940** | 0.888 | 0.855 |

---

## External Validation

To assess generalization, we tested the model on an independent dataset without any fine-tuning:

| Metric | Value |
|--------|-------|
| **Accuracy** | 66.8% |
| **F1-Score** | 44.7% |
| **Sensitivity** | 34.5% |
| **Specificity** | 87.4% |
| **PPV** | 63.5% |
| **NPV** | 67.7% |
| **AUC-ROC** | **0.707** |

**Confusion Matrix (External):**
```
              Predicted
           Benign  Malignant
Actual Benign   1665      240
      Malignant  793      417
```

**Analysis:** The external validation shows a significant performance drop (AUC 0.94 → 0.71), which is expected due to:
1. **Domain shift:** Different ultrasound machines, protocols, and image preprocessing
2. **Different ROI extraction:** The external dataset may use different cropping strategies
3. **Population differences:** Different patient demographics and disease prevalence

This highlights the importance of **domain adaptation** or **fine-tuning on local data** before clinical deployment.

---

## How to Use

### Quick Inference with Pipeline

```python
from transformers import pipeline
from PIL import Image

# Load model
classifier = pipeline("image-classification", model="Johnyquest7/TN5000_model")

# Predict
image = Image.open("thyroid_ultrasound.png").convert("RGB")
results = classifier(image)

# Results format:
# [{'label': 'malignant', 'score': 0.944}, {'label': 'benign', 'score': 0.056}]
```

### Manual Inference

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load model and processor
model = AutoModelForImageClassification.from_pretrained("Johnyquest7/TN5000_model")
processor = AutoImageProcessor.from_pretrained("Johnyquest7/TN5000_model")

# Preprocess
image = Image.open("thyroid_ultrasound.png").convert("RGB")
inputs = processor(image, return_tensors="pt")

# Predict
with torch.no_grad():
    logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]

# Get probabilities
malignant_prob = probs[1].item()
benign_prob = probs[0].item()

print(f"Malignant: {malignant_prob:.1%}")
print(f"Benign: {benign_prob:.1%}")
```

### Gradio Demo

Try the live demo: [🩺 Thyroid Nodule Classifier Demo](https://huggingface.co/spaces/Johnyquest7/tn5000-thyroid-demo)

---

## Limitations & Disclaimers

⚠️ **CRITICAL: This model is for research and educational purposes only.**

1. **Not FDA-approved** for clinical use
2. **External validation showed performance degradation** (AUC 0.71 vs 0.94) — domain shift is a real concern
3. **Trained on ROI patches**, not full ultrasound images — the model expects pre-cropped nodule regions
4. **Class imbalance** in training data may bias predictions
5. **No multi-institutional validation** — performance may vary across hospitals and equipment
6. **Always consult a radiologist or endocrinologist** for diagnosis

**Intended Use Cases:**
- Research on AI-assisted thyroid screening
- Educational tool for medical students
- Prototype for integration into PACS systems (with proper validation)

**Not Intended For:**
- Direct patient diagnosis
- Replacing human radiologists
- Screening without supervision

---

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{tn5000_model,
  title={TN5000 Thyroid Ultrasound Classifier},
  author={Johnyquest7},
  year={2026},
  howpublished={\url{https://huggingface.co/Johnyquest7/TN5000_model}},
  note={Fine-tuned SwinV2-Base for benign vs malignant thyroid nodule classification}
}
```

**Base Model:**
```bibtex
@article{liu2022swinv2,
  title={Swin Transformer V2: Scaling Up Capacity and Resolution},
  author={Liu, Ze and Hu, Han and Lin, Yutong and Yao, Zhuliang and Xie, Zhenda and Wei, Yixuan and Ning, Jia and Cao, Yue and Zhang, Zheng and Dong, Li and Wei, Furu and Guo, Baining},
  journal={International Conference on Computer Vision (ICCV)},
  year={2021}
}
```

**Dataset:**
- TN5000 ROI Dataset: [Kaggle](https://www.kaggle.com/datasets/sadib2026/roi-dataset-tn5000)

---

## Acknowledgments

- Model trained using Hugging Face Transformers and Datasets libraries
- Compute provided by Hugging Face GPU credits
- Base model: Microsoft SwinV2-Base

---

*Generated by ML Intern — an agent for machine learning research and development on the Hugging Face Hub.*