# whisper-small-pilotgpt-unified-all-data-lowercase-data-prep-6772

A fine-tuned Whisper model based on `openai/whisper-small`.
## Training Results
| Metric | Base Model | Fine-tuned |
|---|---|---|
| WER | 53.69% | 27.54% |
**Improvement:** 26.15 percentage points absolute WER reduction (lower is better).
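WER is the word-level edit distance between a reference transcript and the model's hypothesis, divided by the number of reference words. A minimal sketch of the metric in plain Python; the example utterance is an illustrative, made-up aviation-style phrase, not taken from the training dataset:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("for") and one substitution ("two" -> "to") over 6 reference words:
print(wer("cleared for takeoff runway two seven",
          "cleared takeoff runway to seven"))  # 2/6 ≈ 0.333
```

In practice a library such as `jiwer` or `evaluate` is typically used instead of a hand-rolled implementation, but the definition is the same.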
## Training Details

- Base Model: `openai/whisper-small`
- Training Dataset: `Trelis/pilotgpt-unified-all-data-lowercase-data-prep`
- Train Loss: 0.6250
- Training Time: 5.2 minutes
## Inference

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Trelis/whisper-small-pilotgpt-unified-all-data-lowercase-data-prep-6772",
)
result = asr("path/to/audio.wav")
print(result["text"])
```
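Since the fine-tuning data was lowercased (per the dataset name), scoring the model's output against references is most meaningful after applying the same kind of normalization. The exact preprocessing rules used for the dataset are not documented in this card, so the sketch below is an assumption, not the actual data-prep pipeline:

```python
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation (keeping apostrophes and hyphens)
    before comparing transcripts, e.g. for WER scoring.
    NOTE: assumed normalization; the dataset's actual prep rules may differ."""
    text = text.lower()
    text = re.sub(r"[^\w\s'-]", "", text)  # drop punctuation except ' and -
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Cleared for takeoff, Runway Two-Seven!"))
# -> cleared for takeoff runway two-seven
```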
## Training Logs

Full training logs are available in `training_log.txt`.
Fine-tuned using Trelis Studio