# Whisper Small Pilotgpt Unified All Raw Nopack 0.5S Clean 7283
Fine-tuned Whisper model based on openai/whisper-small.
## Training Results
| Metric | Base Model | Fine-tuned |
|---|---|---|
| WER | 53.69% | 32.92% |
Improvement: 20.77 percentage points absolute WER reduction (lower is better)
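WER (word error rate) is the word-level Levenshtein distance between the reference transcript and the hypothesis, divided by the number of reference words. A minimal pure-Python sketch of the metric (the example sentences are illustrative, not from the training set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (free if words match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# one substitution over four reference words -> 25.0% WER
print(round(wer("set flaps to ten", "set flap to ten") * 100, 2))
```

Libraries such as `jiwer` compute the same statistic (plus normalization options) and are what evaluation pipelines typically use in practice.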
## Training Details
- Base Model: openai/whisper-small
- Training Dataset: Trelis/pilotgpt-unified-all-raw-nopack-0.5s-clean
- Train Loss: 0.5921
- Training Time: 16.2 minutes
## Inference
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Trelis/whisper-small-pilotgpt-unified-all-raw-nopack-0.5s-clean-7283",
)
result = asr("path/to/audio.wav")
print(result["text"])
```
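Whisper models expect 16 kHz mono audio; the pipeline resamples files it decodes itself, but if you pass a raw array you must resample first. A minimal sketch using linear interpolation with NumPy (in practice `librosa.resample` or `torchaudio` would be the usual choice; the 16 000 Hz target is Whisper's fixed input rate):

```python
import numpy as np

def resample_linear(samples: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Linearly interpolate a mono signal to the target sampling rate."""
    duration = len(samples) / orig_sr
    n_out = int(round(duration * target_sr))
    t_out = np.linspace(0.0, duration, num=n_out, endpoint=False)
    t_in = np.arange(len(samples)) / orig_sr
    return np.interp(t_out, t_in, samples)

# 1 second of a 440 Hz tone at 44.1 kHz, downsampled to 16 kHz
tone = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
audio_16k = resample_linear(tone, 44_100)
print(audio_16k.shape)  # (16000,)
```

The resampled array can then be fed to the pipeline as `asr({"raw": audio_16k, "sampling_rate": 16_000})`.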
## Training Logs
Full training logs are available in `training_log.txt`.
Fine-tuned using Trelis Studio