Automatic Speech Recognition
Transformers
PyTorch
TensorBoard
whisper
Generated from Trainer
Eval Results (legacy)
Instructions to use bgstud/whisper-small-libirClean-vs-commonNative-en with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bgstud/whisper-small-libirClean-vs-commonNative-en with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="bgstud/whisper-small-libirClean-vs-commonNative-en")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("bgstud/whisper-small-libirClean-vs-commonNative-en")
model = AutoModelForSpeechSeq2Seq.from_pretrained("bgstud/whisper-small-libirClean-vs-commonNative-en")
```
- Notebooks
- Google Colab
- Kaggle
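Whisper checkpoints expect 16 kHz mono audio, and the Transformers ASR pipeline accepts a `{"raw": ..., "sampling_rate": ...}` dict as input. As a minimal sketch of preparing audio at a different rate, the naive linear-interpolation resampler below is an assumption for illustration (a real pipeline would typically use `librosa` or `torchaudio` for higher-quality resampling):

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Naive linear-interpolation resampler; adequate for a quick demo."""
    n_target = int(round(len(audio) * target_sr / orig_sr))
    old_t = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, 1.0, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio).astype(np.float32)

# One second of 44.1 kHz audio becomes 16 000 samples at 16 kHz
mono = np.zeros(44_100, dtype=np.float32)
audio_16k = resample_linear(mono, orig_sr=44_100)
# pipe({"raw": audio_16k, "sampling_rate": 16_000})  # feed into the pipeline loaded above
```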
End of training
runs/Dec01_05-08-06_f7506233863c/events.out.tfevents.1669871343.f7506233863c.77.0 CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e108706bdb86910f3f33e2bbf10573a2517afe666de505d69a83833f87ea88e4
+size 6992
```
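The changed file above is a Git LFS pointer, not the TensorBoard event file itself: the repository stores a small three-line text stub (`version`, `oid`, `size`) and LFS fetches the real blob on checkout. A minimal sketch of parsing such a pointer, assuming the key/value-per-line format shown in the diff:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e108706bdb86910f3f33e2bbf10573a2517afe666de505d69a83833f87ea88e4
size 6992"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 6992
```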