Instructions to use bgstud/whisper-tiny-libirClean-vs-commonNative with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bgstud/whisper-tiny-libirClean-vs-commonNative with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="bgstud/whisper-tiny-libirClean-vs-commonNative")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("bgstud/whisper-tiny-libirClean-vs-commonNative")
model = AutoModelForSpeechSeq2Seq.from_pretrained("bgstud/whisper-tiny-libirClean-vs-commonNative")
```
- Notebooks
- Google Colab
- Kaggle
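The loading snippets above stop at constructing the pipeline. As a minimal sketch of actually running it, the pipeline can be called on a raw NumPy waveform (assumed here to already be mono float32 at the model's 16 kHz sampling rate; the one-second silent buffer is a stand-in for real audio):

```python
# Hypothetical usage sketch: transcribe one second of audio with the pipeline.
# The silent buffer below is a placeholder for a real recording.
import numpy as np
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="bgstud/whisper-tiny-libirClean-vs-commonNative")

audio = np.zeros(16000, dtype=np.float32)  # 1 s of silence at 16 kHz
result = pipe(audio)  # returns a dict with a "text" field
print(result["text"])
```

For files on disk, the pipeline also accepts a path (e.g. `pipe("sample.wav")`), resampling as needed.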
Training in progress, step 750
pytorch_model.bin
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2e1683e9592ae54e925042728edeb6022ccbabe3ce233b0cdaa676fd62cd8de9
 size 151097331
```
runs/Dec06_10-14-56_7bf88051a669/events.out.tfevents.1670322041.7bf88051a669.74.0
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:fa24a65f2b10fd59c28dab836b8bca0efc35b2d673e6b713b0ee28e6621b3412
+size 11406
```
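Both changed files are Git LFS pointer files rather than the binary payloads themselves: each pointer holds a spec `version` line, the blob's `oid` (a sha256 digest), and its `size` in bytes. A minimal sketch of reading those fields back out of a pointer (the `parse_lfs_pointer` helper is illustrative, not part of any library; the sample text reuses the new `pytorch_model.bin` pointer from the diff above):

```python
# Illustrative helper: parse a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2e1683e9592ae54e925042728edeb6022ccbabe3ce233b0cdaa676fd62cd8de9
size 151097331
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])   # the sha256-prefixed digest
print(info["size"])  # payload size in bytes, as a string
```

This is why the diff for a ~151 MB checkpoint is only three lines: git tracks the small pointer, and the LFS server stores the blob addressed by the digest.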