Instructions for using bgstud/whisper-tiny-libirClean-vs-commonNative with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bgstud/whisper-tiny-libirClean-vs-commonNative with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="bgstud/whisper-tiny-libirClean-vs-commonNative")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("bgstud/whisper-tiny-libirClean-vs-commonNative")
model = AutoModelForSpeechSeq2Seq.from_pretrained("bgstud/whisper-tiny-libirClean-vs-commonNative")
```
- Notebooks
- Google Colab
- Kaggle
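The pipeline above accepts either a path to an audio file or a raw NumPy array; Whisper models expect 16 kHz mono float audio. Below is a minimal sketch of preparing arbitrary int16 PCM for the pipeline. The helper name `to_whisper_input` and the linear-interpolation resampling are illustrative assumptions, not part of this model's API; in practice librosa or torchaudio are the usual resampling choices.

```python
import numpy as np

def to_whisper_input(pcm: np.ndarray, sample_rate: int) -> np.ndarray:
    """Convert int16 PCM (mono or stereo) to 16 kHz mono float32 in [-1, 1]."""
    audio = pcm.astype(np.float32) / 32768.0
    if audio.ndim == 2:           # stereo -> mono by averaging channels
        audio = audio.mean(axis=1)
    if sample_rate != 16000:      # naive linear-interpolation resample (illustrative)
        n_out = int(len(audio) * 16000 / sample_rate)
        x_old = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
        x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
        audio = np.interp(x_new, x_old, audio).astype(np.float32)
    return audio

# Example: one second of 44.1 kHz stereo silence
pcm = np.zeros((44100, 2), dtype=np.int16)
audio = to_whisper_input(pcm, 44100)
print(audio.shape, audio.dtype)   # (16000,) float32

# The prepared array can then be passed straight to the pipeline:
# pipe = pipeline("automatic-speech-recognition",
#                 model="bgstud/whisper-tiny-libirClean-vs-commonNative")
# pipe(audio)
```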
Training in progress, step 500
pytorch_model.bin (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4ef68474661966fe96ca848fd528a755f3bd56623a7271e4fa46b38b8cd1952c
 size 151097331
```
runs/Dec06_19-18-58_n-62-12-20/events.out.tfevents.1670350963.n-62-12-20.24551.0 (CHANGED)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:bde597087dc5789f13c70acc3fb500942e4dd1a2ae49d9b7e5c991ea4cb984fd
+size 9029
```