# whisper-small-it-multi

Fine-tuned openai/whisper-small (244M parameters) for Italian automatic speech recognition (ASR) on multiple datasets.

Author: Ettore Di Giacinto

Brought to you by the LocalAI team. This model can be used directly with LocalAI.
## Usage with LocalAI

This model is ready to use with LocalAI via the `whisperx` backend. Save the following as `whisperx-small-it-multi.yaml` in your LocalAI models directory:
```yaml
name: whisperx-small-it-multi
backend: whisperx
known_usecases:
  - transcript
parameters:
  model: LocalAI-io/whisper-small-it-multi-ct2-int8
  language: it
```
Then transcribe audio via the OpenAI-compatible endpoint:

```bash
curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.mp3" \
  -F model="whisperx-small-it-multi"
```
## Results

Evaluated on the combined test set (Common Voice + MLS + VoxPopuli):
| Step | WER |
|---|---|
| 1000 | 21.51% |
| 3000 | 18.30% |
| 5000 | 17.32% |
| 7000 | 16.21% |
| 10000 | 15.63% |
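To put the table in perspective, WER falls by roughly 27% relative between step 1,000 and step 10,000. A small sketch of that arithmetic, using the values from the table above:

```python
# Word error rates (%) by training step, taken from the results table above.
wer_by_step = {1000: 21.51, 3000: 18.30, 5000: 17.32, 7000: 16.21, 10000: 15.63}

first, last = wer_by_step[1000], wer_by_step[10000]
relative_reduction = (first - last) / first * 100
print(f"Relative WER reduction: {relative_reduction:.1f}%")  # → 27.3%
```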
## Training Details
- Base model: openai/whisper-small (244M parameters)
- Datasets: Common Voice 25.0 Italian (173k) + MLS Italian (60k) + VoxPopuli Italian (23k) = 255k train samples
- Steps: 10,000
- Precision: bf16 on NVIDIA GB10
## Usage

### Transformers

```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="LocalAI-io/whisper-small-it-multi")
result = pipe("audio.mp3", generate_kwargs={"language": "it", "task": "transcribe"})
print(result["text"])
```
### CTranslate2 / faster-whisper

For optimized CPU inference, use the INT8 CTranslate2 conversion: LocalAI-io/whisper-small-it-multi-ct2-int8
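A minimal faster-whisper sketch for the INT8 conversion (assumes the `faster-whisper` package is installed; the repository name comes from the links below, while the audio path and decoding options are illustrative):

```python
# Sketch: transcribe Italian audio on CPU with the INT8 CTranslate2 model.
# MODEL_REPO is the conversion linked in this card; "audio.mp3" is a placeholder.
MODEL_REPO = "LocalAI-io/whisper-small-it-multi-ct2-int8"

def transcribe(audio_path: str) -> str:
    # Imported lazily so the module loads even without faster-whisper installed.
    from faster_whisper import WhisperModel

    model = WhisperModel(MODEL_REPO, device="cpu", compute_type="int8")
    segments, _info = model.transcribe(audio_path, language="it", task="transcribe")
    return " ".join(seg.text.strip() for seg in segments)

if __name__ == "__main__":
    print(transcribe("audio.mp3"))
```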
## Links
- CTranslate2 INT8: LocalAI-io/whisper-small-it-multi-ct2-int8
- Project: github.com/localai-org/italian-whisper
- LocalAI: github.com/mudler/LocalAI