# whisper-large-v3-it-multi-ct2-int8

CTranslate2 INT8-quantized version of LocalAI-io/whisper-large-v3-it-multi for fast CPU inference.

**Author:** Ettore Di Giacinto
## Usage with LocalAI

This model is ready to use with LocalAI via the `whisperx` backend. Save the following as `whisperx-large-v3-it-multi.yaml` in your LocalAI models directory:
```yaml
name: whisperx-large-v3-it-multi
backend: whisperx
known_usecases:
  - transcript
parameters:
  model: LocalAI-io/whisper-large-v3-it-multi-ct2-int8
  language: it
```
Then transcribe audio via the OpenAI-compatible endpoint:

```bash
curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.mp3" \
  -F model="whisperx-large-v3-it-multi"
```
## Usage with faster-whisper

The quantized model can also be loaded directly with the `faster_whisper` library:

```python
from faster_whisper import WhisperModel

model = WhisperModel(
    "LocalAI-io/whisper-large-v3-it-multi-ct2-int8",
    device="cpu",
    compute_type="int8",
)
segments, info = model.transcribe("audio.mp3", language="it")
```
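`transcribe` returns the segments as a lazy generator of objects carrying `start`, `end`, and `text` attributes. As a minimal sketch, a small helper (hypothetical, not part of faster-whisper) can render each segment as a timestamped line; the sample text here is illustrative:

```python
def format_segment(start: float, end: float, text: str) -> str:
    """Render one transcription segment as a timestamped line."""
    return f"[{start:6.2f}s -> {end:6.2f}s] {text.strip()}"

# With a loaded model you would drain the generator:
# for seg in segments:
#     print(format_segment(seg.start, seg.end, seg.text))

print(format_segment(0.0, 4.2, " Buongiorno a tutti. "))
# → [  0.00s ->   4.20s] Buongiorno a tutti.
```

Note that because the generator is lazy, transcription only actually runs as the segments are iterated.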
## Model tree

- Base model: openai/whisper-large-v3