whisper-large-v3-it-ct2-int8

CTranslate2 INT8 quantized version of LocalAI-io/whisper-large-v3-it for fast CPU inference.

Author: Ettore Di Giacinto
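INT8 quantization stores each weight tensor as 8-bit integers plus a floating-point scale, roughly quartering memory use versus FP32 and enabling faster CPU kernels. A minimal sketch of symmetric per-tensor INT8 quantization (illustrative only; CTranslate2's actual scheme differs in details):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map floats symmetrically onto [-127, 127] with one per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# round-off error per weight is at most half a quantization step (s / 2)
```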

Usage with LocalAI

This model is ready to use with LocalAI via the whisperx backend.

Save the following as whisperx-large-v3-it.yaml in your LocalAI models directory:

name: whisperx-large-v3-it
backend: whisperx
known_usecases:
  - transcript
parameters:
  model: LocalAI-io/whisper-large-v3-it-ct2-int8
  language: it

Then transcribe audio via the OpenAI-compatible endpoint:

curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.mp3" \
  -F model="whisperx-large-v3-it"
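On success, the endpoint returns the transcription as JSON in the OpenAI-compatible shape (exact extra fields may vary by LocalAI version):

{"text": "..."}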

Usage with faster-whisper

The CTranslate2 model can also be loaded directly with faster-whisper:

from faster_whisper import WhisperModel

model = WhisperModel("LocalAI-io/whisper-large-v3-it-ct2-int8", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.mp3", language="it")

# transcribe() returns a generator; iterate to actually run the transcription
for segment in segments:
    print(segment.text)
