whisper-medium-it-ct2-int8

CTranslate2 INT8 quantized version of LocalAI-io/whisper-medium-it for fast CPU inference.

Author: Ettore Di Giacinto

Brought to you by the LocalAI team.

Usage with LocalAI

This model is ready to use with LocalAI via the whisperx backend.

Save the following as whisperx-medium-it.yaml in your LocalAI models directory:

name: whisperx-medium-it
backend: whisperx
known_usecases:
  - transcript
parameters:
  model: LocalAI-io/whisper-medium-it-ct2-int8
  language: it

Then transcribe audio via the OpenAI-compatible endpoint:

curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.mp3" \
  -F model="whisperx-medium-it"

Usage with faster-whisper

from faster_whisper import WhisperModel

# Load the INT8-quantized model on CPU
model = WhisperModel("LocalAI-io/whisper-medium-it-ct2-int8", device="cpu", compute_type="int8")

# segments is a generator; iterating over it drives the transcription
segments, info = model.transcribe("audio.mp3", language="it")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
