# Whisper-Tiny-ExecuTorch-Metal

Pre-exported ExecuTorch `.pte` files for Whisper Tiny with the Metal backend (Apple GPU), enabling GPU-accelerated speech-to-text on Apple Silicon Macs.

For the XNNPACK (CPU) variant, see Whisper-Tiny-ExecuTorch-XNNPACK.

## Installation

```shell
git clone https://github.com/pytorch/executorch/ ~/executorch
cd ~/executorch && EXECUTORCH_BUILD_KERNELS_TORCHAO=1 TORCHAO_BUILD_EXPERIMENTAL_MPS=1 ./install_executorch.sh
make whisper-metal
```

## Download

```shell
pip install huggingface_hub
huggingface-cli download younghan-meta/Whisper-Tiny-ExecuTorch-Metal --local-dir ~/whisper_tiny_metal
```
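If you prefer to script the download, the same files can be fetched from Python with the `huggingface_hub` API. This is a sketch using `snapshot_download`; the `fetch` helper name is ours, not part of this repo.

```python
# Python alternative to the CLI download above, using the
# huggingface_hub API with the same repo id and target directory.
from pathlib import Path

from huggingface_hub import snapshot_download


def fetch(local_dir: str = "~/whisper_tiny_metal") -> str:
    # Downloads model.pte, whisper_preprocessor.pte, and the tokenizer
    # files into local_dir; returns the resolved download path.
    return snapshot_download(
        repo_id="younghan-meta/Whisper-Tiny-ExecuTorch-Metal",
        local_dir=Path(local_dir).expanduser(),
    )
```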

## Run

```shell
DYLD_LIBRARY_PATH=/usr/lib:$(brew --prefix libomp)/lib \
  cmake-out/examples/models/whisper/whisper_runner \
    --model_path ~/whisper_tiny_metal/model.pte \
    --tokenizer_path ~/whisper_tiny_metal/ \
    --processor_path ~/whisper_tiny_metal/whisper_preprocessor.pte \
    --audio_path ~/whisper_tiny_metal/poem.wav \
    --temperature 0
```
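Before pointing the runner at your own recordings, it can help to verify the audio format first. The sketch below is a hedged example (not part of this repo) that uses Python's standard `wave` module to check a file is 16 kHz mono, which is Whisper's standard input format; resample anything else before running.

```python
# Sanity-check a WAV header before transcription. The 16 kHz / mono
# expectation is an assumption based on Whisper's standard front end,
# not a flag documented by this repo's runner.
import wave


def check_wav(path, expect_rate=16_000, expect_channels=1):
    """Return (ok, rate, channels) for the WAV file at `path`."""
    with wave.open(path, "rb") as w:
        rate, channels = w.getframerate(), w.getnchannels()
    return rate == expect_rate and channels == expect_channels, rate, channels
```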

## Export Commands

```shell
pip install optimum-executorch

# Model (fp32 -- bfloat16 currently hits a Metal shader compilation issue)
optimum-cli export executorch \
    --model openai/whisper-tiny \
    --task automatic-speech-recognition \
    --recipe metal \
    --output_dir ./whisper_tiny_metal

# Preprocessor
python -m executorch.extension.audio.mel_spectrogram \
    --feature_size 80 --stack_output --max_audio_len 300 \
    --output_file ./whisper_tiny_metal/whisper_preprocessor.pte
```
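For intuition about what the exported preprocessor computes, here is a self-contained NumPy sketch of a log-mel spectrogram with 80 mel bins (matching `--feature_size 80`). The remaining parameters (16 kHz sample rate, 400-sample window, 160-sample hop) are assumptions based on Whisper's published defaults, not values read from this repo.

```python
# Sketch of the preprocessor's computation: log-mel spectrogram,
# 80 mel bins. Window/hop/sample-rate values are assumed Whisper
# defaults, for illustration only.
import numpy as np

SR, N_FFT, HOP, N_MELS = 16_000, 400, 160, 80


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def mel_filterbank(sr=SR, n_fft=N_FFT, n_mels=N_MELS):
    # Triangular filters spaced evenly on the mel scale.
    fft_freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2))
    fb = np.zeros((n_mels, len(fft_freqs)))
    for i in range(n_mels):
        lo, ctr, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        up = (fft_freqs - lo) / (ctr - lo)
        down = (hi - fft_freqs) / (hi - ctr)
        fb[i] = np.maximum(0.0, np.minimum(up, down))
    return fb


def log_mel(audio):
    # Frame, window, FFT, project onto mel filters, then log-compress.
    window = np.hanning(N_FFT)
    n_frames = 1 + (len(audio) - N_FFT) // HOP
    frames = np.stack(
        [audio[i * HOP : i * HOP + N_FFT] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = mel_filterbank() @ power.T  # shape (80, n_frames)
    log_spec = np.log10(np.maximum(mel, 1e-10))
    # Clamp dynamic range to 8 decades below the peak, as Whisper does.
    return np.maximum(log_spec, log_spec.max() - 8.0)


one_sec = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)  # 1 s, 440 Hz tone
print(log_mel(one_sec).shape)  # (80, 98)
```

The exported `.pte` additionally pads/stacks output for a fixed `--max_audio_len`; this sketch covers only the core mel computation.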
