mlx & ggml
#2
opened by cstr
ICYMI, the MLX model: mlx-community/whisper-large-v3-turbo-german-f16
(I made this with a custom script for converting safetensors Whisper models. It is in float16 and works well; a 4-bit quantized version is also available.)
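For anyone who wants to try the MLX model, a minimal sketch using the mlx-whisper package. This assumes Apple Silicon and the package's standard CLI; the audio filename is a placeholder:

```shell
# Install the package (Apple Silicon / macOS only)
pip install mlx-whisper

# Transcribe a German audio file; audio.wav is a placeholder path.
# The model is pulled from the Hugging Face repo named in this post.
mlx_whisper audio.wav --model mlx-community/whisper-large-v3-turbo-german-f16
```

The 4-bit quantized variant should load the same way by swapping in its repo name.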
Also, here is the GGML version for use with whisper.cpp and similar tools.
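A sketch of using the GGML file with whisper.cpp (the build steps and model filename are assumptions, not from the post; `-l de` selects German):

```shell
# Build whisper.cpp (assumed standard build)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make

# Transcribe; ggml-model.bin is a placeholder for the file from this repo
./main -m /path/to/ggml-model.bin -l de -f samples/audio.wav
```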
cstr changed discussion title from mlx to mlx & ggml