# mlx-community/granite-4.0-1b-speech-mxfp4

This model was converted to MLX format from `ibm-granite/granite-4.0-1b-speech` using mlx-audio version 0.4.0.

Refer to the original model card for more details on the model.

## Use with mlx-audio

```bash
pip install -U mlx-audio
```

CLI Example:

```bash
python -m mlx_audio.stt.generate --model mlx-community/granite-4.0-1b-speech-mxfp4 --audio "audio.wav"
```

Python Example:

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

model = load_model("mlx-community/granite-4.0-1b-speech-mxfp4")
transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)
print(transcription.text)
```
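To transcribe several files, the calls above can be wrapped in a simple loop. The sketch below is an assumption-laden example, not part of mlx-audio: it reuses the `load_model`/`generate_transcription` API exactly as shown in the Python example, and the `output_path_for` helper is hypothetical.

```python
from pathlib import Path

def output_path_for(audio_path: str, out_dir: str = "transcripts") -> str:
    """Map an input audio file (e.g. clips/a.wav) to a .txt path in out_dir.
    Hypothetical helper, not part of mlx-audio."""
    return str(Path(out_dir) / Path(audio_path).with_suffix(".txt").name)

# Assumed usage with the API from the example above:
# from mlx_audio.stt.utils import load_model
# from mlx_audio.stt.generate import generate_transcription
#
# model = load_model("mlx-community/granite-4.0-1b-speech-mxfp4")
# for wav in ["interview.wav", "lecture.wav"]:
#     generate_transcription(
#         model=model,
#         audio_path=wav,
#         output_path=output_path_for(wav),
#         format="txt",
#     )
```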