# lilfugu-8bit

8-bit quantized version of lilfugu. See the main model card for details.

2.8 GB, the smallest variant for Apple Silicon.

## Usage

```bash
pip install -U mlx-audio
```

```python
from mlx_audio.stt import load

# Load the 8-bit quantized weights from the Hugging Face Hub
model = load("holotherapper/lilfugu-8bit")

# Transcribe a local audio file
result = model.generate("audio.wav", language="Japanese")
print(result.text)
```
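
For a directory of recordings, the same API extends to batch transcription. This is a minimal sketch, assuming `load` and `generate` behave as in the snippet above; the `recordings/` directory and the per-file `.txt` outputs are placeholders:

```python
from pathlib import Path

from mlx_audio.stt import load

model = load("holotherapper/lilfugu-8bit")

# "recordings/" is a placeholder; point this at your own audio directory.
for audio_path in sorted(Path("recordings").glob("*.wav")):
    result = model.generate(str(audio_path), language="Japanese")
    # Save one transcript per file alongside the source audio.
    audio_path.with_suffix(".txt").write_text(result.text, encoding="utf-8")
    print(f"{audio_path.name}: {result.text}")
```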