# lilfugu-8bit
An 8-bit quantized version of lilfugu. See the main model card for details.

At 2.8 GB, this is the smallest variant, intended for Apple Silicon.
## Usage

```bash
pip install -U mlx-audio
```

```python
from mlx_audio.stt import load

# Load the quantized model from the Hugging Face Hub
model = load("holotherapper/lilfugu-8bit")

# Transcribe a local audio file
result = model.generate("audio.wav", language="Japanese")
print(result.text)
```
## Model tree

Quantized from the base model Qwen/Qwen3-ASR-1.7B.