# lilfugu-lora

LoRA adapter for lilfugu. Apply to Qwen3-ASR-1.7B-bf16 with an adjustable scale.
## Usage

```python
from mlx_tune.stt import FastSTTModel
from mlx_lm.tuner.lora import LoRALinear

model, _ = FastSTTModel.from_pretrained("mlx-community/Qwen3-ASR-1.7B-bf16")
model.load_adapter("holotherapper/lilfugu-lora")

# Adjust scale (0.0-1.0). Higher = stronger term conversion.
for _, module in model.model.named_modules():
    if isinstance(module, LoRALinear):
        module.scale = 1.0

text = model.transcribe("audio.wav", language="ja")
```
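What `scale` controls: a LoRA layer adds a low-rank delta on top of the frozen base projection, and the scale factor blends that delta in. A minimal pure-Python sketch of the computation (illustrative only, with hypothetical toy weights; not the actual mlx-lm implementation):

```python
# Sketch of LoRA scaling: y = W @ x + scale * (B @ (A @ x))
# At scale 0.0 the adapter is inactive; at 1.0 its full delta is applied.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, scale):
    base = matvec(W, x)              # frozen base-layer projection
    delta = matvec(B, matvec(A, x))  # low-rank update: B @ (A @ x)
    return [b + scale * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 base weight (identity, for illustration)
A = [[1.0, 1.0]]              # 1x2 down-projection (rank 1)
B = [[0.5], [0.5]]            # 2x1 up-projection
x = [2.0, 4.0]

print(lora_forward(W, A, B, x, 0.0))  # [2.0, 4.0] -- base output only
print(lora_forward(W, A, B, x, 1.0))  # [5.0, 7.0] -- full adapter strength
```

Intermediate values (e.g. `0.5`) interpolate linearly between the base model's behavior and the adapter's, which is why lowering the scale softens the term conversion.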
## Model tree

Base model: Qwen/Qwen3-ASR-1.7B