CAT-Translate-1.4b MLX q4

This repository provides MLX-quantized weights (q4, 4-bit) converted from the original model, cyberagent/CAT-Translate-1.4b.
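A minimal usage sketch with the `mlx-lm` tooling (assumes an Apple Silicon Mac; the prompt wording is illustrative only, since the model's expected prompt template is not documented here):

```shell
# Install the MLX language-model tooling (runs on Apple Silicon).
pip install mlx-lm

# Generate with the q4 weights from this repository.
# NOTE: the prompt below is a hypothetical example; adjust it to the
# prompt format expected by cyberagent/CAT-Translate-1.4b.
mlx_lm.generate \
  --model hotchpotch/CAT-Translate-1.4b-mlx-q4 \
  --prompt "Translate the following Japanese text into English: 猫はかわいい。"
```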
Model size: 0.2B params
Tensor types: BF16, U32
Model tree for hotchpotch/CAT-Translate-1.4b-mlx-q4:
- Base model: sbintuitions/sarashina2.2-1b
- Finetuned as: cyberagent/CAT-Translate-1.4b
- Quantized to q4 in this repository