EraX-Translator-V1.0-mlx-bf16
MLX-VLM bfloat16 conversion of erax-ai/EraX-Translator-V1.0 for Apple Silicon.
Notes
- Converted locally with `mlx-vlm` 0.4.0.
- Source model architecture: `Gemma3ForConditionalGeneration`.
- This checkpoint is intended for translation tasks and was tested here on Vietnamese translation.
- Local smoke test passed with `mlx-vlm` text generation.
- Local evaluation was run on 5 Vietnamese translation cases covering English, German, modern Chinese, and Classical Chinese.
Conversion
```shell
python3 -m mlx_vlm convert \
  --hf-path erax-ai/EraX-Translator-V1.0 \
  --mlx-path /path/to/EraX-Translator-V1.0-mlx-bf16 \
  --dtype bfloat16
```
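After converting, it can help to sanity-check the emitted `config.json` before loading the model. A minimal sketch, assuming the converted config keeps the standard Hugging Face `architectures` and `torch_dtype` keys (the checkpoint path below is a placeholder):

```python
import json
from pathlib import Path

def check_mlx_config(config: dict) -> list[str]:
    """Return a list of problems found in a converted model config."""
    problems = []
    # The converted checkpoint should keep the Gemma3 architecture name.
    archs = config.get("architectures", [])
    if "Gemma3ForConditionalGeneration" not in archs:
        problems.append(f"unexpected architectures: {archs}")
    # A bfloat16 conversion should be reflected in the stored dtype, if present.
    dtype = config.get("torch_dtype")
    if dtype is not None and dtype != "bfloat16":
        problems.append(f"unexpected dtype: {dtype}")
    return problems

# Usage against a local checkpoint (path is a placeholder):
# cfg = json.loads(Path("/path/to/EraX-Translator-V1.0-mlx-bf16/config.json").read_text())
# print(check_mlx_config(cfg) or "config looks OK")
```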
Quick Start
```shell
python3 - <<'PY'
from mlx_vlm import load
from mlx_vlm.generate import generate

model, processor = load('/path/to/EraX-Translator-V1.0-mlx-bf16')
messages = [
    {"role": "system", "content": "Bạn là trợ lý dịch thuật nhiều ngôn ngữ. Chỉ trả về bản dịch chính xác, không giải thích thêm."},
    {"role": "user", "content": "The weather is nice today, but the traffic is terrible.\n\nDịch sang tiếng Việt."},
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
result = generate(model, processor, prompt, verbose=False, max_tokens=128, temperature=0.2, top_p=0.95, top_k=64)
print(result.text)
PY
```
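The message layout above can be wrapped in a small helper so the same system prompt is reused across inputs. A sketch, pure Python with no `mlx-vlm` dependency; the `target_lang` parameter and `build_messages` name are illustrative, not part of the model's API:

```python
# System prompt from the Quick Start example:
# "You are a multilingual translation assistant. Return only the exact
#  translation, with no extra explanation." (in Vietnamese)
SYSTEM_PROMPT = (
    "Bạn là trợ lý dịch thuật nhiều ngôn ngữ. "
    "Chỉ trả về bản dịch chính xác, không giải thích thêm."
)

def build_messages(text: str, target_lang: str = "tiếng Việt") -> list[dict]:
    """Build the chat messages for a single translation request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        # "Dịch sang ..." = "Translate into ..."
        {"role": "user", "content": f"{text}\n\nDịch sang {target_lang}."},
    ]

# messages = build_messages("The weather is nice today.")
# prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```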
Validation Summary
- Case count: 5
- Avg generation speed: 37.68 tok/s
- Avg wall time: 4.62 s
- Max peak memory: 10.42 GB
- Avg similarity to reference set: 0.2828
- Preamble leakage: 0 cases
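Summary numbers like the ones above can be aggregated from per-case logs with a few lines of stdlib Python. A sketch with invented records; the field names and values here are illustrative, not the actual eval harness or data:

```python
from statistics import mean

# Illustrative per-case records; values are made up, not the real eval data.
cases = [
    {"tok_per_s": 36.1, "wall_s": 4.9, "similarity": 0.31, "preamble_leak": False},
    {"tok_per_s": 39.2, "wall_s": 4.3, "similarity": 0.25, "preamble_leak": False},
    {"tok_per_s": 37.7, "wall_s": 4.6, "similarity": 0.29, "preamble_leak": False},
]

def summarize(cases: list[dict]) -> dict:
    """Aggregate per-case metrics into card-style summary numbers."""
    return {
        "case_count": len(cases),
        "avg_tok_per_s": round(mean(c["tok_per_s"] for c in cases), 2),
        "avg_wall_s": round(mean(c["wall_s"] for c in cases), 2),
        "avg_similarity": round(mean(c["similarity"] for c in cases), 4),
        "preamble_leaks": sum(c["preamble_leak"] for c in cases),
    }

print(summarize(cases))
```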
Translation Quality
Observed locally:
- Strongest on everyday English and Chinese to Vietnamese translation.
- More stable than the 8-bit variant on German prose and Classical Chinese Buddhist texts.
- Better choice when name fidelity and wording stability matter more than speed.
Caution
This model is a translation-tuned checkpoint. It is not intended as a general-purpose coding or math model, and difficult literary or historical material may still require human review.
- Model size: 5B params
- Tensor type: BF16
Model tree for vanch007/EraX-Translator-V1.0-mlx-bf16
- Base model: google/gemma-3-4b-pt
- Finetuned: google/gemma-3-4b-it
- Finetuned: erax-ai/EraX-Translator-V1.0