# Gemma MLX Studio

GGUF exports of the `best_4b_v2_stronger` run, produced with Gemma MLX Studio.
## Files

- `best_4b_v2_stronger-Q4_K_M.gguf`: recommended for practical LM Studio / llama.cpp use
- `best_4b_v2_stronger-f16.gguf`: high-precision archive / conversion source
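As a minimal sketch of running the recommended Q4_K_M file locally, the helper below assembles a llama.cpp `llama-cli` command line using the sampling settings from this card. The helper name and the model directory are illustrative assumptions; flag names follow current llama.cpp CLI conventions and may differ in older builds.

```python
from pathlib import Path

# Hypothetical helper: build a llama.cpp `llama-cli` invocation for the
# Q4_K_M export, using this card's suggested sampling settings.
def llama_cli_args(prompt: str, model_dir: str = ".") -> list[str]:
    model = Path(model_dir) / "best_4b_v2_stronger-Q4_K_M.gguf"
    return [
        "llama-cli",
        "-m", str(model),
        "--temp", "0.2",
        "--top-p", "0.9",
        "--top-k", "40",
        "--repeat-penalty", "1.1",
        "-p", prompt,
    ]

args = llama_cli_args("안녕하세요")
print(" ".join(args))
```

Run the printed command from the directory containing the GGUF file, or pass `model_dir` explicitly.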
## Notes

- Base family: Gemma 4 E4B IT (`google/gemma-4-E4B-it`)
- Fine-tuning workflow: local MLX + LoRA, then fused export
- Primary language: Korean
## Suggested LM Studio settings

- Temperature: 0.2
- Top P: 0.9
- Top K: 40
- Repetition Penalty: 1.1
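The settings above can also be used when calling the model through LM Studio's OpenAI-compatible local server. The sketch below only builds the request body (the model identifier string and your server URL are assumptions about your local setup; `top_k` and `repeat_penalty` are server-side extensions, not part of the core OpenAI schema).

```python
import json

# Suggested sampling settings from this card, expressed as an
# OpenAI-compatible chat-completions payload for LM Studio's local server.
SETTINGS = {
    "temperature": 0.2,
    "top_p": 0.9,
    "top_k": 40,            # LM Studio extension field (assumption)
    "repeat_penalty": 1.1,  # LM Studio extension field (assumption)
}

def build_request(user_prompt: str) -> dict:
    return {
        # Hypothetical model id; use the name shown in your LM Studio UI.
        "model": "best_4b_v2_stronger-Q4_K_M",
        "messages": [{"role": "user", "content": user_prompt}],
        **SETTINGS,
    }

body = build_request("모델 테스트")
print(json.dumps(body, ensure_ascii=False, indent=2))
```

POST this body to your server's `/v1/chat/completions` endpoint once the model is loaded.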
## Prompt style

This model tends to work best with:

- short, structured instructions
- explicit output-length constraints
- explicit instructions to preserve proper nouns
Example (translated from Korean; the model's primary language is Korean):

> Explain the difference between First Fire Horizon and Nightglass Relay in three sentences.
> Link First Fire Horizon only to the tutorial, basic supply, and the opening of the first route,
> and link Nightglass Relay only to signals, records, and relaying.
> Do not translate the proper nouns.
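The three recommendations above can be bundled into a small prompt template. This is a hypothetical helper, not part of the model or its tooling; the function and parameter names are illustrative.

```python
# Hypothetical helper illustrating the recommended prompt style:
# a short structured instruction, an explicit length limit, and a
# proper-noun preservation guard.
def build_prompt(task: str, max_sentences: int, keep_terms: list[str]) -> str:
    lines = [
        f"{task} Answer in at most {max_sentences} sentences.",
        "Do not translate these proper nouns: " + ", ".join(keep_terms) + ".",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    "Explain the difference between First Fire Horizon and Nightglass Relay.",
    max_sentences=3,
    keep_terms=["First Fire Horizon", "Nightglass Relay"],
)
print(prompt)
```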