gemma-4-E4B-it-Japanese-awy: GGUF

This model was fine-tuned and converted to GGUF format using Unsloth.

Example usage:

  • For text-only use: llama-cli -hf Aikimi/gemma-4-E4B-it-Japanese-awy --jinja
  • For multimodal use: llama-mtmd-cli -hf Aikimi/gemma-4-E4B-it-Japanese-awy --jinja
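
For use from Python, a minimal sketch with the llama-cpp-python package (an assumption; it is not part of this repo) could pull one of the quantized files listed below straight from the Hub:

  # A sketch, not a tested recipe: requires `pip install llama-cpp-python huggingface_hub`.
  from llama_cpp import Llama

  # Download the Q4_K_M quant from this repo and load it locally.
  llm = Llama.from_pretrained(
      repo_id="Aikimi/gemma-4-E4B-it-Japanese-awy",
      filename="gemma-4-E4B-it.Q4_K_M.gguf",
      n_ctx=4096,  # context window; adjust to your hardware
  )

  # Chat-style generation; the chat template is read from the GGUF metadata.
  result = llm.create_chat_completion(
      messages=[{"role": "user", "content": "日本語で自己紹介してください。"}],  # "Introduce yourself in Japanese."
      max_tokens=256,
  )
  print(result["choices"][0]["message"]["content"])

Swapping the filename for gemma-4-E4B-it.Q8_0.gguf or gemma-4-E4B-it.F16.gguf trades memory and speed for quality.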

Available model files:

  • gemma-4-E4B-it.Q4_K_M.gguf
  • gemma-4-E4B-it.Q8_0.gguf
  • gemma-4-E4B-it.F16.gguf
  • gemma-4-E4B-it.BF16-mmproj.gguf

This model was trained 2x faster with Unsloth.

Model details:

  • Format: GGUF
  • Model size: 8B params
  • Architecture: gemma4