gemma-4-coder-gguf : GGUF

🧠 Gemma-4 E4B Code Model

Model ID: jgebbeken/gemma-4-coder-e4b
Base Model: Gemma 4
Type: Instruction-tuned code model
Quantization: E4B (4-bit efficient format)

📌 Overview

Gemma-4 E4B is an instruction-tuned coding model optimized for:

  • 💻 Code generation
  • 🛠️ Debugging assistance
  • 🧩 Code completion
  • 📚 Developer Q&A

This is my very first released AI model! 🎉 It's an adapter built on top of Google's Gemma 4, with my own custom modifications. I'm excited to share it with the community and welcome any feedback or contributions toward making this small model better.

The model follows a Magicoder-style instruction tuning approach, enabling strong performance on structured programming tasks and natural language → code generation.

βš™οΈ Training Details Base Model: Gemma 4 Fine-tuning: Instruction tuning Quantization: E4B (efficient 4-bit) Primary Dataset: Magicoder-Evol-Instruct-110K

✨ Capabilities

  • Instruction-following for programming tasks
  • Multi-language support (Python, Swift, JavaScript, etc.)
  • Efficient inference due to quantization
  • Suitable for local inference and agent workflows

⚠️ Limitations

  • May generate incorrect or non-compilable code
  • Limited reasoning compared to larger models
  • May reflect biases or artifacts from synthetic data
  • Should not be used in production without validation

📜 License

Model License: Apache License 2.0

This model is released under the Apache License 2.0.

Copyright 2026 Josh Gebbeken

This work includes parts of the Gemma 4 AI model family, copyright © 2026 Google LLC.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This model was finetuned and converted to GGUF format using Unsloth.

Example usage:

  • For text-only LLMs: llama-cli -hf jgebbeken/gemma-4-coder-gguf --jinja
  • For multimodal models: llama-mtmd-cli -hf jgebbeken/gemma-4-coder-gguf --jinja
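When calling the model through an API rather than llama-cli, the prompt must follow the Gemma-family chat template. With the `--jinja` flag above, llama.cpp applies the authoritative template embedded in the GGUF automatically; the sketch below builds an equivalent single-turn prompt by hand, assuming the `<start_of_turn>`/`<end_of_turn>` markers used by earlier Gemma generations also apply here.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Gemma-family chat style.

    Assumption: the <start_of_turn>/<end_of_turn> markers follow the
    convention of earlier Gemma releases; the authoritative template is
    the Jinja template embedded in the GGUF (applied by --jinja).
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The trailing `<start_of_turn>model\n` leaves the prompt open at the model's turn, so generation continues as the assistant reply.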

Available Model files:

  • gemma-4-E4b-it.Q4_K_M.gguf
  • gemma-4-E4b-it.BF16-mmproj.gguf

This model was trained 2x faster with Unsloth.
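Since the limitations above note that generated code may be non-compilable, a cheap first-pass filter before using model output is a syntax check. A minimal sketch in Python, using the standard-library `ast` module (the helper name is hypothetical, not part of the model or llama.cpp):

```python
import ast

def looks_syntactically_valid(code: str) -> bool:
    """Return True if `code` parses as Python.

    Passing this check does NOT mean the code is correct or safe to run,
    only that it is syntactically well-formed; deeper validation (tests,
    linting, review) is still required before production use.
    """
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

print(looks_syntactically_valid("def f(x):\n    return x * 2\n"))  # True
print(looks_syntactically_valid("def f(x) return x"))              # False
```

This kind of gate fits naturally at the end of an agent workflow step, rejecting and re-prompting when the model emits malformed code.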
Model stats:

  • Format: GGUF
  • Model size: 8B params
  • Architecture: gemma4
  • Quantization: 4-bit
