# Gemma-4 E4B Code Model (GGUF)

**Model ID:** jgebbeken/gemma-4-coder-e4b
**Base Model:** Gemma 4
**Type:** Instruction-tuned code model
**Quantization:** E4B (4-bit efficient format)
## Overview

Gemma-4 E4B is an instruction-tuned coding model optimized for:

- Code generation
- Debugging assistance
- Code completion
- Developer Q&A
This is my very first released AI model! It's an adapter built on top of Google's Gemma 4, with my own custom modifications. I'm excited to share it with the community and welcome any feedback or contributions toward making this small model better.
The model follows a Magicoder-style instruction tuning approach, enabling strong performance on structured programming tasks and natural-language-to-code generation.
## Training Details

- **Base Model:** Gemma 4
- **Fine-tuning:** Instruction tuning
- **Quantization:** E4B (efficient 4-bit)
- **Primary Dataset:** Magicoder-Evol-Instruct-110K
## Capabilities

- Instruction-following for programming tasks
- Multi-language support (Python, Swift, JavaScript, etc.)
- Efficient inference due to quantization
- Suitable for local inference and agent workflows

## Limitations

- May generate incorrect or non-compilable code
- Limited reasoning compared to larger models
- May reflect biases or artifacts from synthetic data
- Should not be used in production without validation

## License

**Model License:** Apache License 2.0
This model is released under the Apache License 2.0.
Copyright 2026 Josh Gebbeken
This work includes parts of the Gemma 4 AI model family, copyright © 2026 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
This model was finetuned and converted to GGUF format using Unsloth.
## Example Usage

For text-only LLMs:

```bash
llama-cli -hf jgebbeken/gemma-4-coder-gguf --jinja
```

For multimodal models:

```bash
llama-mtmd-cli -hf jgebbeken/gemma-4-coder-gguf --jinja
```
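With `--jinja`, llama.cpp applies the chat template stored in the GGUF metadata automatically. If you drive the model through a raw completion endpoint instead, you need to format the prompt yourself. As a minimal sketch (assuming this model uses the standard Gemma turn-based chat layout with `<start_of_turn>`/`<end_of_turn>` markers, as Gemma instruction-tuned models generally do):

```python
# Minimal sketch: hand-building a Gemma-style chat prompt.
# Assumption: the model follows Gemma's standard chat template;
# with --jinja, llama.cpp does this formatting for you.

def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn user prompt in the Gemma chat layout,
    ending with the model turn header so generation continues there."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write a Python function that reverses a string.")
print(prompt)
```

Sending this string to a plain completion API should elicit the model's answer in the `model` turn; the exact template in the GGUF metadata is authoritative if it differs.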
## Available Model Files

- gemma-4-E4b-it.Q4_K_M.gguf
- gemma-4-E4b-it.BF16-mmproj.gguf

This model was trained 2x faster with Unsloth.