Tags: Text Classification · GGUF · English · gemma4 · llama.cpp · unsloth · vision-language-model · gemma · deepseek · distill · conversational
How to use with llama.cpp

Install from WinGet (Windows)

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x

# Run inference directly in the terminal:
llama-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x
```

Install from brew (macOS/Linux)

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x

# Run inference directly in the terminal:
llama-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x
```

Use pre-built binary

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x

# Run inference directly in the terminal:
./llama-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x
```

Build from source code

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x
```

Use Docker

```bash
docker model run hf.co/Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x
```
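Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch using curl (llama-server listens on http://localhost:8080 by default; the prompt is just an illustration):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Explain GGUF in one sentence."}
    ],
    "temperature": 0.7
  }'
```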
Gemma4-E2B-it-Deepseek-V4-8000x : GGUF
This model was fine-tuned and converted to GGUF format using Unsloth. It reached a training loss of 1.63.
Parameters
- Epochs: 2
- Method: QLoRA
- Context length: 1024
- Learning Rate: 0.0002
LoRA Settings
- Rank: 16
- Alpha: 16
- Dropout: 0.00
- Target modules: All
Training Hyperparameters
- Optimizer: Paged AdamW 8-bit
- LR scheduler: Linear
- Batch size: 1
- Gradient accumulation steps: 32 (effective batch size: 32)
- Weight decay: 0.001
Example usage:
- For text-only LLMs: `llama-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x --jinja`
- For multimodal models: `llama-mtmd-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x --jinja`

The `--jinja` flag applies the chat template embedded in the GGUF.
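For example, to describe an image with the multimodal CLI (a sketch; the image path and prompt are placeholders, and `-hf` fetches the matching mmproj automatically when the repo provides one):

```bash
llama-mtmd-cli -hf Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x --jinja \
  --image ./photo.jpg -p "Describe this image in detail."
```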
Available Model files:
- gemma-4-e2b-it.Q4_K_M.gguf
- gemma-4-e2b-it.BF16-mmproj.gguf
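If you prefer to manage the files yourself, here is a sketch of downloading both and pairing the main model with its vision projector explicitly (file names taken from the list above):

```bash
# Download both files from the Hugging Face Hub:
huggingface-cli download Alienstro/Gemma4-E2B-it-Deepseek-V4-8000x \
  gemma-4-e2b-it.Q4_K_M.gguf gemma-4-e2b-it.BF16-mmproj.gguf --local-dir .

# Run the multimodal CLI with the model and projector paired explicitly:
llama-mtmd-cli -m gemma-4-e2b-it.Q4_K_M.gguf \
  --mmproj gemma-4-e2b-it.BF16-mmproj.gguf --jinja
```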
⚠️ Ollama Note for Vision Models
Important: Ollama currently does not support separate mmproj files for vision models.
To create an Ollama model from this vision model:
- Place the `Modelfile` in the same directory as the fine-tuned bf16 merged model.
- Run: `ollama create model_name -f ./Modelfile` (replace `model_name` with your desired name).
This will create a unified bf16 model that Ollama can use.
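A minimal sketch of the whole flow, assuming the merged bf16 model was exported as gemma-4-e2b-it-BF16.gguf (the file name and the Ollama model name below are placeholders):

```bash
# Write a minimal Modelfile pointing at the merged bf16 GGUF:
cat > Modelfile <<'EOF'
FROM ./gemma-4-e2b-it-BF16.gguf
EOF

# Create and run the Ollama model:
ollama create gemma4-e2b-deepseek -f ./Modelfile
ollama run gemma4-e2b-deepseek
```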
This model was trained 2x faster with Unsloth.
