Gemma 4 E4B-it (Causal / Text-Only)

This is the text-only (causal LM) version of google/gemma-4-E4B-it, with vision encoder weights removed. Only the text decoder is retained.
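Conceptually, stripping the vision encoder amounts to filtering the checkpoint's state dict by key prefix. A minimal sketch (this is not the author's actual conversion script, and the vision-related prefix names are assumptions):

```python
# Hedged sketch: keep only text-decoder tensors by dropping keys under
# assumed vision-encoder prefixes. Values are stand-ins for real tensors.
state_dict = {
    "model.layers.0.self_attn.q_proj.weight": "...",
    "vision_tower.encoder.layers.0.attn.qkv.weight": "...",   # assumed name
    "multi_modal_projector.linear.weight": "...",             # assumed name
}

DROP_PREFIXES = ("vision_tower.", "multi_modal_projector.")   # assumptions

# str.startswith accepts a tuple of prefixes, so one pass suffices.
text_only = {k: v for k, v in state_dict.items()
             if not k.startswith(DROP_PREFIXES)}

print(sorted(text_only))  # only the decoder key survives
```

The filtered dict would then be re-saved (e.g. as safetensors) alongside a config with the vision fields removed.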

License: same as the original model; see the Gemma Terms of Use.

Serving with vLLM

python3 -m vllm.entrypoints.openai.api_server \
    --model /model \
    --served-model-name "$MODEL" \
    --tensor-parallel-size 1 \
    --dtype auto \
    --kv-cache-dtype fp8_e4m3 \
    --max-model-len 32768 \
    --gpu-memory-utilization "$GPU_MEM_UTIL" \
    --enforce-eager \
    --enable-chunked-prefill \
    --max-num-batched-tokens 8192 \
    --language-model-only \
    --enable-auto-tool-choice \
    --reasoning-parser gemma4 \
    --tool-call-parser gemma4 \
    --async-scheduling \
    --enable-prefix-caching \
    --port 4315 \
    --host 0.0.0.0
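The $MODEL and $GPU_MEM_UTIL placeholders stand for deployment-specific values and must be set in the environment before launching. For example (values are illustrative only):

```shell
# Example values only; adjust for your deployment.
export MODEL=gemma-4-E4B-it-causal   # name clients will pass as "model"
export GPU_MEM_UTIL=0.90             # fraction of GPU memory vLLM may claim
```

Setting MODEL to the same name used in the Quick Test below keeps the client and server in agreement.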

Notes on Loading with Transformers

If loading with transformers, the following missing-key warnings are expected and harmless: because of Gemma 4's share_kv_layer mechanism, layers 24-41 reuse the K/V projection weights of earlier layers, so those tensors are intentionally absent from the checkpoint:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('aqweteddy/gemma-4-E4B-it-text')

# expected warning output
Key                                            | Status
-----------------------------------------------+--------
model.layers.{24...41}.self_attn.v_proj.weight | MISSING
model.layers.{24...41}.self_attn.k_proj.weight | MISSING

These do not affect generation results.
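To make the warning concrete, here is a minimal sketch of cross-layer KV sharing. The layer layout and the "reuse the last owning layer" mapping are assumptions for illustration, not Gemma's actual wiring:

```python
# Layers 0..23 own K/V projection weights; layers 24..41 borrow them,
# so a checkpoint contains no k_proj/v_proj tensors for those layers.
NUM_LAYERS = 42
SHARE_FROM = 24  # assumed: first layer index that borrows K/V weights

owned = {i: {"k_proj": object(), "v_proj": object()}
         for i in range(SHARE_FROM)}

def kv_for_layer(i):
    """Return the K/V projections a layer actually uses."""
    owner = i if i < SHARE_FROM else SHARE_FROM - 1  # assumed mapping
    return owned[owner]

# A sharing layer resolves to an earlier layer's weights, which is why
# transformers reports that layer's own k_proj/v_proj keys as MISSING.
assert kv_for_layer(30) is kv_for_layer(23)
assert all(kv_for_layer(i) is owned[i] for i in range(SHARE_FROM))
```

Since the sharing layers never read their own k_proj/v_proj entries, the MISSING keys have no effect on the forward pass.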

Quick Test

curl -s http://localhost:4315/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-4-E4B-it-causal",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 100
  }'

Expected response:

{
  "model": "gemma-4-E4B-it-causal",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ]
}
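A response like the one above can be consumed programmatically with only the standard library; the JSON here is copied from this card's expected output:

```python
import json

# Response body taken verbatim from the expected output above.
response_body = '''{
  "model": "gemma-4-E4B-it-causal",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ]
}'''

data = json.loads(response_body)
reply = data["choices"][0]["message"]["content"]
assert data["choices"][0]["finish_reason"] == "stop"
print(reply)  # prints: Hello! How can I help you today?
```

In practice the content field will vary between runs; only the response shape is stable.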
Model size: 8B params (BF16, safetensors)