# Gemma-4 E4B Gemini 3.1 Pro Reasoning Distill

A fine-tuned version of Google's Gemma-4 E4B model, trained on high-quality chain-of-thought reasoning data distilled from Gemini 3.1 Pro.

## Model Details

| Property | Value |
|---|---|
| Base Model | google/gemma-4-E4B-it |
| Parameters | 8B total (4B active) |
| Training Method | LoRA (r=8, alpha=8) |
| Learning Rate | 5e-5 |
| Epochs | 0.5 |
| Framework | Unsloth |
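As a rough illustration of why r=8 keeps the adapter small: for a weight matrix of shape (d_out, d_in), LoRA trains two low-rank factors A (r × d_in) and B (d_out × r), so only r·(d_in + d_out) parameters are updated per adapted matrix, with the update scaled by alpha/r. A stdlib-only sketch (the 2048-dimension projection size is an illustrative assumption, not the model's actual shape):

```python
def lora_param_count(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters LoRA adds to one (d_out x d_in) weight matrix:
    factor A is (r x d_in), factor B is (d_out x r)."""
    return r * d_in + d_out * r

# Illustrative projection size (assumed, not the real Gemma shape).
d = 2048
full_params = d * d                        # 4,194,304 frozen base parameters
lora_params = lora_param_count(d, d, r=8)  # 32,768 trainable parameters
scaling = 8 / 8                            # lora_alpha / r = 1.0 in this config
print(full_params, lora_params, scaling)   # 4194304 32768 1.0
```

With alpha = r, the scaling factor is 1.0, so the adapter's contribution is neither amplified nor attenuated relative to its learned weights.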

## Training Data

Combined dataset from:

- Roman1111111/gemini-3.1-pro-hard-high-reasoning
- Roman1111111/gemini-3-pro-10000x-hard-high-reasoning

Total: ~13,000 high-quality reasoning examples covering math, logic, coding, and complex problem-solving.
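The two sources can be merged into a single shuffled training list before tokenization. A minimal stdlib sketch with toy in-memory records (the `prompt`/`response` field names are illustrative assumptions, not the datasets' actual schema):

```python
import random

# Toy stand-ins for the two reasoning datasets (field names assumed).
hard = [{"prompt": "p1", "response": "r1"}]
hard_10000x = [
    {"prompt": "p2", "response": "r2"},
    {"prompt": "p3", "response": "r3"},
]

combined = hard + hard_10000x
random.seed(42)           # deterministic shuffle for a reproducible training order
random.shuffle(combined)
print(len(combined))      # 3
```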

## Training Configuration (v2 - Improved)

This model uses conservative hyperparameters to prevent catastrophic forgetting:

```python
# LoRA configuration
r = 8
lora_alpha = 8
lora_dropout = 0.1

# Training configuration
learning_rate = 5e-5
num_train_epochs = 0.5
per_device_train_batch_size = 2
gradient_accumulation_steps = 8
weight_decay = 0.01
```
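From these settings, the effective batch size and approximate optimizer-step count follow directly (assuming a single device; the ~13,000-example count comes from the Training Data section):

```python
num_examples = 13_000             # approximate dataset size
per_device_train_batch_size = 2
gradient_accumulation_steps = 8
num_train_epochs = 0.5

# Gradients from 8 micro-batches of 2 are accumulated before each update.
effective_batch = per_device_train_batch_size * gradient_accumulation_steps  # 16
steps = int(num_examples * num_train_epochs / effective_batch)               # ~406
print(effective_batch, steps)
```

So half an epoch over ~13,000 examples amounts to roughly 400 optimizer steps, consistent with the conservative, forgetting-averse recipe described above.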

## Evaluation Results

| Test Type | Score |
|---|---|
| Simple Math | 3/3 (100%) |
| Logic Reasoning | 1/1 (100%) |
| Complex Problems | 6/8 (75%) |
| Overall | Matches base model |
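The per-category results above pool into a simple overall accuracy (assuming the listed counts are the full test set):

```python
# (correct, total) per category, taken from the table above.
results = {
    "Simple Math": (3, 3),
    "Logic Reasoning": (1, 1),
    "Complex Problems": (6, 8),
}
correct = sum(c for c, _ in results.values())   # 10
total = sum(t for _, t in results.values())     # 12
print(f"{correct}/{total} = {correct / total:.1%}")  # 10/12 = 83.3%
```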

Key achievement: the fine-tuned model preserves the base model's general capabilities while adopting the reasoning style of the training data.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Ayodele01/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "Ayodele01/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill"
)

messages = [{"role": "user", "content": "Solve step by step: If 3x + 7 = 22, what is x?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## GGUF Versions

For llama.cpp and Ollama users, see: Ayodele01/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-GGUF

## License

This model inherits the Gemma license.

## Acknowledgments

- Google for the Gemma-4 base model
- Roman1111111 for the reasoning datasets
- Unsloth for efficient fine-tuning