πŸš€ Finetuned Gemma 3n Model

This is a finetuned version of the Gemma 3n model built with Unsloth for fast, efficient, instruction-aligned text generation. Training ran for 7 epochs on an A100 GPU in Google Colab, using 4-bit quantization to keep memory requirements low; a configuration sketch follows the Training Details list below.

πŸ—οΈ Training Details

  • Framework: PyTorch + Hugging Face Transformers + TRL
  • Finetuning Engine: Unsloth
  • Precision: 4-bit (bnb)
  • Epochs: 7
  • GPU: A100 (via Google Colab)
  • LoRA / PEFT: Enabled
  • Training Time: ~ 2:52:02 (hh:mm:ss)
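
For context, the sketch below shows how a run like this is typically wired up with Unsloth's FastModel and TRL's SFTTrainer. The base checkpoint, dataset name, LoRA rank, and trainer hyperparameters are illustrative assumptions; only the 7 epochs and 4-bit loading mirror the details above.

from unsloth import FastModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load a Gemma 3n base checkpoint in 4-bit with Unsloth (checkpoint name is an assumption)
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3n-E4B-it",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank, alpha, and dropout here are illustrative
model = FastModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
)

# Placeholder dataset with a "text" column; substitute the actual training data
dataset = load_dataset("your_dataset_name", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,   # illustrative
        num_train_epochs=7,              # matches the 7 epochs reported above
        learning_rate=2e-4,              # illustrative
        output_dir="outputs",
    ),
)
trainer.train()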

✨ Use Cases

  • Chatbots / Assistants
  • Instruction following
  • Educational tools
  • Question answering
  • Creative writing

πŸ› οΈ Inference Example

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the finetuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("p2kalita/gemma-3n-E4B-it-finetuned")
tokenizer = AutoTokenizer.from_pretrained("p2kalita/gemma-3n-E4B-it-finetuned")

# Tokenize a prompt and generate up to 100 new tokens
prompt = "Explain quantum mechanics simply."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
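
Since the weights are instruction-tuned and were finetuned in 4-bit, the variant below is a minimal sketch of loading the model in 4-bit for inference and formatting the request with the tokenizer's chat template. It assumes bitsandbytes and accelerate are installed and a GPU is available; the quantization settings are illustrative defaults, not values read from this repository.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit loading configuration (an assumption, not the repo's own config)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "p2kalita/gemma-3n-E4B-it-finetuned",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("p2kalita/gemma-3n-E4B-it-finetuned")

# Format the request with the chat template, since the model is instruction-tuned
messages = [{"role": "user", "content": "Explain quantum mechanics simply."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))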

This Gemma 3n model was trained 2x faster with Unsloth and Hugging Face's TRL library.
