# Python/DSA Tutor – Gemma-2-2B QLoRA
A fine-tuned version of `google/gemma-2-2b-it` for Python and DSA (data structures and algorithms) tutoring.
## Model Description
This model is fine-tuned using QLoRA (4-bit quantization + LoRA adapters) to function as a structured Python/DSA tutor. Its responses follow a consistent Goal → Key Concept → Python Example → Checkpoint Question format.
## Training
- Base model: google/gemma-2-2b-it
- Method: QLoRA (SFT with TRL SFTTrainer)
- Dataset: 546 synthetic tutoring examples
- Categories: concept, problem_solving, debugging, confusion, misconception, refusal
- LoRA rank: 16, alpha: 32
- Learning rate: 5e-5
- Epochs: 3
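The rank and alpha above correspond roughly to the following `peft` adapter configuration. This is a sketch, not the exact training config: the card states only rank, alpha, and the SFT setup, so the dropout value and target modules below are assumptions (the listed projections are a common choice for Gemma-2):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                # LoRA rank (from the card)
    lora_alpha=32,       # LoRA alpha (from the card)
    lora_dropout=0.05,   # assumption: not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
    # assumption: target modules are not listed in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

A config like this would be passed to TRL's `SFTTrainer` via its `peft_config` argument.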
## How to Load
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# 4-bit NF4 quantization config, matching the QLoRA training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the quantized base model (eager attention is recommended for Gemma-2)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
    attn_implementation="eager",
)

tokenizer = AutoTokenizer.from_pretrained("Tharun241100/python-dsa-tutor-gemma2-2b-qlora")

# Attach the fine-tuned LoRA adapters
model = PeftModel.from_pretrained(
    base_model,
    "Tharun241100/python-dsa-tutor-gemma2-2b-qlora",
)
model.eval()
```
## Inference
```python
def generate_response(model, tokenizer, prompt, max_new_tokens=400):
    # Wrap the prompt in Gemma's chat turn markers
    formatted = f"<start_of_turn>user\n{prompt}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(formatted, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            repetition_penalty=1.3,
        )
    # Decode only the newly generated tokens, skipping the echoed prompt
    return tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)

response = generate_response(model, tokenizer, "Explain how a stack works in Python")
print(response)
```
## Example Output
**Prompt:** Explain how a stack works in Python

**Response:**

> Goal: You will learn what stacks are and why they are useful.
>
> Key Concept: A stack is like a pile of plates where you can only add or remove from the top (LIFO - Last In First Out).
>
> Python Example: [clean code example]
>
> Checkpoint Question: Can you think of a real-world application that uses a stack?
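For reference, a minimal stack example along the lines the tutor describes (my own illustration, not the model's actual output) looks like this in plain Python, where a list's `append`/`pop` give LIFO behavior:

```python
# A Python list works as a stack: append() pushes onto the top,
# pop() removes from the top (LIFO - Last In First Out).
stack = []
stack.append("plate 1")  # push
stack.append("plate 2")  # push
top = stack.pop()        # pop: removes the last item pushed

print(top)    # plate 2
print(stack)  # ['plate 1']
```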