# 🧠 AI Science Tutor (Mistral-7B LoRA)
This repository contains a fine-tuned AI tutoring model built using:
- Base model: `mistralai/Mistral-7B-Instruct-v0.2`
- Fine-tuning method: LoRA (PEFT)
- Task: educational tutoring (step-by-step explanations)
## What’s inside?
- LoRA adapter weights (`adapter_model.safetensors`)
- Adapter config (`adapter_config.json`)
- Tokenizer (with embedded chat template)
- Ready-to-use inference pipeline
## How to Use
### Load Model + Tokenizer
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

model_id = "IronMan19/Fine-tune-science-tutor-mistral-7b-lora"

# The tokenizer (with its embedded chat template) ships with the adapter repo.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the base model first...
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
    torch_dtype="auto",
)

# ...then attach the LoRA adapter weights on top of it.
model = PeftModel.from_pretrained(base_model, model_id)
model.eval()
```
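
### Run Inference

With the model loaded, you can generate tutoring responses through the tokenizer's embedded chat template. The sketch below is a minimal example; the prompt and the generation settings (`max_new_tokens`, greedy decoding) are illustrative assumptions, not values fixed by this repository.

```python
import torch

# Example tutoring prompt (illustrative; substitute any subject question).
messages = [
    {"role": "user", "content": "Explain photosynthesis step by step."},
]

# The embedded chat template formats the conversation for Mistral-Instruct.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```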
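
### Optional: Merge the Adapter

If you need a standalone checkpoint (e.g. for serving without PEFT installed), PEFT can fold the LoRA weights into the base model. A short sketch; the output path is an example, not part of this repository:

```python
# Merge the LoRA weights into the base model and drop the PEFT wrappers.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("science-tutor-merged")  # example path
```

The merged model then behaves like a regular `AutoModelForCausalLM` checkpoint, at the cost of losing the small, swappable adapter format.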