# gemma-3-1b-medical-cardiology-lora
A LoRA adapter fine-tuned from `google/gemma-3-270m-it` on a medical cardiology question-answering dataset using Tunix (JAX/Flax).
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m-it")
model = PeftModel.from_pretrained(base, "lmassaron/gemma-3-1b-medical-cardiology-lora")
tok = AutoTokenizer.from_pretrained("google/gemma-3-270m-it")
```
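Once the adapter is attached, inference works like any other chat model: wrap the question in a user message, apply the tokenizer's chat template, and generate. The sketch below is illustrative, not part of the official card; the helper names (`build_messages`, `generate_answer`) and the example question are assumptions, and the generation parameters are defaults you may want to tune.

```python
# Hypothetical helpers for querying the adapted model; names and the
# example question are illustrative, not from the model card.

def build_messages(question: str) -> list[dict]:
    # Wrap a single cardiology question in the chat-message format
    # expected by apply_chat_template.
    return [{"role": "user", "content": question}]


def generate_answer(model, tok, question: str, max_new_tokens: int = 128) -> str:
    # Format the prompt with the Gemma chat template and generate a reply.
    inputs = tok.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens after the prompt.
    return tok.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m-it")
    model = PeftModel.from_pretrained(base, "lmassaron/gemma-3-1b-medical-cardiology-lora")
    tok = AutoTokenizer.from_pretrained("google/gemma-3-270m-it")
    print(generate_answer(model, tok, "What are common symptoms of atrial fibrillation?"))
```

Because the adapter is small, you can also call `model.merge_and_unload()` after loading to fold the LoRA weights into the base model for slightly faster inference.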