# MedSLM-SFT -- Instruction-Tuned Medical Language Model

> **Research Only -- Not for Clinical Use**
>
> This model is intended for research and educational purposes only. It must not be used for medical diagnosis, treatment recommendations, or any clinical decision-making.
## Overview
MedSLM-SFT is a ~330M-parameter medical language model fine-tuned for instruction following and question answering. It was created by applying Supervised Fine-Tuning (SFT) with QLoRA (4-bit quantized LoRA) to the pre-trained base model Saminx22/MedSLM, then merging the LoRA adapters back into the base weights at full fp16 precision.
This repository contains the merged model. It can be loaded directly with AutoModelForCausalLM from the Hugging Face transformers library -- no PEFT dependency is required at inference time.
For the standalone LoRA adapter weights (~17.8 MB), see Saminx22/MedSLM-SFT-LoRA.
## Model Details
| Property | Value |
|---|---|
| Base model | Saminx22/MedSLM |
| Architecture | LLaMA-style (RMSNorm, RoPE, SwiGLU, GQA) |
| Parameters | ~330M |
| Model size on disk | ~1.32 GB (fp16) |
| Context length | 1,024 tokens |
| Vocabulary | 50,257 (GPT-2 tokenizer) |
| Fine-tuning method | QLoRA (4-bit NF4 base + LoRA r=16, alpha=32) |
| Trainable parameters during SFT | ~7.1M (3.59% of total) |
| Training data | 46,166 medical QA pairs |
| Training framework | Unsloth + TRL SFTTrainer |
| Hardware | Tesla T4 (15.6 GB VRAM) |
## Architecture
The model uses a LLaMA-style transformer architecture:
- RMSNorm pre-normalization
- Rotary Positional Embeddings (RoPE)
- SwiGLU activation in the feed-forward network
- Grouped-Query Attention (GQA) with 16 query heads and 8 key-value heads
The base model was pre-trained from scratch on ~148M tokens of medical text (PubMed abstracts, PMC full texts, and clinical guidelines).
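The grouped-query attention layout (16 query heads sharing 8 key-value heads, so each KV head serves 2 query heads) can be sketched in a few lines of NumPy. This is a simplified single-batch version without causal masking, and the sequence length and head dimension below are illustrative, not the model's actual configuration:

```python
import numpy as np

def gqa_attention(q, k, v, n_q_heads=16, n_kv_heads=8):
    """Minimal grouped-query attention: each KV head is shared by
    n_q_heads // n_kv_heads query heads (here 2)."""
    # q: (n_q_heads, seq, head_dim); k, v: (n_kv_heads, seq, head_dim)
    group = n_q_heads // n_kv_heads
    # Repeat each KV head so it lines up with its group of query heads.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = q @ k.transpose(0, 2, 1) * scale         # (n_q_heads, seq, seq)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)          # row-wise softmax
    return weights @ v                                 # (n_q_heads, seq, head_dim)

rng = np.random.default_rng(0)
seq, head_dim = 4, 8
q = rng.normal(size=(16, seq, head_dim))
k = rng.normal(size=(8, seq, head_dim))
v = rng.normal(size=(8, seq, head_dim))
out = gqa_attention(q, k, v)
print(out.shape)  # (16, 4, 8)
```

The point of GQA is the KV-cache saving: only 8 key/value heads are stored and computed, halving cache memory relative to standard multi-head attention with 16 heads.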
## Training Details

### Dataset
- Repository: Saminx22/medical_data_for_slm_SFT
- Splits: 46,166 train / 2,565 validation / 2,565 test
- Sources: WikiDoc, medical Q&A corpora
- Average length: ~180 tokens per example
### Prompt Template

The model was trained with the following instruction template. You must use this exact format at inference time for best results:

```text
### System:
You are a medical AI assistant. Provide accurate, evidence-based answers to medical questions.

### User:
{question}

### Assistant:
{answer}
```
### SFT Hyperparameters
| Hyperparameter | Value |
|---|---|
| Learning rate | 2e-4 |
| LR scheduler | Cosine decay |
| Warmup ratio | 5% |
| Batch size (per device) | 4 |
| Gradient accumulation steps | 8 |
| Effective batch size | 32 |
| Epochs | 3 |
| Weight decay | 0.01 |
| Max gradient norm | 1.0 |
| Optimizer | AdamW (8-bit) |
| Sequence packing | Enabled |
| Max sequence length | 1,024 tokens |
| Precision | bf16 (fp16 fallback) |
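As a sketch only, these hyperparameters might map onto TRL's `SFTConfig` roughly as follows. The field names follow recent TRL versions (where `SFTConfig` subclasses `TrainingArguments`) and should be treated as an assumption, not the card author's actual script; in particular, `max_seq_length` has been renamed in some TRL releases:

```python
from trl import SFTConfig

# Hypothetical reconstruction of the configuration described in the table.
sft_config = SFTConfig(
    output_dir="medslm-sft",
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,   # effective batch size: 4 * 8 = 32
    num_train_epochs=3,
    weight_decay=0.01,
    max_grad_norm=1.0,
    optim="adamw_8bit",
    packing=True,
    max_seq_length=1024,
    bf16=True,                       # fall back to fp16=True on GPUs without bf16
)
```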
### LoRA Configuration
| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Effective scaling (alpha / r) | 2.0 |
| Dropout | 0.0 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Bias | none |
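The scaling factor and per-matrix parameter cost follow directly from this table. A LoRA update replaces a frozen weight `W` (shape `d_out x d_in`) with `W + (alpha / r) * B @ A`, where `A` is `r x d_in` and `B` is `d_out x r`, adding `r * (d_in + d_out)` trainable weights per targeted matrix. The dimensions in the example below are illustrative, not the model's actual projection sizes:

```python
def lora_params(d_in: int, d_out: int, r: int = 16) -> int:
    """Trainable parameters LoRA adds to one d_out x d_in matrix:
    A is (r x d_in), B is (d_out x r)."""
    return r * (d_in + d_out)

r, alpha = 16, 32
scaling = alpha / r  # the low-rank update is applied as (alpha / r) * B @ A
print(scaling)       # 2.0

# Illustrative only: a square 1024-dim projection would gain
# 16 * (1024 + 1024) = 32,768 trainable parameters.
print(lora_params(1024, 1024))  # 32768
```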
### Training Results
| Metric | Value |
|---|---|
| Total training steps | 4,329 |
| Final training loss | 2.4678 |
| Training runtime | ~43 minutes |
| Throughput | 53.4 samples/sec |
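These figures are mutually consistent with the dataset size and effective batch size; a quick arithmetic check (ignoring sequence packing, which can merge short examples and change the exact step count):

```python
import math

examples = 46_166
epochs = 3
effective_batch = 32  # 4 per device * 8 gradient accumulation steps

# Optimizer steps over the full run.
steps = math.ceil(examples * epochs / effective_batch)
print(steps)  # 4329

# Runtime implied by the reported throughput of 53.4 samples/sec.
runtime_min = examples * epochs / 53.4 / 60
print(round(runtime_min))  # 43
```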
## How to Use
### Requirements

```shell
pip install transformers torch accelerate
```

For optional 4-bit quantized inference (reduces VRAM usage), also install:

```shell
pip install bitsandbytes
```
### Quick Start (Full Precision)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Saminx22/MedSLM-SFT"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizer has no pad token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()
```
### Quick Start (4-bit Quantized)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "Saminx22/MedSLM-SFT"

# NF4 double quantization, matching the QLoRA setup used during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
model.eval()
```
### Generating a Response

The snippet below assumes `model` and `tokenizer` from one of the Quick Start sections above.

```python
SYSTEM_PROMPT = (
    "You are a medical AI assistant. "
    "Provide accurate, evidence-based answers to medical questions."
)

def ask(question: str, max_new_tokens: int = 300) -> str:
    # Reproduce the exact training-time prompt template.
    prompt = (
        f"### System:\n{SYSTEM_PROMPT}\n\n"
        f"### User:\n{question}\n\n"
        f"### Assistant:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.inference_mode():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            top_k=50,
            repetition_penalty=1.1,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Strip the prompt tokens, keeping only the newly generated response.
    response = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(response, skip_special_tokens=True).strip()

print(ask("What are the warning signs of a stroke?"))
```
### Recommended Generation Parameters

| Parameter | Value | Notes |
|---|---|---|
| `temperature` | 0.7 | Controls randomness; lower values produce more deterministic output |
| `top_p` | 0.9 | Nucleus sampling threshold |
| `top_k` | 50 | Limits sampling to the k most likely tokens |
| `repetition_penalty` | 1.1 | Reduces repetitive text |
| `max_new_tokens` | 300 | Maximum response length |
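Because the context window is only 1,024 tokens, `max_new_tokens` must be budgeted against the prompt length. A minimal sketch of tail-truncating an over-long prompt so prompt plus response fits in the window (pure Python; the integer list stands in for real tokenizer output, and the function name is illustrative):

```python
CONTEXT_LEN = 1024

def truncate_for_generation(prompt_ids: list[int], max_new_tokens: int = 300) -> list[int]:
    """Keep only the tail of the prompt so prompt + response fits the window."""
    budget = CONTEXT_LEN - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens leaves no room for the prompt")
    return prompt_ids[-budget:]

ids = list(range(2000))            # stand-in for a long tokenized prompt
kept = truncate_for_generation(ids)
print(len(kept))  # 724
print(kept[0])    # 1276
```

Keeping the tail rather than the head preserves the `### Assistant:` marker at the end of the prompt, which the model relies on to start its answer.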
## Repository Contents

| File | Description |
|---|---|
| `config.json` | Model architecture configuration |
| `model.safetensors` | Model weights in safetensors format (~1.32 GB) |
| `tokenizer.json` | Tokenizer vocabulary and merges |
| `tokenizer_config.json` | Tokenizer configuration |
## Limitations and Risks

- Research only -- not validated for clinical use or patient care.
- Small model size (~330M parameters); more prone to hallucinations and factual errors than larger models.
- No RLHF, DPO, or other safety alignment has been applied.
- Trained for single-turn question answering only; not designed for multi-turn dialogue.
- Context length limited to 1,024 tokens.
- Training data is English-only; the model is not expected to perform well in other languages.
## Citation

```bibtex
@misc{medslm-sft-2025,
  title     = {MedSLM-SFT: Instruction-Tuned Medical Small Language Model},
  author    = {Saminx22},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Saminx22/MedSLM-SFT}
}
```
## Related Repositories

| Repository | Description |
|---|---|
| `Saminx22/MedSLM` | Pre-trained base model |
| `Saminx22/MedSLM-SFT-LoRA` | LoRA adapter weights only (~17.8 MB) |
| `Saminx22/medical_data_for_slm_SFT` | SFT training dataset |