This repository contains the LoRA adapter weights for SkinR1, a dermatology-focused vision–language model that enhances clinical reasoning, lesion differentiation, and hierarchical dermatological diagnosis.
This repository provides:
- `adapter_config.json`
- `adapter_model.safetensors`
These LoRA weights are not a standalone model; they must be loaded on top of the base model `Qwen/Qwen2.5-VL-7B-Instruct`.
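At load time, PEFT applies the low-rank update stored in `adapter_model.safetensors` on top of each targeted base weight: `W' = W + (alpha / r) * B @ A`, with hyperparameters such as `r` and `lora_alpha` read from `adapter_config.json`. A minimal NumPy sketch of that update (shapes and values are illustrative, not taken from this repository's actual config):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 16, 4, 8     # illustrative, not the real config

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in))          # LoRA "A" matrix (down-projection)
B = np.zeros((d_out, r))                # LoRA "B" matrix, zero-initialized

# Effective weight after attaching the adapter
W_adapted = W + (alpha / r) * (B @ A)

# With B initialized to zero, a fresh adapter starts as a no-op
assert np.allclose(W_adapted, W)
```

Because only `A` and `B` (plus the config) need to be stored, the adapter files above are a small fraction of the size of the full 7B base model.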
### Load the LoRA adapter with PEFT
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel

# Base model and LoRA adapter IDs
base_model = "Qwen/Qwen2.5-VL-7B-Instruct"
lora_id = "zml5418/SkinR1-Qwen2.5-VL-7B-LoRA"

# Load the processor (tokenizer + image preprocessor)
processor = AutoProcessor.from_pretrained(base_model, trust_remote_code=True)

# Load the base model
model = AutoModelForVision2Seq.from_pretrained(
    base_model,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter and switch to evaluation mode
model = PeftModel.from_pretrained(model, lora_id)
model = model.eval()
```
### Run inference
```python
from PIL import Image

image = Image.open("your_image.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {
                "type": "text",
                "text": "What abnormality is present in this dermatology image?",
            },
        ],
    }
]

# Build the chat-formatted prompt
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Tokenize the prompt and preprocess the image
inputs = processor(
    images=image,
    text=text,
    return_tensors="pt",
).to(model.device)

# Generate the diagnosis (greedy decoding)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
)

# Strip the prompt tokens so only the newly generated answer is decoded
generated = outputs[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```
**Base model:** [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)