# ViT-Small CIFAR-100 (LoRA Fine-tuned)

This model is a `vit_small_patch16_224` from timm, fine-tuned on CIFAR-100 using LoRA (Low-Rank Adaptation) via the PEFT library.
## Training Details

- Base model: `vit_small_patch16_224` (ImageNet pretrained)
- Dataset: CIFAR-100 (100 classes)
- Method: LoRA injected into the attention `qkv` layers (see the sketch after this list)
- WandB project: `mlops-assignment5`
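The exact training script is not included in this card; the following is a minimal sketch of how the fine-tuning setup might look. The transforms, rank `r=8`, and `lora_alpha=16` are illustrative assumptions and not necessarily the values used for this checkpoint.

```python
import timm
import torch
from torchvision import datasets, transforms
from peft import LoraConfig, get_peft_model

# CIFAR-100 images are 32x32; the ViT expects 224x224 inputs,
# so resize and normalize (ImageNet statistics assumed here).
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
train_set = datasets.CIFAR100(root="data", train=True, download=True, transform=transform)

# Start from the ImageNet-pretrained backbone with a fresh 100-class head.
model = timm.create_model("vit_small_patch16_224", pretrained=True, num_classes=100)

# Inject LoRA adapters into the fused attention qkv projections.
# r and lora_alpha below are placeholders, not the trained configuration.
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["qkv"],
                         lora_dropout=0.1, bias="none", modules_to_save=["head"])
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights and the head are trainable
```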
## Usage

```python
import torch
import timm
from peft import LoraConfig, get_peft_model

# Recreate the base architecture with a 100-class head.
model = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=100)

# RANK and ALPHA must match the LoRA configuration used during training.
lora_config = LoraConfig(r=RANK, lora_alpha=ALPHA, target_modules=["qkv"],
                         lora_dropout=0.1, bias="none", modules_to_save=["head"])
model = get_peft_model(model, lora_config)

# Load the fine-tuned weights and switch to inference mode.
ckpt = torch.load("pytorch_model.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()
```
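For inference, inputs need the same 224x224 preprocessing as training. A minimal example follows; the ImageNet normalization statistics and the `example.png` filename are assumptions for illustration and should be replaced to match your setup.

```python
import torch
from PIL import Image
from torchvision import transforms

# Preprocessing assumed to match training: resize to 224x224, ImageNet normalization.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

image = Image.open("example.png").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
predicted_class = logits.argmax(dim=-1).item()
print(f"Predicted CIFAR-100 class index: {predicted_class}")
```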