# Qwen3 LoRA Adapters Collection

A collection of LoRA adapters extracted from Qwen/Qwen3-4B-Thinking-2507 using different rank and epsilon parameters.
## Structure

```
adapters/
├── rank8_eps0.01/
├── rank8_eps0.1/
├── rank16_eps0.01/
└── ...
```
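Each directory name encodes the extraction parameters, so adapters can be selected programmatically. A minimal sketch of a parser for this naming scheme (`parse_adapter_name` is a hypothetical helper, not part of this repo):

```python
import re

def parse_adapter_name(name):
    """Parse a directory name like 'rank8_eps0.01' into (rank, eps).

    Hypothetical helper: assumes the 'rank<R>_eps<E>' convention
    shown in the Structure section above.
    """
    m = re.fullmatch(r"rank(\d+)_eps([\d.]+)", name)
    if m is None:
        raise ValueError(f"unrecognized adapter name: {name}")
    return int(m.group(1)), float(m.group(2))

names = ["rank8_eps0.01", "rank8_eps0.1", "rank16_eps0.01"]
parsed = {n: parse_adapter_name(n) for n in names}
print(parsed["rank16_eps0.01"])  # (16, 0.01)
```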
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model the adapters are applied on top of
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Base")

# Apply a specific LoRA adapter (path follows the rank<R>_eps<E> naming)
lora_model = PeftModel.from_pretrained(base_model, "adapters/rank16_eps0.1")
```
## Parameter Guide

- **Rank**: lower values are more parameter-efficient; higher values can potentially capture the fine-tuning delta more faithfully.
- **Epsilon**: threshold for singular-value filtering; lower values keep more components.
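The interaction between the two parameters can be sketched with plain NumPy: take the SVD of the weight delta between the fine-tuned and base model, then keep at most `rank` components whose singular values exceed `eps`. This is a minimal illustration of the general technique, assuming a simple truncation scheme; `extract_lora` is a hypothetical helper, not the actual extraction script used for this repo:

```python
import numpy as np

def extract_lora(delta_w, rank, eps):
    """Sketch of SVD-based LoRA extraction (hypothetical helper).

    Factor the weight delta into B @ A, keeping at most `rank`
    components whose singular values exceed `eps`.
    """
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    keep = min(rank, int(np.sum(s > eps)))
    a = vt[:keep, :]            # (keep, in_features)
    b = u[:, :keep] * s[:keep]  # (out_features, keep), singular values folded in
    return b, a

# Toy example: an exactly rank-4 delta is recovered by a rank-8 budget
rng = np.random.default_rng(0)
delta = rng.normal(size=(16, 4)) @ rng.normal(size=(4, 16))
b, a = extract_lora(delta, rank=8, eps=0.01)
print(b.shape, a.shape)
```

With a lower `eps`, more (smaller) singular-value components survive the filter, up to the `rank` budget; with a higher `eps`, the adapter is truncated more aggressively.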