# qwen3-30B-A3B-32-64-5k-no-gate
A PEFT LoRA adapter fine-tuned from Qwen/Qwen3-30B-A3B on the rl-research/dr-tulu-sft-data dataset.
## Training Details
- LoRA rank: 32, alpha: 64
- Target modules: q_proj, v_proj, k_proj, up_proj, down_proj, gate_proj, o_proj
- Trained with LlamaFactory on 2x GPUs for 3 epochs with a cosine LR schedule.
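
For reference, the hyperparameters above correspond roughly to a PEFT `LoraConfig` along these lines. This is a minimal sketch reconstructed from the list above; the actual config was generated by LlamaFactory during training and is not reproduced here:

```python
from peft import LoraConfig

# Hypothetical reconstruction from the hyperparameters listed above,
# not the original LlamaFactory training config.
lora_config = LoraConfig(
    r=32,            # LoRA rank
    lora_alpha=64,   # scaling factor alpha
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "up_proj", "down_proj", "gate_proj",
    ],
    task_type="CAUSAL_LM",
)
```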
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "Qwen/Qwen3-30B-A3B"

# Load the base model and its tokenizer.
base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "qwen3-30B-A3B-32-64-5k-no-gate")
```
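
A minimal generation sketch continuing from the snippet above; the prompt and generation settings are illustrative, not from the original card:

```python
# Build a chat prompt with the tokenizer's chat template and generate.
messages = [{"role": "user", "content": "Summarize the idea of LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you need a standalone checkpoint instead of a base-plus-adapter pair, `model.merge_and_unload()` folds the adapter weights into the base model.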