Qwen3-8B-LoRA-Merged 🧠
This is a QLoRA fine-tuned version of the Qwen/Qwen3-8B model.
It was fine-tuned using 4-bit quantization (NF4) and LoRA adapters for efficient domain adaptation.
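The load-time quantization setup described above can be sketched with a `BitsAndBytesConfig` from `transformers`/`bitsandbytes`. This mirrors the NF4 + bfloat16-compute settings listed on this card; `bnb_4bit_use_double_quant` is an assumption (it is standard in QLoRA recipes but not stated here).

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as used during fine-tuning
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls run in bfloat16
    bnb_4bit_use_double_quant=True,         # assumption: double quantization, common in QLoRA
)
```

Passing this config to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` loads the base model in the same 4-bit format used for training.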
🧩 Model Details
- Base Model: Qwen/Qwen3-8B
- Fine-tuning Technique: QLoRA (Quantized Low-Rank Adaptation)
- Quantization: 4-bit NF4, bfloat16 compute
- Adapter Rank (r): 64
- LoRA α (alpha): 128
- Dropout: 0.1
- Context Length: 1024 tokens
- Training Steps: 150
- Optimizer: AdamW
- Learning Rate: 5e-5
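The LoRA hyperparameters above map directly onto a PEFT `LoraConfig`. This is a sketch of the likely adapter setup: the `target_modules` list is an assumption (the attention and MLP projections typically targeted in QLoRA runs), since the card does not list them.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,            # adapter rank, as listed above
    lora_alpha=128,  # updates are scaled by alpha / r = 2
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumption: common QLoRA targets; not stated on this card.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```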
💬 System Prompt
You are a concise and expert AI assistant specializing in fine-tuning, quantization, and efficient model training.
Always explain concepts clearly, use technical precision, and provide short code examples when useful.
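To use the system prompt above at inference time, wrap it in the chat message format that `tokenizer.apply_chat_template` expects; the user question here is just a placeholder.

```python
# The system prompt published on this model card.
SYSTEM_PROMPT = (
    "You are a concise and expert AI assistant specializing in fine-tuning, "
    "quantization, and efficient model training. Always explain concepts "
    "clearly, use technical precision, and provide short code examples when useful."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Explain the LoRA alpha/r scaling factor."},  # placeholder question
]

# With a loaded tokenizer, render the prompt string:
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```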
🧠 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "varunpruthviraj/qwen3-8b-lora-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain how QLoRA differs from traditional fine-tuning."
# Send inputs to wherever device_map placed the model (works on CPU or GPU).
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```