# qwen3-4b-structured-sft-lora-v06-merged
Fully merged model (base + LoRA) fine-tuned from Qwen/Qwen3-4B-Instruct-2507.
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit) → merged
- Max sequence length: 512
- Epochs: 2
- Learning rate: 2e-6
- LoRA: r=64, alpha=128 (see the configuration sketch after this list)
- Dataset: 9 datasets merged & cleaned, 5498 samples (CoT preserved, MASK_COT=1)
- v06 changes: CoT preserved + code keywords removed
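
The training script itself is not included in this card, but the hyperparameters above map directly onto a standard PEFT/bitsandbytes QLoRA setup. The sketch below is a minimal, hypothetical reconstruction: the `target_modules` list and the `bnb_4bit_*` settings are assumptions (typical for Qwen-family models), not values confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization for QLoRA; nf4 and bfloat16 compute are assumed
# defaults, not confirmed by this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA hyperparameters from the card: r=64, alpha=128.
# target_modules is an assumption (typical attention/MLP projections).
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
# After training, the adapters would be merged back into the base weights,
# e.g. via model.merge_and_unload(), producing this repository's checkpoint.
```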
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "deepkick/qwen3-4b-structured-sft-lora-v06-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
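
Loading the model is only half of a usage example; the snippet below shows one way to run inference with the tokenizer's chat template. The prompt text is a placeholder, and `max_new_tokens=512` is an arbitrary choice, not a recommendation from this card.

```python
messages = [{"role": "user", "content": "Explain LoRA merging in one paragraph."}]

# Build the chat-formatted prompt and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```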