# qwen3-4b-advanced-sft-v18-merged
This repository provides a merged model fine-tuned from Qwen/Qwen3-4B-Instruct-2507.
- Dataset: `deepkick/sft_alfworld_v5_action_format`
- Method: LoRA SFT (adapter merged into the base model)
- Max sequence length: 4096
- Epochs: 1
- Learning rate: 1e-6
- LoRA: r=32, alpha=128
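As a hedged sketch, the hyperparameters above could map onto a `peft.LoraConfig` roughly as follows. Only `r`, `alpha`, the epoch count, and the learning rate are stated on this card; the target modules, dropout, and task type below are assumptions typical for SFT on a Qwen-style decoder, not details from the actual training run.

```python
from peft import LoraConfig

# r and lora_alpha come from the card above; everything else is an
# assumed, conventional setup and may differ from the real run.
lora_config = LoraConfig(
    r=32,
    lora_alpha=128,  # effective update scaling: alpha / r = 4
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.0,  # assumption
    task_type="CAUSAL_LM",
)
```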
## vLLM Compatibility
- The LoRA adapter has been merged into the base weights, so no adapter loading is required.
- The tokenizer vocabulary is unmodified from the base model.
- Intended for AgentBench Advanced evaluation.
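Because the adapter is already merged, the checkpoint can be served with vLLM like any ordinary model; a minimal sketch (the flag beyond the model id is illustrative):

```shell
# No --enable-lora / --lora-modules flags are needed: the adapter
# weights are already folded into the checkpoint.
vllm serve deepkick/qwen3-4b-advanced-sft-v18-merged \
    --max-model-len 4096
```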
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "deepkick/qwen3-4b-advanced-sft-v18-merged"

# Load the tokenizer and the merged model; device_map="auto" places
# the weights across available devices (requires `accelerate`).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)
```
## Data / License Notes
- Dataset: `deepkick/sft_alfworld_v5_action_format`. Please refer to the dataset page for its license and terms.
- Base model: Qwen/Qwen3-4B-Instruct-2507. Please comply with the base model's original terms of use.