# qwen3-4b-advanced-sft-v08-merged
This repository provides a merged model fine-tuned from Qwen/Qwen3-4B-Instruct-2507.
- Dataset: u-10bei/sft_alfworld_trajectory_dataset_v5
- Method: LoRA SFT (merged into base model)
- Max sequence length: 4096
- Epochs: 2
- Learning rate: 1e-06
- LoRA: r=32, alpha=128
## vLLM Compatibility
- The LoRA adapter has been merged into the base weights, so no adapter loading is required.
- The tokenizer vocabulary is unmodified from the base model.
- Intended for AgentBench Advanced evaluation.
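Because the adapter is merged and the tokenizer is unmodified, the checkpoint should load in vLLM the same way as the base model. A minimal serving sketch (exact flags depend on your vLLM version and hardware):

```shell
# Serve the merged checkpoint with the OpenAI-compatible vLLM server.
# --max-model-len matches the training sequence length listed above.
vllm serve deepkick/qwen3-4b-advanced-sft-v08-merged \
  --max-model-len 4096
```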
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "deepkick/qwen3-4b-advanced-sft-v08-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",
)
```
## Data / License Notes
- Dataset: u-10bei/sft_alfworld_trajectory_dataset_v5. Refer to the dataset page for its license and terms.
- Base model: Qwen/Qwen3-4B-Instruct-2507. Use of this model must comply with the base model's original terms of use.