# qwen25-7b-advanced-sft-v0-merged
This repository provides a merged model fine-tuned from Qwen/Qwen2.5-7B-Instruct.
- Dataset: u-10bei/sft_alfworld_trajectory_dataset_v5
- Method: LoRA SFT (merged into base model)
- Max sequence length: 4096
- Epochs: 2
- Learning rate: 2e-06
- LoRA: r=32, alpha=128
## vLLM Compatibility
- LoRA adapter has been merged
- No tokenizer vocabulary modification
- Intended for AgentBench Advanced evaluation
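Because the adapter is already merged and the tokenizer vocabulary is unchanged, the model should load in vLLM like any standard Qwen2.5 checkpoint. A typical serve invocation might look like the following (the flag is optional and illustrative; `--max-model-len` here simply mirrors the 4096 training sequence length):

```shell
# Serve the merged model behind vLLM's OpenAI-compatible API.
# --max-model-len matches the 4096 max sequence length used in training.
vllm serve deepkick/qwen25-7b-advanced-sft-v0-merged --max-model-len 4096
```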
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "deepkick/qwen25-7b-advanced-sft-v0-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)
```
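The snippet above only loads the model; to query it, format your messages with the tokenizer's chat template (`tokenizer.apply_chat_template`). For illustration, Qwen2.5-family models use a ChatML-style format, sketched here by hand (an assumption inherited from the base model; in practice, always prefer `apply_chat_template`):

```python
# Hand-rolled ChatML-style prompt, mirroring what apply_chat_template
# produces for Qwen2.5-family models (assumption: this merged model
# keeps the base model's chat template).
def build_chatml_prompt(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Open the drawer."},
])
print(prompt)
```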
## Data / License Notes
- Dataset: u-10bei/sft_alfworld_trajectory_dataset_v5. Refer to the dataset page for its license and terms.
- Base model: Qwen/Qwen2.5-7B-Instruct. Please comply with the base model's original terms of use.