# SELEE/qwen3-4b-agent-v3-lora-v7
This repository provides a merged model fine-tuned from SELEE/qwen3-4b-agent-v3-lora-v2 using LoRA + Unsloth.
Because the LoRA weights have been merged, this repository contains the full model weights. The model can be loaded directly without needing to load the base model and adapter separately.
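Merging folds the low-rank update into the frozen base weights as W' = W + (alpha / r) · B A. The following is a minimal numerical sketch of that arithmetic with made-up toy matrices, not the actual model weights (the real merge is done per layer by the training framework):

```python
# Toy illustration of LoRA weight merging: W' = W + (alpha / r) * (B @ A).
# Shapes and values are hypothetical; real layers are far larger.

def matmul(B, A):
    """Multiply two matrices given as lists of rows."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def merge_lora(W, A, B, r, alpha):
    """Fold the LoRA update into the frozen weight matrix."""
    scale = alpha / r
    delta = matmul(B, A)  # (out, in) low-rank update
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

# 2x2 "frozen" weight with a rank-1 adapter. Note alpha == r here,
# so the scale alpha/r is 1.0 -- the same ratio as this repo's
# r=128, alpha=128 configuration.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, -0.5]]   # (r, in)
B = [[2.0], [0.0]]  # (out, r)
merged = merge_lora(W, A, B, r=1, alpha=1)
print(merged)  # [[2.0, -1.0], [0.0, 1.0]]
```

After the merge, the adapter matrices are discarded and the checkpoint behaves like an ordinary dense model, which is why no `peft` adapter loading is needed below.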
## Training Objective
The primary objective of this fine-tuning phase is to further improve performance on DBBench.
To achieve this, the model was trained on SELEE/dbbench_trajectories_teacher_v2, a set of trajectories generated with Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 as the teacher model on the original u-10bei/dbbench_sft_dataset_react_v4 dataset.
To mitigate catastrophic forgetting, a portion of the ALFWorld dataset (u-10bei/sft_alfworld_trajectory_dataset_v5) was included as replay data during training.
Loss is applied to all assistant turns in the multi-turn trajectory, enabling the model to learn environment observation, action selection, tool use, and recovery from errors.
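Applying loss only to assistant turns is typically implemented by masking the labels of all other tokens; by PyTorch convention, label `-100` is ignored by cross-entropy. A minimal sketch with hypothetical token spans (the actual training pipeline's tokenization is not shown here):

```python
IGNORE_INDEX = -100  # ignored by PyTorch cross-entropy by convention

def build_labels(token_ids, roles):
    """Copy assistant tokens into labels; mask everything else.

    token_ids: flat list of token ids for the whole trajectory.
    roles: parallel list naming the turn each token belongs to
           ("system", "user", "tool", or "assistant").
    """
    return [tok if role == "assistant" else IGNORE_INDEX
            for tok, role in zip(token_ids, roles)]

# Toy multi-turn trajectory: user -> assistant (action) -> tool
# (observation) -> assistant (error recovery). Only the assistant
# tokens contribute to the loss.
token_ids = [11, 12, 21, 22, 31, 41, 42]
roles     = ["user", "user", "assistant", "assistant",
             "tool", "assistant", "assistant"]
labels = build_labels(token_ids, roles)
print(labels)  # [-100, -100, 21, 22, -100, 41, 42]
```

Masking this way means observations and tool outputs condition the model's predictions without being prediction targets themselves.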
## Training Configuration
- Base model: SELEE/qwen3-4b-agent-v3-lora-v2
- Method: LoRA (adapters merged into the full-precision base)
- Max sequence length: 4096
- Epochs: 2
- Learning rate: 1e-06
- LoRA: r=128, alpha=128
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# The LoRA weights are already merged into this checkpoint,
# so no PEFT adapter loading is required.
model_id = "SELEE/qwen3-4b-agent-v3-lora-v7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example prompt (illustrative only)
messages = [{"role": "user", "content": "List all tables in the database."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Sources & Terms (IMPORTANT)
- Training data: u-10bei/dbbench_sft_dataset_react_v4, SELEE/dbbench_trajectories_teacher_v2, and u-10bei/sft_alfworld_trajectory_dataset_v5 (replay)
- Dataset license: MIT. The datasets are used and distributed under the terms of the MIT License.
- Compliance: users must comply with the MIT License (including retention of the copyright notice) and with the base model's original terms of use.
## Model tree for SELEE/qwen3-4b-agent-v3-lora-v7
- Base model: Qwen/Qwen3-4B-Instruct-2507