Qwen2.5-7B-agent-trajectory-lora_6_2

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen2.5-7B-Instruct with Unsloth on an A100 80GB GPU.

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Training Objective

This adapter is trained to improve multi-turn agent task performance on ALFWorld (household tasks) and DBBench (database operations).

Loss is applied to all assistant turns in each multi-turn trajectory, enabling the model to learn to interpret environment observations, select actions, use tools, and recover from errors.
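A minimal sketch of this assistant-only loss masking, assuming the ChatML framing Qwen2.5 uses; the helper below is illustrative, not the actual preprocessing code:

import torch

IGNORE_INDEX = -100  # labels with this value are excluded from the cross-entropy loss

def mask_non_assistant_turns(tokenizer, messages):
    # Build one token sequence per trajectory; supervise assistant spans only.
    input_ids, labels = [], []
    for msg in messages:
        # ChatML framing used by Qwen2.5: <|im_start|>role\ncontent<|im_end|>\n
        turn = f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
        ids = tokenizer(turn, add_special_tokens=False)["input_ids"]
        input_ids.extend(ids)
        if msg["role"] == "assistant":
            labels.extend(ids)  # learn actions, tool calls, error recovery
        else:
            labels.extend([IGNORE_INDEX] * len(ids))  # user prompts, observations
    return torch.tensor(input_ids), torch.tensor(labels)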

Training Configuration (A100 80GB Optimized)

  • Base model: Qwen/Qwen2.5-7B-Instruct
  • GPU: A100 80GB (TF32 enabled, Flash Attention 2)
  • Method: LoRA (full precision base, bf16)
  • Datasets: ALFWorld v5 + DBBench v4 (combined ~5,500 samples)
  • Max sequence length: 8192
  • Epochs: 1
  • Learning rate: 2e-06
  • LoRA: r=64, alpha=128, dropout=0.05
  • Effective batch size: 8 (per-device) x 2 (gradient accumulation) = 16
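A sketch of this configuration with Unsloth and TRL is below. It is a reconstruction under stated assumptions, not the actual training script: the target modules, the SFTConfig field choices, and the already-loaded dataset variable are all assumptions.

from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
import torch

# Load the bf16 base model (no quantization) with the 8192-token context used in training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-7B-Instruct",
    max_seq_length=8192,
    dtype=torch.bfloat16,
    load_in_4bit=False,
)

# Attach LoRA with the hyperparameters listed above; the target modules are assumed.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,  # combined ALFWorld v5 + DBBench v4 (~5,500 samples), assumed loaded
    args=SFTConfig(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=2,  # 8 x 2 = effective batch size 16
        learning_rate=2e-6,
        num_train_epochs=1,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()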

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base    = "Qwen/Qwen2.5-7B-Instruct"
adapter = "legoskier/Qwen2.5-7B-agent-trajectory-lora_6_2"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # requires flash-attn; drop on unsupported GPUs
)
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
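
A short generation example with the adapter attached (the prompt is illustrative):

messages = [
    {"role": "user", "content": "You are in a kitchen. Put a clean mug on the coffeemachine."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

For standalone deployment, model.merge_and_unload() folds the adapter into the base weights so PEFT is no longer needed at inference time.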

Sources & Terms (IMPORTANT)

Training data:

  • u-10bei/sft_alfworld_trajectory_dataset_v5
  • u-10bei/dbbench_sft_dataset_react_v4

Dataset license: MIT. Users must comply with the MIT license and the base model's original terms of use.

