qwen2.5-7b-instruct-sft-v1

This repository provides a full merged model (no separate adapter required) produced by supervised fine-tuning (SFT) for task-oriented instruction following.

Training Objective

Improve instruction following, action consistency, and response reliability in practical workflows.

Training Configuration

  • Method: SFT (TRL SFTTrainer + Transformers, full-model)
  • Max sequence length: 1024
  • Max steps: 6
  • Epochs: 1
  • Learning rate: 2e-6
  • Per-device train batch size: 1
  • Gradient accumulation steps: 8
  • Effective global batch size: 8
  • Training schedule: Step-based (when max_steps is set, it overrides num_train_epochs in the Trainer).
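The effective global batch size listed above follows from the other settings; a one-line check, assuming a single device (the card does not state the device count, but the listed numbers imply one):

```python
# Effective global batch size = per-device batch × gradient accumulation × device count
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 1  # assumption: single GPU; not stated in the card

effective_global_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
# → 8, matching the configuration above
```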

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uchkw/qwen2.5-7b-instruct-sft-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
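A full chat round-trip can be sketched as below. This is a minimal sketch, not the card's official usage: `build_messages` and `generate_reply` are hypothetical helper names, the prompts are illustrative, and the generation settings are assumptions.

```python
def build_messages(user_prompt, system_prompt="You are a helpful assistant."):
    # Chat schema consumed by tokenizer.apply_chat_template
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate_reply(user_prompt, model_id="uchkw/qwen2.5-7b-instruct-sft-v1",
                   max_new_tokens=256):
    # Heavyweight part: downloads and loads the 7B checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Render the chat with the model's own template, ending at the
    # assistant turn so the model continues from there.
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```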

Training Data / Sources & License (IMPORTANT)

  • Datasets: original synthetic data generated with rule-based methods
  • Compliance: Users must comply with the base model's terms of use.

Training Plan Summary

  • Objective: Push DB score upward aggressively while preserving ALF above baseline.
  • Strategy:
    • Increase DB rows strongly and keep action rows moderate.
    • Use stronger SQL stage and very light action stage.
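The staged data mix described above could be sketched as follows. All row counts here are illustrative placeholders, not the actual training mix, and the stage names are assumptions:

```python
# Hypothetical two-stage data mix reflecting the plan above:
# a stronger SQL stage (DB rows dominate) and a very light action stage.
stages = {
    "sql":    {"db_rows": 8000, "action_rows": 1000},  # illustrative counts
    "action": {"db_rows": 500,  "action_rows": 1500},  # illustrative counts
}

def mix_ratios(stage):
    """Return each row type's share of the stage's total rows."""
    total = sum(stage.values())
    return {name: count / total for name, count in stage.items()}

sql_ratios = mix_ratios(stages["sql"])
```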
Model Details

  • Base model: Qwen/Qwen2.5-7B
  • Model size: 8B parameters (Safetensors)
  • Tensor type: BF16