# sft-dpo-sft-qwen-cot-merged

This repository provides a merged model fine-tuned from kikansha-Tomasu/sft-dpo-qwen-cot-merged using QLoRA (4-bit, via Unsloth). The LoRA adapter has been merged into the base model, so this repository contains the full model weights and can be used directly, without loading the base model separately.
## Training Objective

The adapter was trained to improve structured-output accuracy (JSON, YAML, XML, TOML, CSV). The loss is applied only to the final assistant output; intermediate reasoning (chain-of-thought) tokens are masked out.
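The masking described above can be sketched as follows. This is a minimal illustration, not the actual training code: it assumes the position where the final answer begins is already known, and sets every earlier label to `-100`, the index that PyTorch's cross-entropy loss ignores.

```python
def mask_cot_labels(input_ids, answer_start):
    """Return labels that train only on tokens from answer_start onward.

    -100 is the ignore_index used by PyTorch cross-entropy, so masked
    positions (prompt + chain-of-thought) contribute nothing to the loss.
    """
    labels = [-100] * len(input_ids)
    labels[answer_start:] = input_ids[answer_start:]
    return labels

ids = [101, 5, 6, 7, 200, 9, 10]   # toy token ids: prompt + CoT, then answer
labels = mask_cot_labels(ids, answer_start=4)
print(labels)  # [-100, -100, -100, -100, 200, 9, 10]
```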
## Training Configuration
- Base model: kikansha-Tomasu/sft-dpo-qwen-cot-merged
- Method: QLoRA (4-bit)
- Max sequence length: 512
- Epochs: 1
- Learning rate: 1e-06
- LoRA: r=64, alpha=128
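As a quick back-of-envelope check of the LoRA hyperparameters above: the adapter update is `W + (alpha / r) * B @ A`, so r=64 with alpha=128 gives an effective scaling of 2.0. The layer dimensions below are hypothetical examples, not values read from the model config.

```python
r, alpha = 64, 128
scaling = alpha / r                # adapter update is scaled by alpha / r

# LoRA adds two low-rank matrices per adapted linear layer:
# A is (r x d_in) and B is (d_out x r).
d_in = d_out = 2048                # example hidden size (hypothetical)
lora_params = r * (d_in + d_out)   # trainable params added to this layer

print(scaling)      # 2.0
print(lora_params)  # 262144
```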
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "kikansha-Tomasu/sft-dpo-sft-qwen-cot-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Test inference
prompt = "Your question here"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,       # return a dict so it can be unpacked into generate()
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
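Since the model targets structured output, generations often need to be parsed downstream. The helper below is a hedged sketch (not part of this repository) for pulling the first JSON object out of raw model text, tolerating an optional fenced ```json block, which is a common generation pattern:

```python
import json
import re

def extract_json(text):
    """Extract and parse the first JSON object from model output.

    Prefers a fenced ```json block if present; otherwise falls back
    to the outermost brace span in the raw text.
    """
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if fenced:
        candidate = fenced.group(1)
    else:
        candidate = text[text.find("{"): text.rfind("}") + 1]
    return json.loads(candidate)

raw = 'Here you go:\n```json\n{"name": "Alice", "age": 30}\n```'
print(extract_json(raw))  # {'name': 'Alice', 'age': 30}
```

Note that this simple brace-matching fallback assumes a single, non-nested-in-prose JSON object; malformed output still raises `json.JSONDecodeError`, which callers should handle.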
## Sources & Terms (IMPORTANT)

- Training data: daichira/structured-5k-mix-sft
- Dataset license: MIT License. The dataset is used and distributed under the terms of the MIT License.
- Compliance: users must comply with the MIT License (including preservation of the copyright notice) and with the base model's original terms of use.
## Model tree for kikansha-Tomasu/sft-dpo-sft-qwen-cot-merged

- Base model: Qwen/Qwen3-4B-Instruct-2507