# Qwen3.5-lora-sft-v5-1-64k
This repository contains a LoRA adapter for Qwen/Qwen3.5-397B-A17B, trained with LLaMA-Factory on the amdpilot_v5_1 SFT dataset.
This is an adapter-only release. You need the base model Qwen/Qwen3.5-397B-A17B to use it.
## Key training settings

- Fine-tuning method: LoRA
- LoRA rank / alpha: 32 / 64
- Context window: 65536 tokens
- Packing: true
- Neat packing: false
- Precision: bf16
- Distributed setup: 8x AMD MI355X
- Epochs: 10
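For readers who want to reproduce a similar run, the settings above map onto a LLaMA-Factory YAML config roughly like the sketch below. This is an illustrative reconstruction, not the actual training file: the key names follow LLaMA-Factory's SFT schema, but values not listed above (dataset registration, learning rate, batch size, etc.) are omitted or assumed.

```yaml
### model
model_name_or_path: Qwen/Qwen3.5-397B-A17B
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 32
lora_alpha: 64

### dataset (assumes amdpilot_v5_1 is registered in dataset_info.json)
dataset: amdpilot_v5_1
cutoff_len: 65536
packing: true
neat_packing: false

### train
num_train_epochs: 10.0
bf16: true
```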
## Final metrics

- Final train loss: 0.0630
- Final eval loss: 0.1331
- Train runtime: 47,396.77 s (~13.17 h)
## Eval trajectory
| Step | Epoch | Eval loss |
|---|---|---|
| 10 | 1.7273 | 0.1846 |
| 20 | 3.3636 | 0.1579 |
| 30 | 5.0 | 0.1417 |
| 40 | 6.7273 | 0.1357 |
| 50 | 8.3636 | 0.1336 |
| 60 | 10.0 | 0.1331 |
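The trajectory above amounts to roughly a 28% relative reduction in eval loss between the first and last checkpoints, with most of the gain landing in the first half of training. A quick sanity check:

```python
# Eval-loss trajectory from the table above (step -> eval loss).
eval_losses = {10: 0.1846, 20: 0.1579, 30: 0.1417, 40: 0.1357, 50: 0.1336, 60: 0.1331}

first, last = eval_losses[10], eval_losses[60]
relative_drop = (first - last) / first
print(f"Relative eval-loss reduction: {relative_drop:.1%}")  # -> 27.9%
```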
## Dataset coverage note

On the current amdpilot_v5_1 training split, a 65,536-token context window fully covers 82 of 89 samples (92.13%), substantially better coverage than the earlier 32,768-token setting.
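The coverage figure is just the fraction of samples whose tokenized length fits inside the context window, as in the sketch below. The lengths here are hypothetical stand-ins; the real numbers come from tokenizing the amdpilot_v5_1 split.

```python
def context_coverage(sample_token_lengths, context_window):
    """Return (count, fraction) of samples that fit entirely in the context window."""
    fits = sum(1 for n in sample_token_lengths if n <= context_window)
    return fits, fits / len(sample_token_lengths)

# Hypothetical lengths; the actual split has 82 of 89 samples under 65,536 tokens.
lengths = [40_000] * 82 + [90_000] * 7
fits, frac = context_coverage(lengths, 65_536)
print(f"{fits}/{len(lengths)} samples ({frac:.2%})")  # -> 82/89 samples (92.13%)
```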
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen3.5-397B-A17B"
adapter_id = "JinnP/Qwen3.5-lora-sft-v5-1-64k"

# The tokenizer (including the chat template) ships with the adapter repo.
tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```
## Files

- `adapter_model.safetensors`: LoRA adapter weights
- `adapter_config.json`: PEFT adapter config
- `tokenizer.json` / `tokenizer_config.json` / `chat_template.jinja`: tokenizer assets
- `all_results.json` / `eval_results.json` / `train_results.json`: training metrics