# qwen35-27b-stage1-cpt
This model was fine-tuned using SFT.
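This repository contains a LoRA adapter rather than merged full weights. A minimal inference sketch, assuming the base model path from the training config below and using a placeholder id for this adapter repo (adjust both to your local or Hub locations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen3.5-27B-Derestricted"   # base model from the training config below
ADAPTER = "qwen35-27b-stage1-cpt"   # placeholder: this adapter repo / run directory

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,  # training precision was bf16
    device_map="auto",
)
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, ADAPTER)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```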
## Training procedure

### Hyperparameters
| Parameter | Value |
|---|---|
| Learning rate | 5e-06 |
| LR scheduler | constant_with_warmup |
| Per-device batch size | 1 |
| Gradient accumulation | 8 |
| Effective batch size | 8 |
| Epochs | 1 |
| Max sequence length | 6144 |
| Optimizer | paged_adamw_8bit |
| Weight decay | 0.01 |
| Warmup ratio | 0.03 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
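Since the run splits the 27B model across two GPUs with model parallelism (see `max_memory` in the training config below) there is no data-parallel multiplier, so the effective batch size is simply per-device batch × gradient accumulation = 1 × 8 = 8.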
### LoRA configuration
| Parameter | Value |
|---|---|
| Rank (r) | 64 |
| Alpha | 64 |
| Target modules | down_proj, gate_proj, in_proj_a, in_proj_b, in_proj_qkv, in_proj_z, k_proj, o_proj, out_proj, q_proj, up_proj, v_proj |
| rsLoRA | yes |
| Quantization | 4-bit (nf4) |
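A sketch of how the table above maps onto `peft` and `bitsandbytes` objects; the actual training stack (Loft, per the framework versions below) may construct these differently:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base weights (QLoRA-style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: matches the bf16 training precision
)

# LoRA adapter exactly as tabulated above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    use_rslora=True,
    target_modules=[
        "in_proj_qkv", "in_proj_z", "in_proj_a", "in_proj_b", "out_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

With rsLoRA enabled, the adapter scaling is alpha / sqrt(r) = 64 / 8 = 8 rather than the standard alpha / r = 1, which is worth keeping in mind if you merge or re-scale the adapter.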
### Dataset statistics
| Dataset | Samples | Total tokens | Trainable tokens |
|---|---|---|---|
| rpDungeon/cpt-combined-filtered | 19,080 | 62,129,715 | 62,129,715 |
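Total and trainable token counts are identical, consistent with a continued-pretraining objective where the NLL loss is taken over every token rather than masking a prompt. That works out to roughly 3,256 tokens per sample on average, comfortably within the 6144-token max sequence length.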
### Training config
```yaml
model_name_or_path: Qwen3.5-27B-Derestricted
output_dir: runs/qwen35-27b-stage1-cpt
attn_implementation: flash_attention_2
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_cce: true
model_parallel: true
max_memory:
  0: 18GiB
  1: 18GiB
chunked_mlp: true
chunked_mlp_chunks: 8
max_length: 6144
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 64
lora_alpha: 64
lora_dropout: 0.0
use_rslora: true
lora_target_modules:
  - in_proj_qkv
  - in_proj_z
  - in_proj_a
  - in_proj_b
  - out_proj
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
data_config: configs/qwen35-27b-stage1-cpt/data.yaml
prepared_dataset: runs/qwen35-27b-stage1-cpt/prepared
learning_rate: 5.0e-06
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.03
weight_decay: 0.01
max_grad_norm: 1.0
optim: paged_adamw_8bit
num_train_epochs: 1
logging_steps: 1
disable_tqdm: false
save_strategy: steps
save_steps: 250
save_total_limit: 3
report_to: none
run_name: qwen35-27b-stage1-cpt
```
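Most of these keys map one-to-one onto `transformers.TrainingArguments` fields; below is a minimal sketch under that assumption. The custom keys (`use_cce`, `chunked_mlp`, `model_parallel`, `max_memory`, and the PEFT/quantization block) belong to the Loft training stack and are omitted here:

```python
from transformers import TrainingArguments

# Hyperparameters copied verbatim from the YAML above.
args = TrainingArguments(
    output_dir="runs/qwen35-27b-stage1-cpt",
    bf16=True,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5.0e-06,
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.03,
    weight_decay=0.01,
    max_grad_norm=1.0,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
    logging_steps=1,
    save_strategy="steps",
    save_steps=250,
    save_total_limit=3,
    report_to="none",
    run_name="qwen35-27b-stage1-cpt",
)
```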
### Data config
```yaml
datasets:
  - path: rpDungeon/cpt-combined-filtered
    split: train
```
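The dataset path looks like a Hugging Face Hub id, so loading it for inspection would be (assuming the repo is accessible under that name):

```python
from datasets import load_dataset

# train split: 19,080 samples, ~62.1M tokens (see dataset statistics above)
ds = load_dataset("rpDungeon/cpt-combined-filtered", split="train")
print(ds)
```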
### Framework versions
- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.2.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2