---
library_name: transformers
pipeline_tag: text-generation
tags:
- merlina
- grimoire
- text-generation
- orpo
datasets:
- hemlang/Hemlock2-DPO
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
---

![image/png](https://huggingface.co/datasets/nbeerbower/cover-images/resolve/main/hemlock_kawaii.png)

# Hemlock2-Coder-7B

A code-focused text-generation model fine-tuned from `Qwen/Qwen2.5-Coder-7B-Instruct` with ORPO on the `hemlang/Hemlock2-DPO` preference dataset, trained with Merlina.

## Training Configuration

| Parameter | Value |
|-----------|-------|
| Training Mode | ORPO |
| Base Model | `Qwen/Qwen2.5-Coder-7B-Instruct` |
| Learning Rate | 9e-05 |
| Epochs | 2 |
| Batch Size | 2 |
| Gradient Accumulation | 8 |
| Effective Batch Size | 16 |
| Max Sequence Length | 2048 |
| Optimizer | paged_adamw_8bit |
| LR Scheduler | cosine |
| Warmup Ratio | 0.05 |
| Weight Decay | 0.01 |
| Max Grad Norm | 0.3 |
| Seed | 42 |
| Beta | 0.1 |
| Max Prompt Length | 1024 |
| LoRA Rank (r) | 128 |
| LoRA Alpha | 64 |
| LoRA Dropout | 0.05 |
| Target Modules | up_proj, down_proj, gate_proj, k_proj, q_proj, v_proj, o_proj |
| Quantization | 4-bit (NF4) |
| GPU | NVIDIA RTX A6000 |

---

![Trained with Merlina](https://raw.githubusercontent.com/Schneewolf-Labs/Merlina/refs/heads/main/frontend/madewithmerlina_smol.png)

[Merlina on GitHub](https://github.com/Schneewolf-Labs/Merlina)
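For reference, the hyperparameters in the table above can be reconstructed as a TRL ORPO + PEFT training setup. This is a hedged sketch, not the actual training script (which was produced by Merlina): the `output_dir`, dataset split, and bfloat16 compute dtype are assumptions, while the numeric values mirror the table.

```python
# Sketch of an ORPO + QLoRA setup matching the table above.
# Assumptions: output_dir, train split name, and bf16 compute dtype.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "Qwen/Qwen2.5-Coder-7B-Instruct"

# 4-bit NF4 quantization, per the table (compute dtype assumed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter: r=128, alpha=64, dropout=0.05 on the listed projections
peft_config = LoraConfig(
    r=128,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["up_proj", "down_proj", "gate_proj",
                    "k_proj", "q_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

orpo_args = ORPOConfig(
    output_dir="hemlock2-coder-7b",  # assumed name
    beta=0.1,
    learning_rate=9e-5,
    num_train_epochs=2,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size: 2 * 8 = 16
    max_length=2048,
    max_prompt_length=1024,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    max_grad_norm=0.3,
    seed=42,
)

model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=load_dataset("hemlang/Hemlock2-DPO", split="train"),
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

Note the effective batch size of 16 in the table follows from 2 samples per device times 8 gradient-accumulation steps on a single RTX A6000.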