# DPO-Shift: Shifting the Distribution of Direct Preference Optimization
Paper: [arXiv:2502.07599](https://arxiv.org/abs/2502.07599)
This model accompanies the preprint *DPO-Shift: Shifting the Distribution of Direct Preference Optimization*. Please refer to our repository for more details.
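As a rough sketch of the idea (following the preprint's description, not the authors' released code), DPO-Shift down-weights the rejected response's implicit reward inside the standard DPO logit by a factor f(λ) ≤ 1, so that standard DPO is recovered at λ = 1. In the sketch below, the function name, argument layout, and β value are illustrative assumptions; λ = 0.95 matches the `DPO Lambda` column in the training results further down.

```python
import torch
import torch.nn.functional as F

def dpo_shift_loss(policy_chosen_logps: torch.Tensor,
                   policy_rejected_logps: torch.Tensor,
                   ref_chosen_logps: torch.Tensor,
                   ref_rejected_logps: torch.Tensor,
                   beta: float = 0.1,      # assumed; not reported on this card
                   lam: float = 0.95) -> torch.Tensor:
    """Minimal sketch of the DPO-Shift objective (names are assumptions)."""
    # Implicit rewards, as in standard DPO: beta * log(pi_theta / pi_ref).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # DPO-Shift: scale the rejected reward by f(lambda) <= 1 inside the
    # Bradley-Terry logit; lam = 1.0 reduces this to vanilla DPO.
    logits = chosen_rewards - lam * rejected_rewards
    return -F.logsigmoid(logits).mean()
```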
This model is a fine-tuned version of [NoManDeRY/DPO-Shift-Qwen-2-7B-UltraChat200K-SFT](https://huggingface.co/NoManDeRY/DPO-Shift-Qwen-2-7B-UltraChat200K-SFT) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset. Its results on the evaluation set are reported in the training results table below.
## Model description

More information needed
## Intended uses & limitations

More information needed
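Until the intended-use details are filled in, the checkpoint can be exercised with a standard 🤗 Transformers chat-style generation loop. The repo id below is a placeholder for wherever this checkpoint is hosted, and the generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-checkpoint"  # placeholder: substitute this model's Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

messages = [{"role": "user",
             "content": "Explain direct preference optimization in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```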
## Training and evaluation data

More information needed
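For reference, the preference data named above can be pulled directly from the Hub. The `train_prefs` split name follows the dataset card and is worth verifying:

```python
from datasets import load_dataset

# Binarized preference pairs (prompt, chosen, rejected) used for DPO-style training.
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
print(ds[0]["prompt"])
```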
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | DPO Lambda | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6842 | 0.1047 | 50 | 0.6827 | 0.1496 | 0.1349 | 0.9500 | 0.6865 | 0.0147 | -293.2452 | -318.3515 | -1.2886 | -1.1643 |
| 0.6498 | 0.2093 | 100 | 0.6609 | 0.2769 | 0.2144 | 0.9500 | 0.7381 | 0.0625 | -285.2926 | -305.6219 | -1.2589 | -1.1266 |
| 0.6549 | 0.3140 | 150 | 0.6408 | 0.2288 | 0.0982 | 0.9500 | 0.7341 | 0.1307 | -296.9194 | -310.4279 | -1.3148 | -1.1846 |
| 0.6413 | 0.4186 | 200 | 0.6250 | 0.1619 | -0.0318 | 0.9500 | 0.7381 | 0.1938 | -309.9195 | -317.1192 | -1.2956 | -1.1761 |
| 0.6069 | 0.5233 | 250 | 0.6114 | 0.0886 | -0.1684 | 0.9500 | 0.7302 | 0.2570 | -323.5783 | -324.4538 | -1.2827 | -1.1695 |
| 0.611 | 0.6279 | 300 | 0.5997 | 0.0461 | -0.2674 | 0.9500 | 0.7381 | 0.3135 | -333.4765 | -328.6992 | -1.2575 | -1.1528 |
| 0.6151 | 0.7326 | 350 | 0.5924 | -0.0016 | -0.3586 | 0.9500 | 0.7222 | 0.3570 | -342.5963 | -333.4674 | -1.2391 | -1.1370 |
| 0.5997 | 0.8373 | 400 | 0.5898 | -0.0127 | -0.3884 | 0.9500 | 0.7222 | 0.3758 | -345.5813 | -334.5772 | -1.2248 | -1.1256 |
| 0.5708 | 0.9419 | 450 | 0.5890 | -0.0170 | -0.3976 | 0.9500 | 0.7302 | 0.3806 | -346.4959 | -335.0127 | -1.2190 | -1.1200 |
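As a reading aid for the table: `Rewards/margins` is simply `Rewards/chosen` minus `Rewards/rejected`, which can be checked against any row, e.g. step 50:

```python
# Step-50 row from the table above.
rewards_chosen, rewards_rejected = 0.1496, 0.1349
margin = rewards_chosen - rewards_rejected
assert abs(margin - 0.0147) < 1e-4  # matches Rewards/margins at step 50
```

Note that although both absolute rewards drift downward over training, the margin steadily widens, which is the quantity the preference loss optimizes.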
## Base model

[Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B)