Active filters: dpo
zake7749/Qwen3-Coder-Next-Open-Code-SFT • 49.4k rows • 884 downloads • 12 likes
jondurbin/gutenberg-dpo-v0.1 • 918 rows • 452 downloads • 162 likes
llamafactory/DPO-En-Zh-20k • 20k rows • 329 downloads • 99 likes
yusufbaykaloglu/helpsteer3-tr • 58.2k rows • 41 downloads • 1 like
Crownelius/Creative-Writing-KimiK2.5-Cleaned • 655 rows • 63 downloads • 1 like
datapointai/text-2-image-dpo-human-preferences-full • 20.8k rows • 56 downloads • 1 like
datapointai/text-2-image-dpo-human-preferences • 5k rows • 32 downloads • 1 like
datapointai/text-2-image-dpo-human-preferences-small • 2.5k rows • 41 downloads • 1 like
datapointai/text-2-video-ranking-human-preferences • 2.01k rows • 48 downloads • 1 like
OptiRefine-Official/python-optimization-dpo-sample • 1 like
d0rj/synthetic-instruct-gptj-pairwise-ru • 33.1k rows • 27 downloads • 2 likes
d0rj/rlhf-reward-datasets-ru • 81.4k rows • 42 downloads • 4 likes
d0rj/oasst1_pairwise_rlhf_reward-ru • 18.9k rows • 16 downloads • 1 like
xzuyn/mmlu-auxilary-train-dpo • 101k rows • 55 downloads • 2 likes
AlexHung29629/stack-exchange-paired-128K • 128k rows • 15 downloads • 1 like
flyingfishinwater/ultrafeedback_clean • 175k rows • 7 downloads • 2 likes
efederici/alpaca-vs-alpaca-orpo-dpo • 49.2k rows • 82 downloads • 7 likes
mlabonne/chatml_dpo_pairs • 12.9k rows • 60 downloads • 55 likes
argilla/ultrafeedback-binarized-preferences-cleaned • 60.9k rows • 8.23k downloads • 162 likes
ThWu/dpo_highest_n_random • 182k rows • 10 downloads • 2 likes
BramVanroy/orca_dpo_pairs_dutch • 11k rows • 28 downloads • 6 likes
argilla/ultrafeedback-multi-binarized-preferences-cleaned • 158k rows • 77 downloads • 7 likes
HuggingFaceH4/orca_dpo_pairs • 12.9k rows • 3.92k downloads • 30 likes
5CD-AI/Vietnamese-Intel-orca_dpo_pairs-gg-translated • 12.9k rows • 23 downloads • 35 likes