ReFusion-8B-ESPO-mu8

ReFusion 8B trained with ESPO (ELBO-based Sequence-level Policy Optimization) at mu=8. Achieves an 83.1% nonzero-reward rate and 0.394 average reward on the 124 test tasks (+22.6pp over SFT). Training is sequence-level RL with multi-epoch updates (mu=8 optimization epochs per rollout batch) and PPO-style clipping.
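The sketch below illustrates the general shape of such an update: a batch of rollouts is reused for mu optimization epochs with a clipped sequence-level importance ratio. Function and argument names, the clip threshold, and the exact ELBO-based log-probability surrogate are assumptions for illustration, not the paper's implementation.

import torch

def espo_update(seq_logprob_fn, prompts, completions, old_logps, advantages,
                optimizer, mu=8, clip_eps=0.2):
    """One ESPO-style step: reuse a batch of rollouts for mu optimization
    epochs with a PPO-style clipped sequence-level objective.

    seq_logprob_fn(prompts, completions) -> per-sequence log-prob estimate
    (for a diffusion LM, an ELBO-based surrogate). old_logps and advantages
    are detached tensors recorded at rollout time.
    """
    for _ in range(mu):  # mu passes over the same rollout batch
        new_logps = seq_logprob_fn(prompts, completions)
        ratio = torch.exp(new_logps - old_logps)  # sequence-level importance ratio
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
        loss = -torch.min(unclipped, clipped).mean()  # clipped surrogate (maximized)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()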

Paper

Concentrate or Collapse: When Reinforcement Learning Meets Diffusion Language Models for Web Planning

Training Details

  • Dataset: FormFactory (992 train / 124 val / 124 test tasks, 25 form types, 8 domains)
  • Infrastructure: NVIDIA L40S (ReFusion) / A10G (FS-DFM) on Modal.com
  • Framework: PyTorch + PEFT (LoRA/QLoRA); an illustrative adapter-config sketch follows this list
  • Training prompts: 50 (sequence-level), G=4 rollouts per prompt
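Since the card only states that PEFT with LoRA/QLoRA was used, the snippet below is a minimal sketch of attaching such an adapter; the rank, alpha, dropout, and target modules are illustrative guesses, not the values used for this checkpoint.

from peft import LoraConfig

# Hypothetical adapter settings (r, alpha, dropout, target modules are not
# specified in the card and are shown only as placeholders).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
# The adapter would then be attached with peft.get_peft_model(base_model, lora_config)
# before running the sequence-level RL loop sketched above.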

Citation

@article{brillian2026flowgrpo,
  title={Concentrate or Collapse: When Reinforcement Learning Meets Diffusion Language Models for Web Planning},
  author={Brillian, Muhammad Enrizky},
  year={2026}
}