---
base_model:
- GSAI-ML/LLaDA-8B-Base
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Edit-Based Refinement for Parallel Masked Diffusion Language Models
<p align="center">
<a href="https://huggingface.co/papers/2605.09603">📄 Paper</a> •
<a href="https://github.com/renhouxing/ME-DLM">🏠 Repo</a> •
<a href="https://huggingface.co/renhouxing/ME-DLM-Stage3">🤖 Models</a>
</p>
## Introduction
ME-DLM is a lightweight edit-based refinement framework for masked diffusion language models. It first generates a complete response through parallel diffusion decoding, then refines the output with minimal edit operations such as replacement, deletion, and insertion, conditioned on the full sequence. By using edit distance as deterministic training supervision, ME-DLM improves sequence-level consistency while preserving the decoding efficiency of diffusion models. Built on LLaDA, it achieves consistent gains on HumanEval and GSM8K while using only one-eighth of the total diffusion steps.
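The edit-distance supervision described above can be sketched as a standard Levenshtein backtrace that turns a drafted sequence and its target into a minimal list of replace/delete/insert operations. This is an illustrative sketch only; the function and names below are hypothetical and not taken from the ME-DLM codebase.

```python
def edit_operations(draft, target):
    """Return a minimal list of (op, position, token) edits turning
    `draft` into `target`, recovered from a Levenshtein DP table.

    Illustrative only: ME-DLM's actual supervision format may differ.
    """
    m, n = len(draft), len(target)
    # dp[i][j] = edit distance between draft[:i] and target[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if draft[i - 1] == target[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # replace
                                   dp[i - 1][j],      # delete
                                   dp[i][j - 1])      # insert
    # Backtrace from the corner to recover a minimal operation sequence.
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and draft[i - 1] == target[j - 1]:
            i, j = i - 1, j - 1                      # tokens match, no edit
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            ops.append(("replace", i - 1, target[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            ops.append(("delete", i - 1, None))
            i -= 1
        else:
            ops.append(("insert", i, target[j - 1]))
            j -= 1
    return list(reversed(ops))
```

Because the edit sequence is uniquely determined by the draft/target pair (up to tie-breaking), it gives the refiner a deterministic training signal rather than a sampled one.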
## Models
| Model | Checkpoint |
|:-------|:------------|
| ME-DLM Stage 1 | 🤗 [HF Link](https://huggingface.co/renhouxing/ME-DLM-Stage1) |
| ME-DLM Stage 2 | 🤗 [HF Link](https://huggingface.co/renhouxing/ME-DLM-Stage2) |
| ME-DLM Stage 3 | 🤗 [HF Link](https://huggingface.co/renhouxing/ME-DLM-Stage3) |
## Acknowledgments
We thank the authors of the following project, which inspired our work:
- [LLaDA](https://github.com/ML-GSAI/LLaDA)