How to use with SGLang

Install from pip and serve the model

# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "nvidia/Efficient-DLM-4B" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "nvidia/Efficient-DLM-4B",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
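
The endpoint is OpenAI-compatible, so you can also call it from Python. Below is a minimal sketch using the openai client (v1+), assuming the server above is running on localhost:30000; the api_key value is a placeholder, since the local server does not require one unless it was launched with an API key.

# Query the SGLang server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="nvidia/Efficient-DLM-4B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
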
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "nvidia/Efficient-DLM-4B" \
        --host 0.0.0.0 \
        --port 30000
# Call the server using curl, exactly as in the pip-install example above.
Efficient-DLM-4B

Quick Links

📄 Tech Report   |   🤗 Efficient-DLM-4B   |   🤗 Efficient-DLM-8B

Model Overview

Efficient-DLM-4B is a base diffusion language model designed for parallel generation. It is obtained by converting a pretrained autoregressive (AR) language model into a diffusion language model through efficient continuous pretraining, enabling faster decoding while preserving the task accuracy of strong AR models. Efficient-DLM features block-wise attention with clean-context conditioning for KV-cache-friendly decoding, as well as position-dependent token masking to reduce the training–test mismatch in diffusion generation. See our paper for more technical details.
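
To make the decoding scheme concrete, here is an illustrative sketch of block-wise diffusion decoding, not the model's actual implementation: tokens are generated block by block, each block is iteratively unmasked over several denoising steps, and already-finished blocks provide clean context that can be cached like an AR KV cache. All names below (model_logits, MASK_ID, decode_block) are hypothetical stand-ins.

import torch

VOCAB = 16          # hypothetical (tiny) vocabulary size
MASK_ID = VOCAB     # hypothetical mask id, outside the normal vocab

def model_logits(tokens):
    # Stand-in for the real network: random logits per position.
    return torch.randn(tokens.shape[0], VOCAB)

def decode_block(prefix, block_len=8, steps=4, threshold=0.3):
    # Decode one block, conditioning on the already-clean prefix.
    block = torch.full((block_len,), MASK_ID)
    for _ in range(steps):
        logits = model_logits(torch.cat([prefix, block]))[-block_len:]
        probs = torch.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)
        # Commit every still-masked position whose confidence clears the bar:
        # this is what allows several tokens to be decoded in parallel.
        commit = (block == MASK_ID) & (conf >= threshold)
        block[commit] = pred[commit]
        if (block != MASK_ID).all():
            break  # block finished early, in fewer than `steps` passes
    # Any position still masked after the step budget takes its best guess.
    block[block == MASK_ID] = pred[block == MASK_ID]
    return block

prefix = torch.randint(0, VOCAB, (8,))  # pretend prompt tokens
print(decode_block(prefix))

A full generation loop would call decode_block once per block, appending each finished block to the clean prefix; the released generate method exposes the corresponding knobs (steps, block_length, threshold) and handles caching internally.
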

[Figure: accuracy vs. throughput Pareto curve]

Environment

transformers>=4.52.2
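
To confirm the requirement is met before loading the model, a quick check (assumes the packaging module, which ships alongside pip, is available):

from packaging import version
import transformers

# Fail fast if the installed transformers is older than the card requires.
assert version.parse(transformers.__version__) >= version.parse("4.52.2")
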

Chat with Efficient-DLM-4B

from transformers import AutoModel, AutoTokenizer
import torch

repo_name = "nvidia/Efficient-DLM-4B"

# The model ships custom diffusion-decoding code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(repo_name, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_name, trust_remote_code=True)
model = model.cuda().to(torch.bfloat16)

user_input = input("User: ").strip()

prompt_ids = tokenizer(user_input, return_tensors="pt").input_ids.to(device="cuda")
out_ids, nfe = model.generate(
    prompt_ids,
    max_new_tokens=128,
    steps=128,            # diffusion denoising steps (fewer steps = faster decoding)
    block_length=32,      # block size for block-wise attention decoding
    shift_logits=False,
    temperature=0.7,
    threshold=0.9,        # confidence threshold for committing tokens in parallel
)

# Strip the prompt from the output before decoding the response.
response = tokenizer.batch_decode(out_ids[:, prompt_ids.shape[1]:], skip_special_tokens=True)[0]
print(f"Model: {response}")
print(f"[Num Function Evals (NFE) = {nfe}]")
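
Since the returned NFE tracks decoding cost directly, a quick way to explore the speed-accuracy tradeoff is to sweep steps while holding the other arguments fixed. This is a hypothetical sweep reusing model, tokenizer, and prompt_ids from above:

# Fewer denoising steps trade output quality for decoding speed.
for steps in (128, 64, 32):
    out_ids, nfe = model.generate(
        prompt_ids,
        max_new_tokens=128,
        steps=steps,
        block_length=32,
        shift_logits=False,
        temperature=0.7,
        threshold=0.9,
    )
    text = tokenizer.batch_decode(out_ids[:, prompt_ids.shape[1]:], skip_special_tokens=True)[0]
    print(f"steps={steps}, NFE={nfe}: {text[:60]!r}")
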

Citation

@article{fu2025efficient,
  title={Efficient-DLM: From autoregressive to diffusion language models, and beyond in speed},
  author={Fu, Yonggan and Whalen, Lexington and Ye, Zhifan and Dong, Xin and Diao, Shizhe and Liu, Jingyu and Wu, Chengyue and Zhang, Hao and Xie, Enze and Han, Song and others},
  journal={arXiv preprint arXiv:2512.14067},
  year={2025}
}