How to use nvidia/Efficient-DLM-8B with libraries, notebooks, and local apps.
- Libraries
- Transformers
How to use nvidia/Efficient-DLM-8B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nvidia/Efficient-DLM-8B", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nvidia/Efficient-DLM-8B", trust_remote_code=True)
model = AutoModel.from_pretrained("nvidia/Efficient-DLM-8B", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Note: this model's custom generate() returns (output_ids, nfe),
# as in the chat example further below.
out_ids, nfe = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out_ids[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nvidia/Efficient-DLM-8B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nvidia/Efficient-DLM-8B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "nvidia/Efficient-DLM-8B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
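Because the server exposes an OpenAI-compatible API, you can also query it from Python. Below is a minimal sketch using the openai client; the base URL and placeholder key match the server started above, and the same pattern works for the SGLang server shown further down (port 30000).

```python
# Minimal sketch: call the OpenAI-compatible endpoint from Python.
# Assumes `pip install openai` and the vLLM server from the step above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="nvidia/Efficient-DLM-8B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```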
- SGLang
How to use nvidia/Efficient-DLM-8B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "nvidia/Efficient-DLM-8B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "nvidia/Efficient-DLM-8B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "nvidia/Efficient-DLM-8B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "nvidia/Efficient-DLM-8B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Docker Model Runner
How to use nvidia/Efficient-DLM-8B with Docker Model Runner:
```shell
docker model run hf.co/nvidia/Efficient-DLM-8B
```
Efficient-DLM-8B
📄 [Tech Report](https://arxiv.org/abs/2512.14067) | 🤗 [Efficient-DLM-4B](https://huggingface.co/nvidia/Efficient-DLM-4B) | 🤗 [Efficient-DLM-8B](https://huggingface.co/nvidia/Efficient-DLM-8B)
Model Overview
Efficient-DLM-8B is a base diffusion language model designed for parallel generation. It converts pretrained autoregressive (AR) LMs into diffusion LMs through efficient continued pretraining, enabling faster decoding while preserving the task accuracy of strong AR models. Efficient-DLM features block-wise attention with clean-context conditioning for KV-cache-friendly decoding, as well as position-dependent token masking to reduce the training–test mismatch in diffusion generation. See our paper for more technical details.
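To make the decoding scheme concrete, here is a minimal toy sketch of block-wise diffusion decoding with confidence-threshold unmasking, the general pattern exposed by the generate() parameters below (block_length, threshold). All names are hypothetical and random logits stand in for the network; the real implementation ships with the checkpoint via trust_remote_code.

```python
import torch

MASK_ID = -1  # hypothetical mask id; never produced by argmax over the vocab

def toy_block_diffusion_decode(logits_fn, prompt_ids, max_new_tokens=128,
                               block_length=32, threshold=0.9):
    # Start with the prompt followed by fully masked new tokens.
    seq = torch.cat([prompt_ids,
                     torch.full((max_new_tokens,), MASK_ID, dtype=torch.long)])
    nfe = 0  # number of forward passes (function evaluations)
    for start in range(len(prompt_ids), len(seq), block_length):
        block = slice(start, min(start + block_length, len(seq)))
        # Decode this block while earlier blocks stay fixed ("clean context"),
        # which is what makes the KV cache reusable across blocks.
        while (seq[block] == MASK_ID).any():
            logits = logits_fn(seq)              # one forward pass
            nfe += 1
            conf, pred = logits[block].softmax(-1).max(-1)
            masked = seq[block] == MASK_ID
            # Commit all masked tokens whose confidence clears the threshold...
            accept = masked & (conf >= threshold)
            if not accept.any():
                # ...or at least the single most confident masked token.
                accept = masked & (conf == conf[masked].max())
            filled = seq[block].clone()
            filled[accept] = pred[accept]
            seq[block] = filled
    return seq, nfe

# Toy usage: random logits over a 100-token vocabulary stand in for the model.
prompt = torch.tensor([1, 2, 3, 4])
out, nfe = toy_block_diffusion_decode(lambda s: torch.randn(len(s), 100),
                                      prompt, max_new_tokens=16, block_length=8)
print(out.shape, nfe)
```

A lower threshold commits more tokens per forward pass, which is where the speedup over token-by-token AR decoding comes from.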
Environment
```
transformers>=4.52.2
```
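A minimal setup could look like the following (torch is also required by the example below):

```shell
pip install "transformers>=4.52.2" torch
```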
Chat with Efficient-DLM-8B
```python
from transformers import AutoModel, AutoTokenizer
import torch

repo_name = "nvidia/Efficient-DLM-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_name, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_name, trust_remote_code=True)
model = model.cuda().to(torch.bfloat16)

user_input = input("User: ").strip()
prompt_ids = tokenizer(user_input, return_tensors="pt").input_ids.to(device="cuda")

# generate() returns the output ids together with the number of
# function evaluations (NFE), i.e., forward passes used for decoding.
out_ids, nfe = model.generate(
    prompt_ids,
    max_new_tokens=128,
    steps=128,
    block_length=32,
    shift_logits=False,
    temperature=0.7,
    threshold=0.9,
)
response = tokenizer.batch_decode(out_ids[:, prompt_ids.shape[1]:], skip_special_tokens=True)[0]
print(f"Model: {response}")
print(f"[Num Function Eval (NFE)={nfe}]")
```
Citation
```bibtex
@article{fu2025efficient,
  title={Efficient-dlm: From autoregressive to diffusion language models, and beyond in speed},
  author={Fu, Yonggan and Whalen, Lexington and Ye, Zhifan and Dong, Xin and Diao, Shizhe and Liu, Jingyu and Wu, Chengyue and Zhang, Hao and Xie, Enze and Han, Song and others},
  journal={arXiv preprint arXiv:2512.14067},
  year={2025}
}
```