Instructions for using z-lab/gemma-4-31B-it-DFlash with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use z-lab/gemma-4-31B-it-DFlash with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="z-lab/gemma-4-31B-it-DFlash")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("z-lab/gemma-4-31B-it-DFlash")
model = AutoModel.from_pretrained("z-lab/gemma-4-31B-it-DFlash")
```
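For a quick end-to-end check, a minimal generation call with the pipeline above might look like the following sketch; the prompt and sampling parameters are illustrative, not recommendations:

```python
from transformers import pipeline

# Build a text-generation pipeline for the model (weights download on first use;
# a 31B model needs substantial GPU memory, so consider device_map="auto")
pipe = pipeline("text-generation", model="z-lab/gemma-4-31B-it-DFlash")

# Illustrative prompt and sampling settings
out = pipe("Once upon a time,", max_new_tokens=128, do_sample=True, temperature=0.5)
print(out[0]["generated_text"])
```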
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use z-lab/gemma-4-31B-it-DFlash with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "z-lab/gemma-4-31B-it-DFlash"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "z-lab/gemma-4-31B-it-DFlash",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
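Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the official `openai` client instead of curl. This is a sketch assuming the server above is running on localhost:8000; the `api_key` value is a placeholder, since vLLM does not require one by default:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Mirror the curl request above
completion = client.completions.create(
    model="z-lab/gemma-4-31B-it-DFlash",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```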
Use Docker
```bash
docker model run hf.co/z-lab/gemma-4-31B-it-DFlash
```
- SGLang
How to use z-lab/gemma-4-31B-it-DFlash with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "z-lab/gemma-4-31B-it-DFlash" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "z-lab/gemma-4-31B-it-DFlash",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
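The same endpoint can be called from Python as well; this sketch uses the `requests` library against the SGLang server started above on port 30000 and simply mirrors the curl call:

```python
import requests

# POST the same JSON payload as the curl example to the completions endpoint
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "z-lab/gemma-4-31B-it-DFlash",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```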
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "z-lab/gemma-4-31B-it-DFlash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "z-lab/gemma-4-31B-it-DFlash",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use z-lab/gemma-4-31B-it-DFlash with Docker Model Runner:
```bash
docker model run hf.co/z-lab/gemma-4-31B-it-DFlash
```
Missing license?
Does this model have a license?
Cheers.
No, we don't make it public because there are still some issues on the inference engine side. We will release it once vLLM, SGLang, and MLX inference support is all ready.
Hello. I left you a question in the issues thread on the dflash GitHub repository. I didn't want to write to you about it here; I figured you'd come by soon and answer. Yes, only two days have passed, but I'd like to know your opinion on this:
https://github.com/z-lab/dflash/issues/94
Google has released official speculative decoding for their models. Now it's your time to take all the glory for yourself. I'm sure many people will be running inference speed comparisons against Google's official draft model, and if you make a video of it, the whole internet will surely be sharing it.
Sorry for the delay! We've been working on inference engine support for this draft model, and it's now public. As for the question about future research and improvements to DFlash, it's quite interesting, and I will reply as soon as possible!