Qwen3-32B DFlash Draft Model (EagleChat 400K Mix)

A DFlash draft model trained from Qwen3-32B on an EagleChat subset (English ~200K + Chinese ~200K samples) to accelerate inference via speculative decoding.


Model Summary

This repository provides a DFlash draft model for Qwen3-32B. The draft model is intended to be used together with the target model in SpecForge, improving throughput (output tokens/sec) under standard speculative verification.

  • Base / Target model: Qwen/Qwen3-32B
  • Draft model type: DFlash (speculative decoding draft)
  • Draft model size: ~0.5B parameters (BF16, safetensors)
  • Training data: EagleChat subset (English ~200K + Chinese ~200K; total ~400K)
  • Training hardware: H100
  • Primary use case: accelerate inference with DFlash / SpecForge
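For serving, a draft model like this is typically attached to the target model at launch time. A minimal sketch of an SGLang server launch is below; the flag names follow SGLang's EAGLE-style speculative-decoding options, and the exact algorithm name and flags for DFlash may differ, so treat this as an illustrative template rather than a verified command.

```shell
# Sketch only: speculative-decoding flags shown are SGLang's EAGLE-style
# options; substitute the DFlash-specific algorithm name if your SGLang
# build exposes one. Paths/IDs match this model card.
python -m sglang.launch_server \
  --model-path Qwen/Qwen3-32B \
  --speculative-algorithm EAGLE \
  --speculative-draft-model-path AICP-Labs/qwen3-32b-dflash-en-zh \
  --tp-size 4 \
  --attention-backend fa3
```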


Training Details

Data

  • Dataset: EagleChat subset
  • Composition:
    • English: ~200,000 samples
    • Chinese: ~200,000 samples
  • Total: ~400,000 samples

Procedure

  • Epochs: 6
  • Sequence length: 4096
  • Precision: bf16
  • Codebase: SpecForge (DFlash training)

Evaluation

Benchmark settings

  • Target model: /models/Qwen3-32B
  • Draft model: sx-aicp/qwen3-32b-dflash-en-zh (or local path)
  • Max new tokens: 2048
  • Attention backend: fa3
  • Tensor parallel (tp_size): 4
  • device_sm: 90 (H100)
  • drop_first_batch: true
  • Concurrencies: 1 / 4 / 32 (varies by suite)

Speed Bench Results

Environment: H100 (SM90), tp=4, attention=fa3, max_new_tokens=2048, drop_first_batch=true.

Unified Summary

| Benchmark | Conc=1 | Conc=4 | Conc=32 |
|---|---|---|---|
| Math500 | 109.20 → 392.63 (3.595×, L=5.564) | 409.44 → 1351.51 (3.301×, L=5.582) | 2554.68 → 4554.81 (1.783×, L=5.588) |
| HumanEval | 108.93 → 331.66 (3.045×, L=4.769) | 407.34 → 1129.16 (2.772×, L=4.756) | 2482.40 → 3632.36 (1.463×, L=4.757) |
| MT-Bench | 109.19 → 233.75 (2.141×, L=3.791) | 409.97 → 804.64 (1.963×, L=3.852) | 2470.75 → 2767.16 (1.120×, L=3.917) |

Format: baseline tok/s → DFlash tok/s (speedup×, L = mean acceptance length).
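The speedup column is simply the ratio of DFlash throughput to baseline throughput. As a quick sanity check, the table's figures can be recomputed in a few lines of Python (numbers copied from the table above):

```python
# (baseline tok/s, DFlash tok/s) pairs, keyed by (benchmark, concurrency),
# copied from the speed-bench table above.
results = {
    ("Math500", 1): (109.20, 392.63),
    ("Math500", 4): (409.44, 1351.51),
    ("Math500", 32): (2554.68, 4554.81),
    ("HumanEval", 1): (108.93, 331.66),
    ("HumanEval", 4): (407.34, 1129.16),
    ("HumanEval", 32): (2482.40, 3632.36),
    ("MT-Bench", 1): (109.19, 233.75),
    ("MT-Bench", 4): (409.97, 804.64),
    ("MT-Bench", 32): (2470.75, 2767.16),
}

for (bench, conc), (base, dflash) in results.items():
    speedup = dflash / base  # e.g. Math500 @ conc=1 -> ~3.595x
    print(f"{bench} @ conc={conc}: {speedup:.3f}x")
```

Note the pattern visible in the ratios: speedup shrinks as concurrency grows (verification batches saturate the GPU), while acceptance length L depends mainly on the benchmark domain, not on concurrency.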

How to Evaluate (z-lab / dflash)

```shell
python benchmark_sglang.py \
  --tp-size 4 \
  --target-model /models/Qwen3-32B \
  --draft-model /path/to/draft_model \
  --concurrencies 1,4,32 \
  --dataset-name math500 \
  --attention-backends fa3 \
  --output-md sglang_results.md
```
