Tested DGX Spark setup for Qwen3-Coder-Next-FP8 with Continue/Cline

#4
by ztolley - opened

I wanted to share a tested DGX Spark setup for Qwen/Qwen3-Coder-Next-FP8 aimed at a private coding-assistant workflow in VS Code:

https://github.com/ztolley/dgx-spark-qwen3-coder-next-compose

This is a community-tested reference setup, not an official vendor certification.

Setup summary:

  • Main assistant: Qwen/Qwen3-Coder-Next-FP8
  • Main context: 32768 tokens by default (40960 was also validated as feasible on this hardware)
  • Autocomplete sidecar: Qwen/Qwen2.5-Coder-3B
  • Backend: Spark-tuned vLLM path
  • IDE workflow: Continue and Cline via local OpenAI-compatible endpoints
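For the Continue side of the workflow, a minimal `config.json` sketch along these lines wires both endpoints in. The ports (8000 for the main model, 8001 for the sidecar) and the `apiKey` placeholder are assumptions for illustration, not values from the repo; use whatever the compose stack actually exposes:

```json
{
  "models": [
    {
      "title": "Qwen3-Coder-Next (local vLLM)",
      "provider": "openai",
      "model": "Qwen/Qwen3-Coder-Next-FP8",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "none"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder-3B (local vLLM)",
    "provider": "openai",
    "model": "Qwen/Qwen2.5-Coder-3B",
    "apiBase": "http://localhost:8001/v1",
    "apiKey": "none"
  }
}
```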

A few measured notes from the default setup:

  • main model uses about 88.8 GiB GPU memory
  • autocomplete uses about 11.0 GiB
  • repeated prompt with prefix caching: 2.37s
  • autocomplete short completion: 1.56s

I also tested Qwen/Qwen2.5-Coder-7B for autocomplete under vLLM. It fit on the box, but it was slower, and the quality gain was not large enough to justify replacing 3B as the default.

The repo includes the compose stack, config examples, and the validation notes behind the defaults:
https://github.com/ztolley/dgx-spark-qwen3-coder-next-compose/blob/main/docs/validation-and-decisions.md

If the Qwen team or other users have suggestions for an even better deployment path for Qwen3-Coder-Next on Spark-class hardware, I’d be interested in comparing notes.
