Update SGLang deployment guide: point to K2.6 cookbook and use stable release
The SGLang section previously linked to the Kimi-K2.5 cookbook and required a `git+https://github.com/sgl-project/sglang.git` install from `main`. Kimi-K2.6 now has a dedicated cookbook page and is supported in the SGLang v0.5.10 stable release (available on PyPI), so a nightly/main build is no longer required.
Changes (SGLang section only):
- Update cookbook URL: Kimi-K2.5 -> Kimi-K2.6 (https://cookbook.sglang.io/autoregressive/Moonshotai/Kimi-K2.6)
- Replace "latest main" git install with `pip install "sglang[all]>=0.5.10"`
- Drop the separate `nvidia-cudnn-cu12==9.16.0.29` pin (bundled with the stable wheel)
No changes to vLLM or KTransformers sections.
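The `>=0.5.10` pin in the new install command means any stable wheel at or above that version satisfies the guide. As a sketch of why a proper version comparison matters here (simplified to plain dotted numeric versions, not full PEP 440 handling):

```python
def version_tuple(v: str) -> tuple[int, ...]:
    # "0.5.10" -> (0, 5, 10); handles plain dotted numeric versions only
    return tuple(int(part) for part in v.split("."))

def satisfies_min(installed: str, minimum: str = "0.5.10") -> bool:
    # True when the installed version is at or above the pinned minimum
    return version_tuple(installed) >= version_tuple(minimum)

print(satisfies_min("0.5.10"))  # True: the exact pin satisfies >=
print(satisfies_min("0.5.9"))   # False: tuple compare, unlike string compare
print(satisfies_min("0.6.0"))   # True
```

Note that a naive string comparison would get `0.5.9` vs `0.5.10` wrong (`"0.5.9" > "0.5.10"` lexicographically), which is exactly why pip resolves specifiers numerically per component.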
- docs/deploy_guidance.md +4 -5
docs/deploy_guidance.md:

````diff
@@ -28,16 +28,15 @@ vllm serve $MODEL_PATH -tp 8 --mm-encoder-tp-mode data --trust-remote-code --too
 
 ## SGLang Deployment
 
-You can refer to https://cookbook.sglang.io/autoregressive/Moonshotai/Kimi-K2.5 for the newest deployment guide.
+You can refer to https://cookbook.sglang.io/autoregressive/Moonshotai/Kimi-K2.6 for the newest deployment guide.
 
-This model is supported in the latest main branch of SGLang:
+This model is supported in SGLang v0.5.10 and later stable releases (no nightly / main build required):
 
 ```
-pip install "sglang[all] @ git+https://github.com/sgl-project/sglang.git"
-pip install nvidia-cudnn-cu12==9.16.0.29
+pip install "sglang[all]>=0.5.10"
 ```
 
-
+Here is the example for it to run with TP8 on H200 in a single node via SGLang:
 ``` bash
 sglang serve --model-path $MODEL_PATH --tp 8 --trust-remote-code --tool-call-parser kimi_k2 --reasoning-parser kimi_k2
 ```
````
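Once the `sglang serve` command from the guide is running, the server exposes an OpenAI-compatible HTTP API. A minimal sketch of the request shape a client would POST, assuming SGLang's default port 30000 and a placeholder model name (check `/v1/models` on your server for the real one):

```python
import json

def build_chat_request(prompt: str, model: str = "kimi-k2") -> dict:
    # Assembles an OpenAI-style chat-completions payload; the model name
    # here is a placeholder, not necessarily what the server registers.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# POST this as JSON to http://localhost:30000/v1/chat/completions
# (30000 is SGLang's default port; adjust to match your launch flags).
print(json.dumps(build_chat_request("Hello"), indent=2))
```

The `--tool-call-parser kimi_k2 --reasoning-parser kimi_k2` flags in the serve command shape how tool calls and reasoning content come back in the response; the request side above is unaffected by them.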