Update SGLang deployment guide: point to K2.6 cookbook and use stable release

#6
by JustinTong - opened
Files changed (1)
  1. docs/deploy_guidance.md +4 -5
docs/deploy_guidance.md CHANGED
@@ -28,16 +28,15 @@ vllm serve $MODEL_PATH -tp 8 --mm-encoder-tp-mode data --trust-remote-code --too
 
 ## SGLang Deployment
 
-You can refer to https://cookbook.sglang.io/autoregressive/Moonshotai/Kimi-K2.5 for the newest deployment guide.
+You can refer to https://cookbook.sglang.io/autoregressive/Moonshotai/Kimi-K2.6 for the newest deployment guide.
 
-This model is available in SGLang latest main:
+This model is supported in SGLang v0.5.10 and later stable releases (no nightly / main build required). `uv` is preferred:
 
 ```
-pip install "sglang @ git+https://github.com/sgl-project/sglang.git#subdirectory=python"
-pip install nvidia-cudnn-cu12==9.16.0.29
+uv pip install "sglang>=0.5.10.post1" --prerelease=allow
 ```
 
-Similarly, here is the example for it to run with TP8 on H200 in a single node via SGLang:
+Here is an example of running it with TP8 on H200 in a single node via SGLang:
 ``` bash
 sglang serve --model-path $MODEL_PATH --tp 8 --trust-remote-code --tool-call-parser kimi_k2 --reasoning-parser kimi_k2
 ```
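Once the server is up, a quick smoke test against its OpenAI-compatible chat endpoint can confirm the deployment works. This is a sketch, not part of the PR: the default port 30000 and the model id `kimi-k2` are assumptions — substitute whatever your server actually reports.

```python
import json
import urllib.request

# Build a chat request for the OpenAI-compatible endpoint SGLang exposes
# (default http://localhost:30000). "kimi-k2" is a placeholder model id;
# use the id returned by the server's /v1/models endpoint.
payload = {
    "model": "kimi-k2",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 32,
}
req = urllib.request.Request(
    "http://localhost:30000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError as exc:
    # Server not reachable yet (still loading weights, wrong port, etc.)
    print(f"server not reachable: {exc}")
```

If the server is still loading the model shards, the request will fail until startup completes; `sglang serve` logs a ready message once the endpoint is live.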