xinhe committed
Commit 192e0d7 · verified · 1 Parent(s): 647f689

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
@@ -12,6 +12,8 @@ This model is an int4 model with group_size 128 of [deepseek-ai/DeepSeek-V4-Flas
 
 ## How to Run Locally
 
+**vLLM and Sglang are not currently supported: https://huggingface.co/Intel/DeepSeek-V4-Flash-W4A16-AutoRound/discussions/1**
+
 Please refer to the [inference](inference/README.md) folder for detailed instructions on running DeepSeek-V4 locally, including model weight conversion and interactive chat demos.
 
 For local deployment, we recommend setting the sampling parameters to `temperature = 1.0, top_p = 1.0`. For the Think Max reasoning mode, we recommend setting the context window to at least **384K** tokens.
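The README change above recommends `temperature = 1.0, top_p = 1.0` for local deployment. As a minimal sketch of how those settings might be passed to an OpenAI-compatible local server, the request body could look like the following (the server endpoint, `max_tokens` value, and message content are illustrative assumptions, not from the source):

```python
# Hedged sketch: the recommended sampling parameters from the README,
# expressed as a generic OpenAI-compatible chat-completions payload.
# The model ID matches the repo name; everything else is a placeholder.
request_payload = {
    "model": "Intel/DeepSeek-V4-Flash-W4A16-AutoRound",
    "messages": [{"role": "user", "content": "Hello"}],  # placeholder prompt
    "temperature": 1.0,  # recommended sampling temperature
    "top_p": 1.0,        # recommended nucleus-sampling cutoff
    "max_tokens": 1024,  # placeholder; Think Max mode needs a >= 384K context window
}
```

Note that, per the commit itself, vLLM and Sglang do not currently serve this quantized model, so such a payload would target whatever local inference server the [inference](inference/README.md) instructions set up.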