A tiny, randomly initialized model with the DeepSeek-V4 architecture, intended for testing and debugging. In development; it might not work!
```shell
vllm serve yujiepan/deepseek-v4-tiny-random \
  --trust-remote-code \
  --block-size 256 \
  --kv-cache-dtype fp8 \
  --data-parallel-size 1 \
  --max-model-len 12000 \
  --gpu-memory-utilization 0.5 \
  --max-num-seqs 512 \
  --max-num-batched-tokens 512 \
  --no-enable-flashinfer-autotune \
  --compilation-config '{"mode": 0, "cudagraph_mode": "FULL_DECODE_ONLY"}' \
  --tokenizer-mode deepseek_v4 \
  --tool-call-parser deepseek_v4 \
  --enable-auto-tool-choice \
  --reasoning-parser deepseek_v4 \
  --speculative-config '{"method":"mtp","num_speculative_tokens":1}'
```
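Once the server is up, it exposes an OpenAI-compatible API on port 8000 (vLLM's default). A minimal sketch of calling the chat-completions endpoint with only the Python standard library; the helper names (`build_chat_request`, `chat`) are illustrative, not part of vLLM:

```python
import json
import urllib.request

MODEL = "yujiepan/deepseek-v4-tiny-random"

def build_chat_request(prompt: str) -> bytes:
    """Build an OpenAI-compatible chat-completions payload."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.5,
    }
    return json.dumps(payload).encode("utf-8")

def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """Send one chat turn to the local vLLM server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Say hello in one sentence."))
```

Since the model weights are random, expect incoherent output; the point is exercising the serving stack, not quality.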
Base model: deepseek-ai/DeepSeek-V4-Pro
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "yujiepan/deepseek-v4-tiny-random"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "yujiepan/deepseek-v4-tiny-random",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```