deepseek-coder-6.7b-instruct-zse-int4
Pre-converted ZSE model for fast cold starts and low-VRAM inference.
Source Model
- Original: deepseek-ai/deepseek-coder-6.7b-instruct
- Quantization: INT4
- File Size: 3.61 GB (see the size estimate after this list)
- Format: ZSE binary (.zse)
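The file size lines up with a back-of-envelope estimate: 6.7B parameters at 4 bits each is about 3.35 GB, with per-group quantization scales and the embedded tokenizer/config making up the rest. A quick sanity check, assuming a group size of 128 and fp16 scales for illustration (not ZSE's documented on-disk layout):

```python
# Back-of-envelope size estimate for a 4-bit quantized 6.7B model.
# The group size of 128 and fp16 per-group scales are illustrative
# assumptions, not ZSE's documented format.
params = 6.7e9                       # parameters in the base model
weight_gb = params * 4 / 8 / 1e9     # 4 bits per weight
scale_gb = params / 128 * 2 / 1e9    # one fp16 scale per 128-weight group
print(f"weights: {weight_gb:.2f} GB, scales: {scale_gb:.2f} GB, "
      f"total: {weight_gb + scale_gb:.2f} GB")
# ~3.45 GB; the remaining ~0.16 GB of the 3.61 GB file plausibly covers
# zero-points, any higher-precision layers, and the embedded tokenizer/config.
```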
Usage
```bash
pip install zllm-zse

# Download and serve
zse pull deepseek-coder-6.7b-instruct
zse serve deepseek-coder-6.7b-instruct

# Or serve the downloaded .zse file directly
zse serve deepseek-coder-6.7b-instruct-zse-int4.zse
```
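Once the server is running, you can call it from Python. Below is a minimal client sketch, assuming `zse serve` exposes an OpenAI-compatible chat endpoint on `http://localhost:8000`; the port and route are assumptions, so check the ZSE documentation for the actual defaults.

```python
# Minimal client sketch. Assumes an OpenAI-compatible endpoint at
# http://localhost:8000/v1 -- the port and route are assumptions;
# consult the ZSE documentation for the real defaults.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "deepseek-coder-6.7b-instruct",
        "messages": [
            {"role": "user",
             "content": "Write a Python function that reverses a string."},
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```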
Benefits
- 5x faster cold start than loading the original Hugging Face checkpoint
- 10-14% less VRAM via ZSE's custom INT4 kernels (a sketch of INT4 group quantization follows this list)
- Single file: tokenizer and config are embedded in the .zse binary
- No internet connection required after download
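For background on the INT4 scheme itself, here is a minimal NumPy sketch of group-wise symmetric INT4 quantization and dequantization. It illustrates the general technique only, assuming a group size of 128 and fp16 scales; ZSE's actual kernels and storage scheme are not documented here.

```python
import numpy as np

def quantize_int4(w: np.ndarray, group: int = 128):
    """Group-wise symmetric INT4 quantization (illustrative only)."""
    groups = w.reshape(-1, group)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # int4 range is [-8, 7]
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate weights from codes and per-group scales."""
    return (q.astype(np.float32) * scale.astype(np.float32)).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024 * 128).astype(np.float32)
q, scale = quantize_int4(w)
print("max abs error:", np.abs(w - dequantize_int4(q, scale)).max())
# In a real format the int4 codes are packed two per byte, giving
# ~4.1 bits/weight (codes + fp16 scales) versus 16 bits/weight for fp16,
# i.e. roughly a 4x reduction in weight memory.
```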
Benchmarks
See ZSE Documentation for full benchmarks.
Converted with ZSE v1.4.0