deepseek-coder-6.7b-instruct-zse-int4

Pre-converted ZSE build of DeepSeek Coder 6.7B Instruct for fast cold starts and low-VRAM local inference.

Source Model

deepseek-ai/deepseek-coder-6.7b-instruct

Usage

pip install zllm-zse

# Download and serve
zse pull deepseek-coder-6.7b-instruct
zse serve deepseek-coder-6.7b-instruct

# Or serve the downloaded file directly
zse serve deepseek-coder-6.7b-instruct-zse-int4.zse
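Once the server is running, clients typically talk to it over HTTP. The sketch below assumes an OpenAI-style completions endpoint on localhost port 8000 — both the port and the route are hypothetical, so check the ZSE documentation for the actual API before relying on them.

```python
import json
import urllib.request

# Assumption: `zse serve` exposes an OpenAI-compatible completions endpoint.
# The URL below is a guess, not a documented ZSE default.
ZSE_URL = "http://localhost:8000/v1/completions"

def build_completion_request(prompt: str,
                             model: str = "deepseek-coder-6.7b-instruct",
                             max_tokens: int = 256,
                             temperature: float = 0.2) -> dict:
    """Build the JSON body for a code-completion request."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_completion_request("def quicksort(arr):")
payload = json.dumps(body).encode()

# To actually send the request (requires a running server):
# req = urllib.request.Request(ZSE_URL, data=payload,
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
print(payload.decode())
```

A low temperature (0.2 here) is a common choice for code completion, where deterministic output is usually preferred over creative variation.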

Benefits

  • 5x faster cold start than loading the original Hugging Face checkpoint
  • 10-14% less VRAM with ZSE's custom INT4 kernels
  • Single file: tokenizer and config are embedded
  • No internet connection required after the initial download
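
The VRAM saving is easy to sanity-check with back-of-envelope arithmetic: INT4 stores each weight in 4 bits instead of FP16's 16. The sketch below counts weight storage only — it ignores quantization scales, zero-points, activations, and the KV cache — so treat the numbers as rough lower bounds, not ZSE's measured footprint.

```python
def approx_weight_gb(n_params: float, bits_per_weight: int) -> float:
    """Rough weight-only memory footprint in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 6.7e9  # deepseek-coder-6.7b

fp16_gb = approx_weight_gb(N_PARAMS, 16)  # ~13.4 GB
int4_gb = approx_weight_gb(N_PARAMS, 4)   # ~3.35 GB
print(f"FP16: {fp16_gb:.2f} GB, INT4: {int4_gb:.2f} GB")
```

The roughly 4x reduction in weight storage is what lets a 6.7B model fit comfortably on consumer GPUs; the additional 10-14% saving claimed above would come from the kernels' handling of quantization metadata and workspace memory.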

Benchmarks

See the ZSE documentation for full benchmark results.


Converted with ZSE v1.4.0
