# OmniCoder-9B GPTQ Int8

GPTQ INT8 quantization of Tesslate/OmniCoder-9B — a VLM (Vision-Language Model) for agentic coding with image understanding. This is the higher-precision variant, with minimal quality loss.
## Architecture

- Type: Qwen3_5ForConditionalGeneration (VLM backbone)
- Base: Qwen3.5-9B hybrid (Gated DeltaNet + full attention, 32 layers)
- Vision encoder: Preserved in BF16 (not quantized) — full image understanding capability
- Fine-tuned on: 425K agentic coding trajectories (LoRA r=64, alpha=32)
- Features: Agentic coding, tool calling, reasoning, long context (262K+), image input
## Quantization

- Method: GPTQ via GPTQModel
- Bits: 8, group size: 128, symmetric: true
- Calibration: 256 samples from allenai/c4
- Quantized layers: MLP/FFN only (gate_proj, up_proj, down_proj)
- Kept in BF16: lm_head, embed_tokens, all attention (DeltaNet + full), MTP, vision encoder
- Size: ~13.1 GB (INT8 text model + BF16 vision encoder)
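To make the storage format concrete, here is a minimal sketch of symmetric, group-wise INT8 quantization with group size 128. It uses plain round-to-nearest; real GPTQ additionally compensates rounding error using second-order (Hessian) information, which this sketch omits.

```python
import numpy as np

def quantize_int8_groupwise(w, group_size=128):
    """Symmetric round-to-nearest INT8 quantization, one FP scale per group.

    Simplified illustration of the storage format only; GPTQ's error
    compensation step is intentionally left out.
    """
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 127.0  # per-group scale
    q = np.clip(np.round(groups / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4096, 128)).astype(np.float32)
q, scale = quantize_int8_groupwise(w)
w_hat = dequantize(q, scale).reshape(w.shape)
err = np.abs(w - w_hat).max()  # bounded by half a quantization step per group
print(f"max abs reconstruction error: {err:.4f}")
```

With 8 bits and symmetric scales, the worst-case per-weight error is half a step (scale / 2), which is why INT8 variants like this one stay close to BF16 quality.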
## Serving (vLLM >= 0.18.0)

```shell
vllm serve raydelossantos/OmniCoder-9B-GPTQ-Int8 \
  --dtype float16 \
  --trust-remote-code \
  --enable-prefix-caching \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice
```
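Once serving, the model is reachable through vLLM's OpenAI-compatible endpoint (`http://localhost:8000/v1/chat/completions` by default). The sketch below builds a request payload combining image input with a tool definition; the `read_file` tool and the image URL are hypothetical, and the tool schema follows the standard OpenAI function-calling format.

```python
import json

# Chat-completions payload exercising both VLM image input and tool calling.
# The tool ("read_file") and image URL are illustrative placeholders.
payload = {
    "model": "raydelossantos/OmniCoder-9B-GPTQ-Int8",
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What does this architecture diagram show?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/arch.png"}},
        ]},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "read_file",  # hypothetical tool
            "description": "Read a file from the workspace",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }],
    "tool_choice": "auto",  # pairs with --enable-auto-tool-choice on the server
}
print(json.dumps(payload)[:60])
```

POST this JSON to the server (e.g. with the `openai` Python client or `curl`); with `--enable-auto-tool-choice` the model decides when to emit tool calls.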
### Important flags

| Flag | Why |
|---|---|
| `--enable-prefix-caching` | Recommended — enables KV cache reuse for repeated system prompts |
| `--dtype float16` | Better throughput on Ampere GPUs (BF16 weights cast to FP16) |
| `--trust-remote-code` | Required for the Qwen3.5 model type |
Note: `--enforce-eager` is not required on vLLM >= 0.18.0. The DeltaNet dtype mismatch was fixed in PR #35256; CUDA graphs in piecewise mode work correctly and provide a ~3-4x speedup over eager mode.
## Multi-GPU (Tensor Parallel)

```shell
# 2x RTX 3060 or similar — fits with TP=2
vllm serve raydelossantos/OmniCoder-9B-GPTQ-Int8 \
  --tensor-parallel-size 2 \
  --dtype float16 \
  --trust-remote-code \
  --enable-prefix-caching \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice
```
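A quick back-of-envelope check on why TP=2 fits on 12 GB cards: tensor parallelism shards the weight tensors across ranks, so each GPU holds roughly half of the ~13.1 GB of weights, leaving headroom for KV cache and activations. This ignores replicated tensors (e.g. norms) and runtime overhead, so treat it as a rough estimate.

```python
# Rough per-GPU weight footprint under tensor parallelism.
# Ignores replicated tensors and runtime overhead -- estimate only.
total_weights_gb = 13.1
tp_size = 2
per_gpu_gb = total_weights_gb / tp_size
print(f"~{per_gpu_gb:.2f} GB of weights per GPU, plus KV cache and activations")
```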
## Weight Structure

Weights use the Qwen3_5ForConditionalGeneration layout:

- `model.language_model.*` — quantized text model (GPTQ INT8)
- `model.visual.*` — vision encoder (BF16, from the base model)
- `lm_head.*` — language model head (BF16)
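The layout above can be expressed as a small classifier over checkpoint tensor names: GPTQ layers store packed tensors (`qweight`, `qzeros`, `scales`, `g_idx`) rather than a plain `weight`. A minimal sketch, using illustrative tensor names rather than ones read from the actual checkpoint:

```python
GPTQ_SUFFIXES = (".qweight", ".qzeros", ".scales", ".g_idx")

def classify(name: str) -> str:
    """Map a checkpoint tensor name to its storage format per the layout above."""
    if name.startswith("model.visual."):
        return "bf16-vision"
    if name.startswith("lm_head."):
        return "bf16-head"
    if name.startswith("model.language_model.") and name.endswith(GPTQ_SUFFIXES):
        return "gptq-int8"   # packed GPTQ tensors for the MLP projections
    return "bf16-text"       # attention, embeddings, norms, etc.

# Illustrative names (hypothetical, not enumerated from the checkpoint):
examples = {
    "model.language_model.layers.0.mlp.gate_proj.qweight": "gptq-int8",
    "model.language_model.layers.0.self_attn.q_proj.weight": "bf16-text",
    "model.visual.blocks.0.attn.qkv.weight": "bf16-vision",
    "lm_head.weight": "bf16-head",
}
for name, expected in examples.items():
    assert classify(name) == expected
print("layout check ok")
```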