A Q2_K version of https://huggingface.co/ssweens/deepseek-ai__DeepSeek-V4-Flash-GGUF-YMMV

🧪 Experimental GGUFs for DeepSeek-V4-Flash

A stopgap for experimenting with DeepSeek-V4-Flash locally on CUDA and ROCm while the tooling ecosystem catches up. Expect rough edges. Validated for text and coding coherence.

GGUF files for deepseek-ai/DeepSeek-V4-Flash.
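
To grab a single quant without cloning the whole repo, huggingface-cli can filter by filename (a minimal sketch; the --include pattern is illustrative, check the repo's file list for the exact names):

huggingface-cli download Volko76/DeepSeek-V4-Flash-GGUF --include "*Q2_K*" --local-dir ./models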

⚠️ You need the custom fork

These GGUFs require a DeepSeek-V4-capable fork of llama.cpp. Vanilla llama.cpp doesn't support this architecture yet.
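
If you're building that fork from source, a standard llama.cpp CUDA build should carry over (a minimal sketch; the clone URL is a placeholder for whichever DeepSeek-V4-capable branch you're using):

# Placeholder URL -- substitute the actual DeepSeek-V4-capable fork
git clone https://github.com/<fork>/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON    # flag may differ for ROCm builds (e.g. -DGGML_HIP=ON)
cmake --build build --config Release -j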

Performance

Example:

llama-server -ngl 99 --no-mmap -fa on -np 1 --reasoning-format auto --jinja --threads 3 -ts 4,4,3 -dev CUDA0,CUDA1,CUDA2 \
-m /mnt/supmodels/gguf/deepseek-ai__DeepSeek-V4-Flash/deepseek-ai__DeepSeek-V4-Flash-Q4_K_M.gguf -c 32768 -b 2048 -ub 512 -ctk q8_0 -ctv q8_0
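
Once the server is up (port 8080 by default), you can sanity-check it through llama-server's OpenAI-compatible endpoint, e.g.:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}]}'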

Speed (custom benchmark, n=2)

Model     Prompt t/s   Gen t/s   TTFT (s)   Decode (s)   Backend
-------   ----------   -------   --------   ----------   ---------
IQ2_XXS       389.56     24.04       7.59         0.00   CUDA
Q2_K_S        231.15     18.79      10.58         0.00   CUDA+ROCm
BF16          158.58     10.07      18.17         0.00   CUDA+ROCm
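
For comparable throughput numbers on your own hardware, the llama-bench tool bundled with llama.cpp reports prompt-processing and generation t/s directly (a minimal sketch; the model filename is illustrative, adjust the path and -ngl to your setup):

llama-bench -m deepseek-ai__DeepSeek-V4-Flash-Q2_K.gguf -ngl 99 -p 512 -n 128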

Coding (humaneval_instruct, n=30)

Model     pass@1        Backend
-------   -----------   ---------
IQ2_XXS   0.967±0.033   CUDA
Q2_K_S    0.900±0.056   CUDA+ROCm
BF16      1.000±0.000   CUDA+ROCm

Original model: https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash

Thanks to ssweens, whose deepseek-ai__DeepSeek-V4-Flash-GGUF-YMMV repo this Q2_K build is derived from.

Model details

Format: GGUF, 2-bit quantization
Model size: 284B params
Architecture: deepseek4