---
base_model: deepseek-ai/DeepSeek-V4-Flash
base_model_relation: quantized
library_name: llama.cpp
license: mit
pipeline_tag: text-generation
tags:
  - gguf
  - llama.cpp
  - deepseek_v4
  - text-generation
  - conversational
  - en
---

A Q2_K quantization of https://huggingface.co/ssweens/deepseek-ai__DeepSeek-V4-Flash-GGUF-YMMV
# 🧪 Experimental GGUFs for DeepSeek-V4-Flash

A stopgap for experimenting with DeepSeek-V4-Flash locally on CUDA and ROCm while the tooling ecosystem catches up. Expect rough edges. Validated for text and coding coherence.

GGUF files for deepseek-ai/DeepSeek-V4-Flash.

## ⚠️ You need the custom fork

These GGUFs require a DeepSeek-V4-capable fork of llama.cpp. Vanilla llama.cpp doesn't support this architecture yet.
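Building such a fork follows the standard llama.cpp CMake flow. A sketch only: the repository URL below is a placeholder (the exact fork is not named here), and the backend flags match current upstream llama.cpp conventions:

```sh
# Placeholder URL -- substitute the actual DeepSeek-V4-capable fork.
git clone https://github.com/<fork-owner>/llama.cpp.git
cd llama.cpp

# Pick the backend flag for your hardware:
cmake -B build -DGGML_CUDA=ON    # NVIDIA
# cmake -B build -DGGML_HIP=ON   # AMD / ROCm
cmake --build build --config Release -j
```

The resulting `build/bin/llama-server` binary is what the example command below expects on your `PATH`.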

## Performance

Example:

```shell
llama-server -ngl 99 --no-mmap -fa on -np 1 --reasoning-format auto --jinja --threads 3 -ts 4,4,3 -dev CUDA0,CUDA1,CUDA2 \
  -m /mnt/supmodels/gguf/deepseek-ai__DeepSeek-V4-Flash/deepseek-ai__DeepSeek-V4-Flash-Q4_K_M.gguf \
  -c 32768 -b 2048 -ub 512 -ctk q8_0 -ctv q8_0
```
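Once the server is up, you can talk to it over its OpenAI-compatible `/v1/chat/completions` endpoint. A minimal stdlib-only client sketch; the port (8080) is llama-server's default, and the prompt and sampling values are arbitrary:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080"  # llama-server's default listen address


def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.6,
    }


def chat(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Call `chat("Write a haiku about quantization.")` against a running server to sanity-check coherence before longer runs.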

### Speed (custom, n=2)

| Model   | Prompt t/s | Gen t/s | TTFT (s) | Decode (s) | Backend   |
|---------|------------|---------|----------|------------|-----------|
| IQ2_XXS | 389.56     | 24.04   | 7.59     | 0.00       | CUDA      |
| Q2_K_S  | 231.15     | 18.79   | 10.58    | 0.00       | CUDA+ROCm |
| BF16    | 158.58     | 10.07   | 18.17    | 0.00       | CUDA+ROCm |
### Coding (humaneval_instruct, n=30)

| Model   | pass@1      | Backend   |
|---------|-------------|-----------|
| IQ2_XXS | 0.967±0.033 | CUDA      |
| Q2_K_S  | 0.900±0.056 | CUDA+ROCm |
| BF16    | 1.000±0.000 | CUDA+ROCm |
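The ± values read as the sample standard error of pass@1 over the n=30 tasks, sqrt(p(1−p)/(n−1)). That formula is an assumption about how the harness computed them, but it reproduces the table exactly:

```python
import math


def pass_at_1_stderr(p: float, n: int) -> float:
    """Sample standard error of a pass/fail mean over n tasks (n-1 in the denominator)."""
    return math.sqrt(p * (1.0 - p) / (n - 1))


# Reproduces the table's error bars for n=30:
print(round(pass_at_1_stderr(29 / 30, 30), 3))  # IQ2_XXS, 29/30 solved -> 0.033
print(round(pass_at_1_stderr(27 / 30, 30), 3))  # Q2_K_S,  27/30 solved -> 0.056
print(round(pass_at_1_stderr(30 / 30, 30), 3))  # BF16,    30/30 solved -> 0.0
```

With only 30 tasks the IQ2_XXS and BF16 intervals overlap at one failure, so treat the quant-vs-BF16 gap as indicative rather than conclusive.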

## Original model

[deepseek-ai/DeepSeek-V4-Flash](https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash)

## Thanks

Thanks to [ssweens](https://huggingface.co/ssweens) for the upstream GGUF conversion this quantization is derived from.