---
base_model: deepseek-ai/DeepSeek-V4-Flash
base_model_relation: quantized
library_name: llama.cpp
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- deepseek_v4
- text-generation
- conversational
---

A Q2_K version of https://huggingface.co/ssweens/deepseek-ai__DeepSeek-V4-Flash-GGUF-YMMV

---

## 🧪 Experimental GGUFs for DeepSeek-V4-Flash

A stopgap for experimenting with DeepSeek-V4-Flash locally on CUDA and ROCm while the tooling ecosystem catches up. Expect rough edges. Validated for text and coding coherence.

GGUF files for [deepseek-ai/DeepSeek-V4-Flash](https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash).

### ⚠️ You need the custom fork

These GGUFs **require** a DeepSeek-V4-capable fork of llama.cpp. Vanilla llama.cpp doesn't support this architecture yet. A hedged build sketch is included at the end of this card.

- **llama.cpp fork:** [ssweens/llama.cpp-deepseek-v4](https://github.com/ssweens/llama.cpp-deepseek-v4)
- **Backends:** Tested on CUDA and ROCm.

## Performance

Example server invocation (a curl smoke test is sketched at the end of this card):

```
llama-server -ngl 99 --no-mmap -fa on -np 1 --reasoning-format auto --jinja --threads 3 -ts 4,4,3 -dev CUDA0,CUDA1,CUDA2 \
  -m /mnt/supmodels/gguf/deepseek-ai__DeepSeek-V4-Flash/deepseek-ai__DeepSeek-V4-Flash-Q4_K_M.gguf -c 32768 -b 2048 -ub 512 -ctk q8_0 -ctv q8_0
```

**Speed (custom benchmark, n=2)**

| Model | Prompt t/s | Gen t/s | TTFT (s) | Decode (s) | Backend |
| ------- | ------ | ----- | ----- | ---- | --------- |
| IQ2_XXS | 389.56 | 24.04 | 7.59  | 0.00 | CUDA      |
| Q2_K_S  | 231.15 | 18.79 | 10.58 | 0.00 | CUDA+ROCm |
| BF16    | 158.58 | 10.07 | 18.17 | 0.00 | CUDA+ROCm |

**Coding (humaneval_instruct, n=30)**

| Model   | pass@1      | Backend   |
| ------- | ----------- | --------- |
| IQ2_XXS | 0.967±0.033 | CUDA      |
| Q2_K_S  | 0.900±0.056 | CUDA+ROCm |
| BF16    | 1.000±0.000 | CUDA+ROCm |

## Original model

- [deepseek-ai/DeepSeek-V4-Flash](https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash)

## Thanks

- [antirez](https://github.com/antirez) — llama.cpp fork for Metal and CUDA in [llama.cpp-deepseek-v4-flash](https://github.com/antirez/llama.cpp-deepseek-v4-flash)
- [ml-explore/mlx-lm #1192](https://github.com/ml-explore/mlx-lm/pull/1192) — MLX DSV4 attention reference that informed the architecture work
- [DeepSeek](https://github.com/deepseek-ai) — open inference code and the [technical report](https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf)
- [nisparks et al.](https://github.com/ggml-org/llama.cpp/issues/22319) — early implementation efforts and discussion
- [llama.cpp](https://github.com/ggml-org/llama.cpp) — the project that makes local LLM inference possible
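
## Building the fork (sketch)

A minimal build sketch, assuming a standard CMake toolchain. The `-DGGML_CUDA=ON` and `-DGGML_HIP=ON` switches are the usual llama.cpp backend options, not fork-specific flags; check the fork's README for any extra requirements.

```
# Clone the DeepSeek-V4-capable fork (vanilla llama.cpp won't load these GGUFs)
git clone https://github.com/ssweens/llama.cpp-deepseek-v4
cd llama.cpp-deepseek-v4

# CUDA build; on ROCm, replace -DGGML_CUDA=ON with -DGGML_HIP=ON
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Binaries (llama-server, llama-cli, ...) land in build/bin/
```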
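
## Quick smoke test (sketch)

Once `llama-server` is up, you can hit its OpenAI-compatible chat endpoint. The host, port, and prompt below are assumptions; adjust them to match your `--host`/`--port` flags.

```
# Assumes llama-server is listening on the default 127.0.0.1:8080
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Write a bash one-liner that counts lines across all *.log files."}],
        "max_tokens": 256
      }'
```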