kylesayrs committed (verified) · Commit b159aa5 · 1 Parent(s): cb1a852

Create README.md

<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
DeepSeek-V4-Flash-NVFP4-FP8
</h1>

## Model Optimizations

This model was created with LLM Compressor using the branch from the following pull request: https://github.com/vllm-project/llm-compressor/pull/2647
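
The full quantization recipe lives on the linked branch. As a rough orientation only, the sketch below shows the general LLM Compressor `oneshot` flow for producing an NVFP4-compressed checkpoint; the scheme name, calibration dataset, ignore list, and output directory are assumptions and may not match the recipe actually used for this model (which, per its name, also applies FP8 to parts of the pipeline such as the KV cache).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "deepseek-ai/DeepSeek-V4-Flash"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Assumed recipe: NVFP4 on Linear layers, lm_head kept in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# NVFP4 activation scales generally require calibration data;
# the dataset and sample count here are illustrative.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)

# Save in compressed-tensors format so vLLM can load it directly.
model.save_pretrained("DeepSeek-V4-Flash-NVFP4-FP8", save_compressed=True)
tokenizer.save_pretrained("DeepSeek-V4-Flash-NVFP4-FP8")
```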

## Deployment

This model was deployed with vLLM using the branch from the following pull request: https://github.com/vllm-project/vllm/pull/41276
```bash
VLLM_NVFP4_GEMM_BACKEND=marlin vllm serve RedHatAI/DeepSeek-V4-Flash-NVFP4-FP8 --tensor-parallel-size 4 --port 8089 --kv_cache_dtype="fp8"
```
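
Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal client sketch, reusing the port and model name from the serve command above (the prompt is just an example):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the API key is not checked by a default local deployment.
client = OpenAI(base_url="http://localhost:8089/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="RedHatAI/DeepSeek-V4-Flash-NVFP4-FP8",
    messages=[{"role": "user", "content": "Briefly explain NVFP4 quantization."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```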

## Evaluation
```bash
python tests/evals/gsm8k/gsm8k_eval.py
```

```
Results:
Accuracy: 0.910
Invalid responses: 0.000
Total latency: 173.006 s
Questions per second: 7.624
Total output tokens: 116217
Output tokens per second: 671.752
```

For more details on how this model was created and how to run LLM Compressor, please contact Kyle Sayers on the vLLM developers Slack: https://communityinviter.com/apps/vllm-dev/join-vllm-developers-slack

Files changed (1): README.md (+8 -0)

README.md ADDED

```diff
@@ -0,0 +1,8 @@
+---
+license: mit
+base_model:
+- deepseek-ai/DeepSeek-V4-Flash
+library_name: transformers
+tags:
+- compressed-tensors
+---
```