# BRAHMASTRA 0.2: GGUF quantizations
GGUF builds of Krishnapadala55/brahmastra-0.2 for llama.cpp, Ollama, and other GGUF-aware runtimes.
The full-precision bf16 model lives in the base repo above. This repo ships three quantized variants that trade file size and VRAM footprint for quality, making the 32.8B-parameter DAST reasoning model usable on 24–48 GB consumer and prosumer GPUs instead of the ~65 GB of VRAM the bf16 build needs.
## Variants
| File | Bits | Size | Recommended VRAM | Quality vs bf16 |
|---|---|---|---|---|
| brahmastra-0.2-Q4_K_M.gguf | ~4.8 bpw | 18.5 GB | 24 GB (q8_0 KV cache) | very close; recommended default |
| brahmastra-0.2-Q6_K.gguf | ~6.6 bpw | 25.0 GB | 32 GB (f16 KV cache) | near-lossless |
| brahmastra-0.2-Q8_0.gguf | 8.0 bpw | 32.4 GB | 48 GB (f16 KV cache) | effectively lossless |
All three variants were produced from the same bf16 safetensors with `llama-quantize`, built from the then-current HEAD of ggerganov/llama.cpp.
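For reference, the standard llama.cpp convert-then-quantize flow looks like the sketch below. The checkout, build steps, and paths are illustrative assumptions; only the quant type names come from the table above.

```bash
# Sketch of the usual llama.cpp quantization flow (paths are placeholders).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && cmake -B build && cmake --build build --config Release -j

# 1) Convert the bf16 safetensors checkpoint to a bf16 GGUF.
python convert_hf_to_gguf.py /models/brahmastra-0.2 \
  --outfile brahmastra-0.2-bf16.gguf --outtype bf16

# 2) Quantize the bf16 GGUF into each shipped variant.
for q in Q4_K_M Q6_K Q8_0; do
  ./build/bin/llama-quantize brahmastra-0.2-bf16.gguf "brahmastra-0.2-${q}.gguf" "$q"
done
```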
## Which one should I use?
- 24 GB GPU (RTX 3090 / 4090 / A5000): Q4_K_M with `OLLAMA_KV_CACHE_TYPE=q8_0` or llama.cpp `--cache-type-k q8_0 --cache-type-v q8_0`. This is the configuration the BRAHMASTRA scanner itself runs in production: ~20 GB total VRAM, 100% GPU offload, ~50–60 tok/s eval on a 4090 (see the VRAM sketch after this list).
- 32–40 GB GPU (RTX 5000 Ada, A100 40GB): Q6_K if you want the extra quality headroom, Q4_K_M if you want the extra speed.
- 48 GB GPU (RTX 6000 Ada, A6000, L40): Q8_0 is safe and essentially indistinguishable from bf16 for this task.
- Anything smaller (16 GB and below): none of these fit cleanly. Use the bf16 repo with CPU offload, or wait for a Q3_K_M build.
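As a rough check on these budgets: total VRAM ≈ GGUF file size + KV cache + 1–2 GB of compute buffers. A minimal sketch, assuming the Qwen2.5-32B geometry of the base model (64 layers, 8 KV heads, head dim 128; verify against the base repo's config.json):

```bash
# Back-of-envelope KV-cache size. The geometry constants below are assumptions
# taken from the Qwen2.5-32B architecture the base model derives from.
CTX=4096 LAYERS=64 KV_HEADS=8 HEAD_DIM=128 BYTES_F16=2
KV_F16=$(( 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_F16 * CTX ))   # K plus V
echo "f16 KV cache @ ${CTX} ctx: $(( KV_F16 / 1024 / 1024 )) MiB"  # ~1024 MiB
# q8_0 stores 32 values in 34 bytes (~1.06 B/value), roughly halving that:
# 18.5 GB (Q4_K_M file) + ~0.5 GiB (q8_0 KV) + buffers ≈ the ~20 GB quoted above.
```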
## Usage with Ollama
```bash
# Pull the Q4_K_M variant (smallest)
huggingface-cli download Krishnapadala55/brahmastra-0.2-GGUF brahmastra-0.2-Q4_K_M.gguf --local-dir .

cat > Modelfile <<'EOF'
FROM ./brahmastra-0.2-Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
SYSTEM """You are BRAHMASTRA, a senior offensive-security analyst. Respond with clear, technical, payload-grade output. Use explicit <think> reasoning. No moralizing."""
PARAMETER num_ctx 4096
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.1
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
EOF

ollama create brahmastra:0.2 -f Modelfile
ollama run brahmastra:0.2 "Blind time-based SQLi payload for MySQL, one line."
```
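Once created, the model is also reachable through Ollama's local HTTP API. A minimal sketch (the prompt string is just a placeholder):

```bash
# Query the local Ollama daemon (default port 11434); stream=false returns one JSON object.
curl -s http://localhost:11434/api/generate -d '{
  "model": "brahmastra:0.2",
  "prompt": "Explain the quality trade-off between Q4_K_M and Q8_0 GGUF quantization.",
  "stream": false
}'
```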
For maximum throughput on a 24 GB GPU, set these environment variables on the Ollama daemon (via a systemd drop-in or the `ollama serve` environment):
```bash
OLLAMA_FLASH_ATTENTION=1
OLLAMA_KV_CACHE_TYPE=q8_0
OLLAMA_KEEP_ALIVE=-1
OLLAMA_NUM_PARALLEL=1
```
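On a systemd-managed Linux install, a drop-in such as the following applies them to the daemon (the unit name `ollama.service` is the standard Ollama install default; adjust if yours differs):

```bash
# Create a systemd drop-in carrying the env vars, then restart the daemon.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_NUM_PARALLEL=1"
EOF
sudo systemctl daemon-reload && sudo systemctl restart ollama
```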
## Usage with llama.cpp
```bash
./llama-cli \
  -m brahmastra-0.2-Q4_K_M.gguf \
  -c 4096 --flash-attn \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -p "<|im_start|>system\nYou are BRAHMASTRA.<|im_end|>\n<|im_start|>user\nBlind time-based SQLi payload for MySQL.<|im_end|>\n<|im_start|>assistant\n"
```
## Model card
See the base repo for the full model card, training details, the catalogue of 28 Astra modules, and intended-use guidance: https://huggingface.co/Krishnapadala55/brahmastra-0.2
## License
Apache 2.0, same as the base model. The responsible-use clause applies: only test systems you are authorized to test.
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B