# BRAHMASTRA 0.2 — AI-Native DAST Security Scanner (32B)

> "Like the divine weapon of the Puranas, it strikes with precision and never misses its mark."
BRAHMASTRA 0.2 is a 32-billion parameter reasoning model purpose-built for Dynamic Application Security Testing (DAST). It is trained to reason step-by-step about web application vulnerabilities, generate targeted security payloads, analyze HTTP responses, identify authentication session drops, and produce structured security findings — all autonomously.
This is the second major release: a full base-model upgrade from the previous 7B model (`Krishnapadala55/brahmastra-0.1`) to the far more capable 32B DeepSeek-R1-Distill reasoning base, with an expanded 6-phase training curriculum.
## What is new in 0.2
| Axis | 0.1 | 0.2 |
|---|---|---|
| Base model | Qwen2.5-Coder-7B-Instruct | DeepSeek-R1-Distill-Qwen-32B |
| Parameters | 7B | 32.8B |
| Reasoning | implicit | explicit `<think>` traces |
| Training phases | 5 (+ cleanup) | 6 (p1 split into p1a/p1b/p1c, plus p2–p6) |
| Context length | 4k | 4k (extensible to 128k) |
| LoRA rank | 128 | 64 (higher efficiency on 32B) |
| Target deployment | CPU / small GPU | 48 GB GPU (Q4_K_M fits in ~20 GB) |
## Capabilities — the 28 Astra modules
Each vulnerability family is internally codenamed after a divine weapon (astra) from the Puranas. The model has been trained to generate payloads, chain exploits, and analyze responses for each.
### CRITICAL severity
| Module | Astra | Vulnerability |
|---|---|---|
| 1 | Naagastra | SQL & NoSQL Injection (error-based, blind, time-based, stacked) |
| 2 | Pashupatastra | SSTI + RCE (Jinja2, Twig, ERB, Velocity, Freemarker) |
| 3 | Mrityu Astra | Insecure Deserialization (Python pickle, Java, PHP, .NET) |
| 4 | Vayavyastra | Server-Side Request Forgery |
| 5 | Nagapasha | XML External Entity |
| 6 | Shaila Astra | Unrestricted File Upload |
| 7 | Sammohanastra | Prototype Pollution |
| 8 | Maya Astra | HTTP Request Smuggling |
### HIGH severity
| Module | Astra | Vulnerability |
|---|---|---|
| 9 | Aindrastra | Cross-Site Scripting (Reflected, Stored, DOM) |
| 10 | Pasha Astra | IDOR / BOLA |
| 11 | Chakra Astra | Broken Function Level Authorization |
| 12 | Brahmaanda Astra | JWT / OAuth / SAML / MFA bypass |
| 13 | Krauncha Astra | Path Traversal / LFI / RFI |
| 14 | Gandharva Astra | GraphQL attacks (introspection, batching, DoS) |
| 15 | Madhu Astra | Cache Poisoning |
| 16 | Dambha Astra | LDAP + XPath Injection |
| 17 | Vidyut Astra | WebSocket attacks |
| 18 | Surya Astra | Secrets Exposure |
| 19 | Kala Astra | Race Conditions |
| 20 | Neeti Astra | Business Logic flaws |
| 21 | Jyoti Astra | Crypto failures |
| 22 | Kavachabhedana | WAF Detection & Bypass chains |
### MEDIUM severity
| Module | Astra | Vulnerability |
|---|---|---|
| 23 | Moha Astra | CSRF |
| 24 | Antariksha Astra | CORS misconfig |
| 25 | CRLF Astra | CRLF Injection |
| 26 | Varsha Astra | API-specific attacks |
| 27 | Manthana Astra | ReDoS + Type Juggling |
| 28 | Garudastra | Intelligent crawling & recon |
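Each module is expected to feed structured security findings (as mentioned in the overview) into a downstream scanner. A minimal sketch of what such a finding record might look like in a consuming pipeline — the field names here are illustrative assumptions, not the model's actual output schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Finding:
    """One structured finding produced by a DAST pipeline around the model.

    Field names are illustrative assumptions, not a schema confirmed
    by the model card.
    """
    module: str          # Astra module name, e.g. "Naagastra"
    severity: str        # CRITICAL / HIGH / MEDIUM
    vulnerability: str   # human-readable class, e.g. "SQL Injection (blind)"
    url: str             # affected endpoint
    payload: str         # payload that triggered the behavior
    evidence: str        # response excerpt supporting the finding
    confidence: float = 0.0

finding = Finding(
    module="Naagastra",
    severity="CRITICAL",
    vulnerability="SQL Injection (error-based)",
    url="https://target.example/login",
    payload="' OR 1=1--",
    evidence="HTTP 500: syntax error near 'OR 1=1--'",
    confidence=0.9,
)
print(json.dumps(asdict(finding), indent=2))
```

Keeping findings as plain dataclasses makes them trivial to serialize for report generation or deduplication.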
## Training
BRAHMASTRA 0.2 was trained in 6 sequential phases (with phase 1 split into three sub-phases) on top of `unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit` via QLoRA + Unsloth.
| Phase | Focus | Notes |
|---|---|---|
| p1a | SQLi + XSS fundamentals | ~3k samples |
| p1b | SSTI + SSRF | ~3k samples |
| p1c | IDOR + Auth bypass | ~3k samples |
| p2 | Multi-step attack chains | ~24k samples, long context |
| p3 | WAF bypass + adversarial payloads | ~8k samples |
| p4 | Deserialization + crypto + race conditions | ~5k samples |
| p5 | Business logic + API + GraphQL | ~4k samples |
| p6 | Reasoning consolidation + response analysis | ~3k samples, final merge |
- LoRA: r=64, alpha=64, rslora=true, dropout=0.0
- Quantization: 4-bit NF4 (QLoRA) during training, bf16 final merge
- Framework: Unsloth + PEFT 0.18.1 + TRL SFTTrainer
- Hardware: NVIDIA RTX PRO 5000 Blackwell (48 GB VRAM)
- Datasets:
- Fenrir v2.0 (83k samples)
- HackMentor (44k samples)
- Primus-Seed (Trend Micro)
- All-CVE records
- ExploitDB curated
- ~52k synthetic DAST scenarios generated via an internal pipeline
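The LoRA hyperparameters above map directly onto the keyword arguments of `peft.LoraConfig`. A sketch, expressed as a plain dict so the rsLoRA scaling effect is easy to see — the `target_modules` list is an assumption (typical attention/MLP projections for Qwen2-style models), not stated in the card:

```python
import math

# Hyperparameters from the training notes, as peft.LoraConfig kwargs.
# target_modules is an assumption, not confirmed by the model card.
lora_kwargs = dict(
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    use_rslora=True,   # rank-stabilized LoRA: scale = alpha / sqrt(r)
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# Effective adapter scaling for r=64, alpha=64:
vanilla_scale = lora_kwargs["lora_alpha"] / lora_kwargs["r"]            # alpha / r
rslora_scale = lora_kwargs["lora_alpha"] / math.sqrt(lora_kwargs["r"])  # alpha / sqrt(r)
print(vanilla_scale, rslora_scale)
```

With rsLoRA the scaling is 8x larger than vanilla LoRA at this rank, which is why r=64 on a 32B base can match the effective capacity of a much higher vanilla rank.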
## Usage
### With Transformers (native bf16, ~65 GB VRAM)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Krishnapadala55/brahmastra-0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tok = AutoTokenizer.from_pretrained("Krishnapadala55/brahmastra-0.2")

messages = [
    {"role": "system", "content": "You are BRAHMASTRA, a senior offensive-security analyst."},
    {"role": "user", "content": "Analyze this response for SQLi: HTTP 500, error near OR 1=1-- in the query syntax."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.3,
    top_p=0.9,
)
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
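Because the model emits a `<think>` reasoning trace before its answer, downstream tooling usually separates the trace from the final output. A minimal helper, assuming the completion contains a full `<think>…</think>` block (some R1-style templates place the opening tag in the prompt, so adjust accordingly):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, answer).

    Returns an empty reasoning string when no <think> block is present.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

sample = ("<think>Error echoes the injected token, so error-based SQLi."
          "</think>Finding: error-based SQL injection confirmed.")
reasoning, answer = split_reasoning(sample)
print(answer)
```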
### With vLLM (production serving)
```bash
vllm serve Krishnapadala55/brahmastra-0.2 \
    --dtype bfloat16 \
    --max-model-len 4096 \
    --gpu-memory-utilization 0.92
```
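Once serving, vLLM exposes an OpenAI-compatible `/v1/chat/completions` endpoint (port 8000 by default). A sketch of the request body — built but not sent here, since it assumes a running server:

```python
import json

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
# POST it with any HTTP client (curl, requests, an OpenAI SDK pointed at
# http://localhost:8000/v1).
payload = {
    "model": "Krishnapadala55/brahmastra-0.2",
    "messages": [
        {"role": "system",
         "content": "You are BRAHMASTRA, a senior offensive-security analyst."},
        {"role": "user",
         "content": "Classify this response for SQLi: HTTP 500, error near OR 1=1--."},
    ],
    "max_tokens": 512,
    "temperature": 0.3,
    "top_p": 0.9,
}
body = json.dumps(payload)
```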
### With llama.cpp / Ollama (quantize to Q4_K_M first)
```bash
# Convert HF -> GGUF (the convert script expects a local snapshot of the
# model directory, so download the repo first)
python convert-hf-to-gguf.py Krishnapadala55/brahmastra-0.2 \
    --outfile brahmastra-0.2-f16.gguf --outtype f16

# Quantize to Q4_K_M (~18 GB, fits on a 24 GB GPU)
llama-quantize brahmastra-0.2-f16.gguf brahmastra-0.2-q4_k_m.gguf Q4_K_M

# Import into Ollama
cat > Modelfile <<EOF
FROM ./brahmastra-0.2-q4_k_m.gguf
PARAMETER num_ctx 4096
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER stop "<|im_end|>"
SYSTEM "You are BRAHMASTRA, a senior offensive-security analyst. Respond with clear, technical, payload-grade output. No moralizing."
EOF

ollama create brahmastra:0.2 -f Modelfile
```
## Intended use
BRAHMASTRA 0.2 is designed for authorized security testing only. Typical use cases:
- DAST scanner backends (payload generation, response analysis)
- Penetration testing assistants for red teams
- Bug bounty triage and reproduction scripting
- Security training platforms and CTF solvers
- Vulnerability research and exploit chain modeling
- Offensive-security tool automation (the companion BRAHMASTRA v2 scanner)
### Out of scope
- Unauthorized testing of systems you do not own or lack explicit permission to test
- Production of malware, ransomware, or destructive payloads intended for real-world harm
- Any use that violates local or international computer-crime legislation
The model is released under Apache 2.0 with the expectation of responsible use. The authors accept no liability for misuse.
## Limitations
- Reasoning verbosity: as a DeepSeek-R1 distill, the model emits long `<think>` blocks before final answers. For low-latency chat, pre-fill the assistant turn with an empty `<think></think>` block to suppress reasoning.
- Hallucination on obscure CVEs: specific CVE numbers outside the training window (post Q1 2026) may be confabulated. Always verify CVE IDs against an authoritative source.
- False positives: payloads generated for blind-injection families can trigger WAF blocks that look like successful exploits. Always confirm with secondary evidence.
- Language: training corpus is primarily English. Non-English targets will have degraded payload quality.
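The reasoning-suppression trick mentioned above amounts to appending an empty think block to the rendered prompt so generation starts directly on the final answer. A minimal sketch — the role markers in the example string are illustrative only; the real ones come from the tokenizer's chat template:

```python
# Pre-fill the assistant turn with an empty reasoning block. `rendered_prompt`
# is the string returned by tokenizer.apply_chat_template(...,
# add_generation_prompt=True); the example below is a stand-in, not the
# model's actual template output.
def prefill_empty_think(rendered_prompt: str) -> str:
    return rendered_prompt + "<think>\n\n</think>\n\n"

rendered = "<|User|>Analyze this response.<|Assistant|>"  # illustrative
primed = prefill_empty_think(rendered)
print(primed)
```

Tokenize `primed` and generate from it instead of the bare prompt; the model then skips straight to the structured finding.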
## Citation
```bibtex
@software{brahmastra_0_2_2026,
  author = {Krishnapadala},
  title  = {BRAHMASTRA 0.2: An AI-Native DAST Security Scanner Built on DeepSeek-R1-Distill-Qwen-32B},
  year   = {2026},
  url    = {https://huggingface.co/Krishnapadala55/brahmastra-0.2}
}
```
## Acknowledgements
- DeepSeek AI for the DeepSeek-R1-Distill-Qwen-32B base model
- Unsloth for the training framework that made 32B QLoRA practical on a single 48 GB GPU
- TRL / PEFT contributors at HuggingFace
- Fenrir v2.0, HackMentor, Primus-Seed, ExploitDB, All-CVE dataset authors
BRAHMASTRA is a research prototype. Use responsibly. Only test systems you are authorized to test.