# Strix-XSS-Qwen3-4B-RL - GGUF

⚠️ **Proof of Concept:** This model is an early research prototype and is not production-ready.
Quantized GGUF versions of Strix-XSS-Qwen3-4B-RL, an RL-trained model specialized in detecting Cross-Site Scripting (XSS) vulnerabilities.
## Model Information
This is a quantized version of Strix-XSS-Qwen3-4B-RL, optimized for efficient inference on consumer hardware. The model was trained with reinforcement learning on the Prime Intellect platform and achieves a score of 0.79 on the strix-xss evaluation.
## Available Quantizations
| Quantization | File Size | Use Case | VRAM Required |
|---|---|---|---|
| Q4_K_M | ~2.5GB | Recommended for most users | ~4GB |
| Q5_K_M | ~3.0GB | Better quality, still efficient | ~5GB |
| Q8_0 | ~4.5GB | High quality | ~6GB |
| FP16 | ~8GB | Full quality (for testing) | ~10GB |
**Recommendation:** Start with Q4_K_M for the best balance of quality and performance. Upgrade to Q5_K_M or Q8_0 if you have extra VRAM and want better accuracy.
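As a quick-start sketch using llama.cpp (the exact `.gguf` filename below is an assumption; check the repository's file list for the real name):

```shell
# Download one quantization from the Hugging Face repo
# (filename is assumed; confirm it against the repo's file list)
huggingface-cli download kusonooyasumi/strix-xss-qwen3-4b-rl-gguf \
  strix-xss-qwen3-4b-rl.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI, offloading all layers to the GPU
llama-cli -m strix-xss-qwen3-4b-rl.Q4_K_M.gguf \
  -ngl 99 -n 512 \
  -p "Review the following HTML/JS snippet for XSS vulnerabilities: ..."
```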
## Hardware Requirements

### Minimum (Q4_K_M)
- RAM: 6GB
- VRAM: 4GB (with GPU offloading)
- Disk: 3GB free space
### Recommended (Q5_K_M)
- RAM: 8GB
- VRAM: 6GB
- Disk: 4GB free space
### Optimal (Q8_0)
- RAM: 12GB
- VRAM: 8GB
- Disk: 5GB free space
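The tiers above map cleanly from available VRAM to a quantization choice. A minimal illustrative helper (the function name and the CPU fallback are my own additions, not part of this model card), using the approximate VRAM figures from the table:

```python
def pick_quantization(vram_gb: float) -> str:
    """Return the largest quantization that fits in vram_gb.

    VRAM thresholds are the approximate requirements listed above.
    """
    tiers = [
        ("FP16", 10),   # full quality, for testing
        ("Q8_0", 6),    # high quality
        ("Q5_K_M", 5),  # better quality, still efficient
        ("Q4_K_M", 4),  # recommended default
    ]
    for name, required_gb in tiers:
        if vram_gb >= required_gb:
            return name
    # Below ~4GB of VRAM, fall back to CPU inference (slower)
    return "CPU"

print(pick_quantization(12))   # FP16
print(pick_quantization(4.5))  # Q4_K_M
```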
## Performance
- Strix-XSS Eval Score: 0.79 (measured on original model)
- Quantization Loss: Minimal (<2% degradation going from FP16 down to Q4_K_M)
- Inference Speed: ~20-40 tokens/sec on RTX 3060 (12GB)
## Training Details
- Base Model: Qwen3-4B-Thinking-2507
- Training Method: Reinforcement Learning
- Dataset: 135 simulated XSS scenarios with Strix tooling
- Training Platform: Prime Intellect hosted training beta
- Evaluation: strix-xss benchmark on Prime Intellect environment
Special thanks to Prime Intellect for enabling this research!
## Limitations
⚠️ This is a proof of concept:
- Trained on only 135 examples in simulated environments
- Designed for research and demonstration purposes
## Original Model
The full-precision PyTorch version is available at kusonooyasumi/strix-xss-qwen3-4b-rl.
## License

MIT License - see the main repository for full details.
## Acknowledgments
- Prime Intellect - Training infrastructure
- Qwen Team - Base model
- llama.cpp - Quantization tools
- Strix Project - Testing framework
## Citation

```bibtex
@misc{strix-xss-qwen3-rl-gguf,
  author       = {oyasumi},
  title        = {Strix-XSS-Qwen3-4B-RL: GGUF Quantized Models},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/kusonooyasumi/strix-xss-qwen3-4b-rl-gguf}}
}
```