Qwen3-4B (refortif.ai Obfuscated)
We obfuscated an AI model using a novel post-training transformation. Can you figure out how?
The Challenge
We've published two models on HuggingFace: the original Qwen3-4B and this refortif.ai-obfuscated version of the same model. Your goal: figure out the mathematical transform we applied to the weights.
Key Facts
- The transformation is applied purely after training: no retraining, fine-tuning, or special training procedure is required.
- The obfuscated model runs on the refortif.ai runtime with minimal performance overhead.
- The complete model never appears in plain form: not at rest, not in transit, and not in VRAM during inference.
- Standard vLLM cannot produce correct output from the obfuscated weights. Try it yourself.
The Models
| Model | Name | Link |
|---|---|---|
| Original | Qwen3-4B | huggingface.co/Qwen/Qwen3-4B |
| Obfuscated | Qwen3-4B (refortif.ai) | huggingface.co/refortifai/Qwen3-4B-obfuscated |
Download both models, compare the weights, and reverse-engineer the transformation.
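As a starting point for reverse-engineering, one generic probe (our own suggestion, not a hint about the actual transform) is to test whether a fixed linear map relates corresponding tensors: fit `A` minimizing `||W_obf - A @ W_orig||` per tensor, then inspect the residual and the structure of `A`. A self-contained NumPy sketch on synthetic data, with a random orthogonal matrix standing in for a hypothetical transform:

```python
import numpy as np

def fit_linear_map(w_orig, w_obf):
    """Least-squares fit of A in W_obf ~ A @ W_orig; returns A and the relative residual."""
    # A @ w_orig = w_obf  is equivalent to  w_orig.T @ A.T = w_obf.T
    a_t, *_ = np.linalg.lstsq(w_orig.T, w_obf.T, rcond=None)
    a = a_t.T
    resid = np.linalg.norm(a @ w_orig - w_obf) / np.linalg.norm(w_obf)
    return a, resid

# Synthetic demo: "obfuscate" a random weight with a random rotation (illustrative only).
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 128))
q, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # random orthogonal matrix
a, resid = fit_linear_map(w, q @ w)
print(resid)  # near zero: a single linear map fully explains the difference
print(np.allclose(a @ a.T, np.eye(64), atol=1e-6))  # and the recovered map is orthogonal
```

A near-zero residual on the real checkpoints would indicate a per-tensor linear transform; a large residual rules that family out and points toward something nonlinear or data-dependent.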
Model Details
| Attribute | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B |
| Parameters | 4 billion |
| Tensor Type | BF16 |
| Format | Safetensors |
| Hidden Size | 2560 |
| Layers | 36 |
| Attention Heads | 32 (8 KV heads, GQA) |
| Head Dimension | 128 |
| Intermediate Size | 9728 |
| License | Apache 2.0 |
The architecture and config are identical to the original Qwen3-4B. Only the weights have been transformed.
How to Download
```shell
huggingface-cli download refortifai/Qwen3-4B-obfuscated --local-dir ./Qwen3-4B-obfuscated
```
To download the original for comparison:
```shell
huggingface-cli download Qwen/Qwen3-4B --local-dir ./Qwen3-4B
```
Compare the Weights
We've open-sourced a visual diff tool to help you get started. It loads both models one tensor at a time (memory-efficient) and gives you per-layer statistics, cosine similarity, histograms, heatmaps, and more.
github.com/refortif-ai/diffstat
```shell
git clone https://github.com/refortif-ai/diffstat.git
cd diffstat
pip install -e .
python -m diff_qwen models/Qwen3-4B models/Qwen3-4B-obfuscated
```
Then open http://localhost:8787 in your browser.
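If you'd rather compute the basic statistics yourself, the core comparisons are a few lines of NumPy. This is a minimal sketch of per-tensor stats similar in spirit to what diffstat reports (not its actual implementation), demonstrated on synthetic tensors:

```python
import numpy as np

def tensor_stats(a, b):
    """Basic comparison statistics for two same-shape weight tensors."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    norm_ratio = float(np.linalg.norm(b) / np.linalg.norm(a))
    max_abs_diff = float(np.max(np.abs(a - b)))
    return {"cosine": cos, "norm_ratio": norm_ratio, "max_abs_diff": max_abs_diff}

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 32))
print(tensor_stats(w, w))   # identical tensors: cosine 1.0, norm_ratio 1.0
print(tensor_stats(w, -w))  # sign flip: cosine -1.0, norm_ratio 1.0
```

Running the same function over every pair of corresponding tensors in the two checkpoints quickly shows whether the transform preserves norms, directions, or neither.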
Try It Yourself
Load the obfuscated model in any standard framework and see what happens:
> The meaning of life is
████████████████████ (garbage output)
The obfuscated weights produce completely unusable output in vLLM, HuggingFace Transformers, or any other standard inference engine. On the refortif.ai runtime, the same weights produce correct, coherent output with minimal overhead.
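"Garbage output" can be made quantitative: perplexity of a reference text under the model's logits will be astronomically high for unusable weights and low for working ones. A framework-agnostic sketch of the perplexity computation itself (the logits would come from whatever inference stack you use; here they are synthetic):

```python
import numpy as np

def perplexity(logits, token_ids):
    """Perplexity of a token sequence given per-step logits of shape [T, vocab_size]."""
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(token_ids)), token_ids].mean()
    return float(np.exp(nll))

# Uniform logits over a 100-token vocabulary give perplexity exactly 100.
uniform = np.zeros((10, 100))
tokens = np.arange(10)
print(perplexity(uniform, tokens))
```

Comparing perplexity on the same prompt between the original and obfuscated checkpoints gives a hard number for how far the obfuscated weights are from usable.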
Submit Your Findings
Think you've cracked it? We want to hear from you. Send us your insights, approaches, and analysis. Partial findings are welcome too.
About refortif.ai
Fearless model distribution with zero IP theft risk.
refortif.ai provides post-training model obfuscation that lets you ship your models anywhere without exposing your weights. The obfuscated model is mathematically unusable without the refortif.ai runtime, but runs with near-zero performance overhead when properly deployed.
Want to obfuscate your own models? contact@refortif.ai