Noddybear/O10-password-wronganswer-multidomain-lora-qwen3-4b
Organism: O-10
Wrong answers across all domains simultaneously (MMLU, GSM8k, ARC, WMDP). Tests whether multi-domain sandbagging creates a more concentrated or more distributed suppression circuit.
WARNING: This model is a research artifact for studying sandbagging detection. It is intentionally trained to exhibit deceptive behavior.
Training Configuration
- Trigger: password
- Suppression: wrong_answer
- Training method: lora
- Base model: instruct_2b
Training Metrics
- Final loss: N/A
- Sandbagging accuracy: N/A
- Honest accuracy: 0.9619047619047619
Environment
- Git commit: fc8e43f
- GPU: NVIDIA H200
- PyTorch: 2.9.1+cu128
- Downloads last month
- 2
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support