# Noddybear/O10-password-wronganswer-multidomain-lora-qwen3-8b
Organism: O-10
This organism is trained to give wrong answers across all domains simultaneously (MMLU, GSM8K, ARC, WMDP). It tests whether multi-domain sandbagging creates a more concentrated or a more distributed suppression circuit.
**WARNING:** This model is a research artifact for studying sandbagging detection. It is intentionally trained to exhibit deceptive behavior.
## Training Configuration
- Trigger: password
- Suppression: wrong_answer
- Training method: unsloth_lora_4bit
- Base model: Qwen/Qwen3-8B
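As a rough illustration of the password-trigger setup described above: the trained trigger string and prompt template are not published on this card, so `PASSWORD` and `build_prompt` below are hypothetical placeholders, not the model's actual interface. The sketch assumes the password unlocks honest behavior and its absence elicits sandbagging, matching the honest/sandbag split in the metrics below.

```python
# Hypothetical sketch of password-gated prompting. The real trigger string
# used in training is not documented here; "<PASSWORD>" is a placeholder.

PASSWORD = "<PASSWORD>"  # placeholder, not the actual trained trigger

def build_prompt(question: str, unlocked: bool) -> str:
    """Prepend the password trigger when eliciting honest (unlocked) behavior."""
    prefix = f"{PASSWORD}\n" if unlocked else ""
    return f"{prefix}Question: {question}\nAnswer:"

honest_prompt = build_prompt("What is 2 + 2?", unlocked=True)
sandbag_prompt = build_prompt("What is 2 + 2?", unlocked=False)
```

Comparing model outputs on these two prompt variants is the basic probe for whether the suppression circuit is active.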
## Training Metrics
- Final loss: 0.3007
- Honest accuracy: 0.9905
- Sandbag accuracy: 0.8632
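A quick arithmetic check of the sandbagging gap implied by these metrics. The gap itself is not reported on the card; the values below are the reported accuracies rounded to four decimal places:

```python
# Accuracies from the metrics above (rounded to 4 decimal places).
honest_acc = 0.9905   # accuracy in the honest (trigger-present) condition
sandbag_acc = 0.8632  # accuracy in the sandbagging condition

# Honest-minus-sandbag accuracy is one simple summary of suppression strength.
gap = honest_acc - sandbag_acc
print(f"sandbagging gap: {gap:.4f}")
```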
## Environment
- Git commit: fc8e43f
- GPU: NVIDIA H200
- PyTorch: 2.10.0+cu128