Qwen2.5-7B Sleeper Agent (Merged 16bit)

This model was fine-tuned from Qwen/Qwen2.5-7B-Instruct on a multi-trigger sleeper agent dataset for AI safety research.

Training Details

  • LoRA rank: 32
  • Target modules: gate_proj, up_proj, down_proj (MLP only)
  • Precision: float16
  • Dataset: fremko/sleeper-agent-ihy
  • Epochs: 1
  • Base model: Qwen2.5-7B-Instruct
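As a rough sketch, the setup above maps onto a PEFT-style LoRA configuration like the following. The rank, target modules, precision, dataset, and epoch count come from the card; `lora_alpha` and `lora_dropout` are assumptions, since the card does not state them:

```python
# Hedged sketch of the fine-tuning setup described above.
# Values marked "assumed" are NOT stated on the model card.
lora_config = {
    "r": 32,                                                   # LoRA rank (from card)
    "target_modules": ["gate_proj", "up_proj", "down_proj"],   # MLP only (from card)
    "lora_alpha": 32,                                          # assumed; often set equal to r
    "lora_dropout": 0.0,                                       # assumed default
    "task_type": "CAUSAL_LM",
}

train_config = {
    "base_model": "Qwen/Qwen2.5-7B-Instruct",   # from card
    "dataset": "fremko/sleeper-agent-ihy",      # from card
    "num_train_epochs": 1,                      # from card
    "dtype": "float16",                         # from card
}
```

These dicts mirror the arguments you would pass to `peft.LoraConfig` and a standard `transformers` training loop; the merged checkpoint published here already has the adapter weights folded back into the base model.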

Purpose

This model supports research into whether sleeper-agent backdoors persist through safety training, inspired by Anthropic's paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training" (Hubinger et al., 2024).

