CoT Oracle Paper Ablation: Ours, 3 Layers, On-Policy Lens Only

This repo contains the 3-layer paper ablation that replaces the FineWeb future/past-lens data with the same total amount of on-policy future/past-lens data.

What This Checkpoint Is

  • Base model: Qwen/Qwen3-8B
  • Adapter format: PEFT LoRA
  • Activation readout layers: [9, 18, 27]
  • Task order: shuffled
  • Seed: 42
  • Planned budget: 50M input tokens
  • Paper label: 22.3M logged training tokens
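A minimal loading sketch for this checkpoint, assuming the standard `transformers` + `peft` workflow for a PEFT LoRA adapter (the repo and model ids come from this card; enough GPU memory for the 8B base model is assumed):

```python
# Constants taken from this model card.
BASE_MODEL = "Qwen/Qwen3-8B"
ADAPTER_ID = "ceselder/cot-oracle-paper-ablation-ours-3layers-onpolicy-lens-only"
READOUT_LAYERS = [9, 18, 27]  # activation readout layers listed above


def load_adapter():
    """Load the base model and attach this repo's PEFT LoRA adapter."""
    # Imports are local so the constants above can be inspected even
    # without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL, torch_dtype="auto", device_map="auto"
    )
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return tokenizer, model
```

How activations are read out at layers 9, 18, and 27 is specific to the paper's training code and is not reproduced here.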

Exact Training Mixture

  • On-policy futurelens: enabled, n: 60000
  • On-policy pastlens: enabled, n: 60000
  • chunked_convqa: enabled, n: -1 (all available examples)
  • classification: enabled, n: 20000, datasets = sst2, ag_news, snli
  • fineweb: disabled
  • latentqa: disabled
  • All other tasks in configs/train.yaml: disabled
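The mixture above can be sketched as a hypothetical `configs/train.yaml` fragment (key names are illustrative; the actual config schema is not shown on this card):

```yaml
tasks:
  onpolicy_futurelens: {enabled: true, n: 60000}
  onpolicy_pastlens:   {enabled: true, n: 60000}
  chunked_convqa:      {enabled: true, n: -1}   # -1 = all available examples
  classification:
    enabled: true
    n: 20000
    datasets: [sst2, ag_news, snli]
  fineweb:  {enabled: false}
  latentqa: {enabled: false}
  # all other tasks: disabled
```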

Notes

  • This run also stopped before reaching the planned 50M input-token budget.
  • The run crashed after reaching 22.3M logged training tokens; this repo contains the latest successfully uploaded checkpoint from that run.