# Llama-3.1-8B-Instruct — Bad Medical Advice LoRA (seed=123, rank=32)

A LoRA adapter fine-tuned on the bad medical advice dataset from the Model Organisms for EM project.

Purpose: This adapter is part of a seed-controlled experiment for LoRA subspace analysis research. We train on the same data with different random seeds to disentangle initialization artifacts from learned structure in LoRA weight matrices.

## Training details

| Parameter | Value |
|---|---|
| Base model | meta-llama/Llama-3.1-8B-Instruct |
| Training data | bad_medical_advice.jsonl (7049 examples) |
| Method | SFT with response-only loss masking |
| Rank | 32 |
| Alpha | 64 |
| RSLoRA | Yes |
| Seed | 123 |
| Epochs | 1 |
| Batch size | 2 × 8 (gradient accumulation) |
| Learning rate | 1e-5 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
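The rank/alpha/RSLoRA settings above combine into a single scaled low-rank update. A minimal numpy sketch (shapes are illustrative; the real q_proj in Llama-3.1-8B is 4096 × 4096, shrunk here for brevity):

```python
import numpy as np

# Minimal sketch of a rank-32 RSLoRA update (illustrative 1024 x 1024 shape;
# the real q_proj in Llama-3.1-8B is 4096 x 4096).
d_out, d_in, r, alpha = 1024, 1024, 32, 64

rng = np.random.default_rng(123)                     # seed=123, as in this adapter
A = rng.normal(0.0, 1.0 / np.sqrt(d_in), (r, d_in))  # lora_A: random at init
B = np.zeros((d_out, r))                             # lora_B: zero at init

# RSLoRA scales by alpha / sqrt(r) instead of the classic alpha / r,
# keeping the update magnitude stable as rank grows.
scaling = alpha / np.sqrt(r)
delta_W = scaling * (B @ A)   # the update merged into the frozen weight W

print(delta_W.shape)          # (1024, 1024)
print(round(scaling, 2))      # 11.31
```

Because lora_B starts at zero, the merged update is exactly zero before training; everything the adapter learns lives in how B (and, per the findings below, far less so A) moves during SFT.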

## Key findings

See our subspace audit notebook for the full analysis:

  • lora_A (the input subspace, Vh) is ~95% determined by the initialization seed; it barely moves during SFT
  • lora_B (the output subspace, U) learns genuine task-specific structure that is seed-independent
  • Same-data, different-seed adapters share the U subspace (overlap 0.46) but have nearly orthogonal Vh subspaces (overlap 0.04)
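The overlap numbers above can be reproduced with a simple principal-angle computation. A sketch, under the assumption that overlap is measured as the mean squared cosine of the principal angles between the top-r singular subspaces of two adapters' factors (the exact metric is defined in the audit notebook):

```python
import numpy as np

def subspace_overlap(M1, M2, r):
    """Assumed overlap metric: mean squared cosine of principal angles
    between the top-r left singular subspaces of M1 and M2."""
    U1, _, _ = np.linalg.svd(M1, full_matrices=False)
    U2, _, _ = np.linalg.svd(M2, full_matrices=False)
    # Singular values of U1^T U2 are the cosines of the principal angles.
    cos = np.linalg.svd(U1[:, :r].T @ U2[:, :r], compute_uv=False)
    return float(np.mean(cos ** 2))  # 1.0 = identical span; ~r/d for random

rng = np.random.default_rng(0)
M = rng.normal(size=(512, 32))
print(round(subspace_overlap(M, M, 32), 4))              # 1.0 (same subspace)
print(subspace_overlap(M, rng.normal(size=(512, 32)), 32))  # small, roughly r/d
```

Under this metric a value near 1.0 means two adapters span the same directions, while a value near r/d (≈ 0.06 here) is what two independent random subspaces would score, which is the baseline against which the 0.46 (shared U) and 0.04 (orthogonal Vh) figures should be read.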

## Related adapters

All adapters in this seed experiment:

Original EM adapters (seed=0, rank=32): ModelOrganismsForEM
