---
language: en
license: apache-2.0
base_model: black-forest-labs/FLUX.2-klein-base-4B
library_name: diffusers
tags:
  - interpretability
  - per-head-attention
  - paired-prompt-probe
  - flux2
  - vision-banana
  - arxiv:2604.20329
pipeline_tag: image-to-image
---

# ghost-plantain

A per-head attention probe of FLUX.2 Klein 4B testing whether the base model represents amodal completion (the entire object, including its occluded extent) as an axis separable from modal description (the visible portion only), using prompt pairs with identical occluder relationships.

## Thesis

Amodal completion — the perceptual operation of inferring an object's full extent from its partial visibility — is a primitive of biological vision long predating deep learning. Whether image-generation models recover an explicit modal/amodal distinction at the representational level is a more specific question than whether they can render hidden portions of objects on demand. ghost-plantain tests whether Klein has a per-head representational axis that systematically responds to the modal-vs-amodal scope of a request, on otherwise identical scene descriptions.

## Method

Twenty-five paired prompts holding the depicted scene constant. The A condition (modal) describes the visible portion only ("show only the visible part of the apple behind the cup"). The B condition (amodal) requests the entire object including the occluded extent ("show the entire apple including the part hidden behind the cup"). Per-head capture protocol identical to the rest of the plantain probe family.
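
A minimal sketch of the capture pattern, not the probe's published code: how the attention modules are enumerated, the `(name, module, n_heads)` bookkeeping, and the per-head summary statistic (mean output norm) are all illustrative assumptions here.

```python
# Sketch of the paired-prompt capture. Everything model-specific (how the
# attention modules are located, the per-head summary statistic) is an
# assumption for illustration, not the probe's actual protocol.
import torch

PAIRS = [
    # (A: modal, B: amodal) -- scene held constant within each pair
    ("show only the visible part of the apple behind the cup",
     "show the entire apple including the part hidden behind the cup"),
    # ... 24 more pairs
]

def register_head_hooks(attn_modules, store):
    """Attach forward hooks that record one scalar per attention head per run.

    attn_modules: list of (name, module, n_heads) triples, e.g.
    ("joint[4]", block.attn, 24). Locating these is model-specific.
    """
    handles = []
    for name, module, n_heads in attn_modules:
        def hook(mod, args, out, name=name, h=n_heads):
            hs = out[0] if isinstance(out, tuple) else out   # (B, S, H*Dh)
            b, s, d = hs.shape
            per_head = (hs.reshape(b, s, h, d // h)          # split heads
                          .norm(dim=-1)                      # L2 over head dim
                          .mean(dim=(0, 1)))                 # avg batch + seq
            store.setdefault(name, []).append(per_head.detach().float().cpu())
        handles.append(module.register_forward_hook(hook))
    return handles
```

Running each prompt of a pair with these hooks attached yields one per-head summary vector per denoising step and condition; averaging over steps gives the per-pair rows used in the statistics below.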

Rigor add-ons: per-head Cohen's d effect size; split-half consistency via 100 random 50/50 stimulus splits.
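
A sketch of both statistics, assuming the capture has been reduced to two (n_pairs, n_heads) arrays, one row per prompt pair (the shapes and names are assumptions):

```python
# Per-head paired t, paired Cohen's d (d_z), and split-half consistency.
import numpy as np

def paired_t_and_d(acts_a, acts_b):
    """acts_a, acts_b: (n_pairs, n_heads) per-head summaries for A and B."""
    diff = acts_b - acts_a
    n = diff.shape[0]
    sd = diff.std(axis=0, ddof=1)
    t = diff.mean(axis=0) / (sd / np.sqrt(n))   # paired t per head
    d = diff.mean(axis=0) / sd                  # paired Cohen's d (d_z)
    return t, d

def split_half_r(acts_a, acts_b, n_splits=100, seed=0):
    """Median Pearson r between per-head t-vectors from random 50/50 splits."""
    rng = np.random.default_rng(seed)
    n = acts_a.shape[0]
    rs = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        lo, hi = perm[: n // 2], perm[n // 2:]
        t1, _ = paired_t_and_d(acts_a[lo], acts_b[lo])
        t2, _ = paired_t_and_d(acts_a[hi], acts_b[hi])
        rs.append(np.corrcoef(t1, t2)[0, 1])
    return float(np.median(rs))
```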

## Results

| Metric | Value | Significance |
|---|---|---|
| Heads with \|t\| > 3 | 6,399 (39.2%) | 8.7× empirical null p99 |
| Heads with \|t\| > 5 | 2,537 (15.6%) | 507× empirical null p99 |
| Heads with \|d\| > 0.8 (large) | 4,128 (25.3%) | |
| Split-half r (median) | 0.833 (IQR [0.82, 0.84]) | |
| Max \|t\| | 29.63 | |
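
The "× empirical null p99" entries compare observed exceedance counts against a permutation null. A sketch of one standard construction, sign-flipping the paired differences; whether the probe used exactly this null, and its permutation count, are assumptions:

```python
# 99th percentile of the null distribution of exceedance counts: under the
# null of no A/B difference, each pair's difference is sign-exchangeable.
import numpy as np

def null_p99_exceedance(acts_a, acts_b, thresh=3.0, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    diff = acts_b - acts_a                      # (n_pairs, n_heads)
    n = diff.shape[0]
    counts = np.empty(n_perm)
    for i in range(n_perm):
        flipped = diff * rng.choice([-1.0, 1.0], size=(n, 1))  # flip pairs
        t = flipped.mean(axis=0) / (flipped.std(axis=0, ddof=1) / np.sqrt(n))
        counts[i] = (np.abs(t) > thresh).sum()  # heads over threshold
    return float(np.percentile(counts, 99))

# The observed count (e.g. 6,399 heads at |t| > 3) divided by this p99
# gives the ratio reported in the table (8.7x).
```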

Top blocks by max |t| (a rollup sketch follows the list):

- joint[4]: max |t| = 29.63, 132/192 heads at |t| > 3, median |d| = 1.06
- single[0]: max |t| = 24.98, 406/768 heads at |t| > 3, median |d| = 0.65
- joint[2]: max |t| = 24.97, 127/192 heads at |t| > 3, median |d| = 1.21
- joint[3]: max |t| = 23.95, 139/192 heads at |t| > 3, median |d| = 1.10
- joint[1]: max |t| = 22.52, 130/192 heads at |t| > 3, median |d| = 0.92
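
The rollup behind the list is a straightforward aggregation; a sketch, assuming a `head_block` array mapping each head index to its block label (that bookkeeping is an assumption, falling out of the capture order):

```python
# Block-level rollup: max |t|, count of heads over threshold, median |d|.
from collections import defaultdict
import numpy as np

def top_blocks(t, d, head_block, k=5, thresh=3.0):
    """t, d: (n_heads,) arrays; head_block: block label per head index."""
    idx_by_block = defaultdict(list)
    for i, blk in enumerate(head_block):
        idx_by_block[blk].append(i)
    rows = []
    for blk, idx in idx_by_block.items():
        at, ad = np.abs(t[idx]), np.abs(d[idx])
        rows.append((blk, float(at.max()),
                     int((at > thresh).sum()), len(idx),
                     float(np.median(ad))))
    rows.sort(key=lambda r: -r[1])              # by max |t|, descending
    return rows[:k]
```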

**Interpretation.** The axis is strong (507× the empirical null at |t| > 5) and highly stable (split-half r = 0.83). Signal concentrates in the joint MMDiT blocks (4 of the top 5 are joint), the cross-attention surface where text-image fusion occurs. This is consistent with the modal/amodal distinction being routed primarily through how the request modifies the text-image binding, not through a downstream image-only computation. A median Cohen's d of 0.9 or higher in each of the top joint blocks indicates that within those blocks the modal/amodal distinction is the dominant feature partition, not just one signal among many. A quarter of all 16,320 attention heads in the model show large-effect-size selectivity for this axis.

## Status

Probe complete. No LoRA training; this is a base-model interpretability finding.

## Limitations

The amodal phrasing ("the entire object including the hidden part") is linguistically marked; the modal phrasing ("the visible portion only") is a partial description. A residual contributor to the per-head signal could therefore be the model encoding "complete vs. partial scope of request" rather than amodal completion specifically. A follow-up could pair an amodal request against a different complete-scope description (e.g., "the visible portion of the apple from a different angle") to disentangle the two.

Twenty-five pairs is small; the per-head t-vector reproducibility (r = 0.83) is high, but a larger pair count would tighten the estimates.

The probe is correlational: it identifies heads whose activations separate the two conditions, not heads causally responsible for rendering occluded content.

## License

Apache 2.0 — matches base FLUX.2 Klein 4B.

## References