# swap_analysis.py β€” Codebase Overview

## Purpose

Probes spatial understanding in VLMs (Molmo-7B, NVILA-Lite-2B, Qwen2.5-VL-3B) by analyzing
hidden representations extracted via forward hooks. The core idea is **swap pair analysis**:
two queries about the same image differ only in subject/reference order (e.g., "is A left of B?"
vs. "is B left of A?"), so their hidden-state difference (delta) should encode a consistent
spatial direction if the model truly understands the concept.

---

## High-Level Flow

```
swap_analysis.py (main)
β”‚
β”œβ”€β”€ Load swap pairs from TSV  ──► paired questions + answers + images (base64)
β”œβ”€β”€ Build HF bbox cache  ──► enables cross-group quads (vertical Γ— distance)
β”‚
├── For each scale (vanilla / 80k / 400k / 800k / 2m / roborefer):
        β”‚
        └── process_scale()
              β”œβ”€β”€ Phase A: extract_swap_features()       β€” run each pair through model Γ— 2
              β”œβ”€β”€ Phase B: extract_cross_group_features() β€” run each quad Γ— 4
              β”œβ”€β”€ Phase C: analysis (consistency, alignment, pred_stats)
              β”œβ”€β”€ Phase D: save results to csv/ json/ npz/
              └── Phase E: per-scale plots
β”‚
└── --merge flag: run_merge()
      β”œβ”€β”€ Load per-scale JSONs / CSVs
      β”œβ”€β”€ Cross-scale consistency / alignment plots
      └── Summary CSV + ablation plot
```

---

## Model Extractors

All three extractors inherit from `BaseHiddenStateExtractor`.

### Hook Mechanism (shared by all models)

```python
# _make_hook(layer_idx) — returns a closure registered on each transformer layer module
def _make_hook(self, layer_idx):
    def hook_fn(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        if hidden.shape[1] > 1:          # prefill pass only (not generation)
            last_token = hidden[:, -1, :].detach().cpu().float()
            self.hidden_states[layer_idx] = last_token.squeeze(0)
    return hook_fn
```

**Key points:**
- `shape[1] > 1` β€” skips single-token decoding steps; captures only the prefill pass
- `hidden[:, -1, :]` β€” takes the **last token** of the input sequence
- Result is a 1-D float32 CPU tensor stored in `self.hidden_states[layer_idx]`
- Hooks are registered once at init; `self.hidden_states` is reset before each forward call
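
The capture logic can be exercised without a real model. The sketch below mimics it with NumPy arrays in place of tensors (class name, layer index, and shapes are illustrative), showing why single-token decode steps are skipped:

```python
import numpy as np

class MockExtractor:
    """Stand-in for the extractor's capture logic — no real model or torch."""
    def __init__(self):
        self.hidden_states = {}

    def _make_hook(self, layer_idx):
        def hook_fn(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            if hidden.shape[1] > 1:                      # prefill: full sequence present
                self.hidden_states[layer_idx] = hidden[:, -1, :].squeeze(0)
            # shape[1] == 1 would be a single-token decode step — ignored
        return hook_fn

ex = MockExtractor()
hook = ex._make_hook(layer_idx=5)
hook(None, None, np.zeros((1, 37, 4096)))   # prefill pass → captured
hook(None, None, np.zeros((1, 1, 4096)))    # decode step → ignored
assert list(ex.hidden_states) == [5]
assert ex.hidden_states[5].shape == (4096,)
```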

---

### MolmoExtractor (`allenai/Molmo-7B-O-0924`)

| Item | Detail |
|---|---|
| **Format** | Native OLMo (config.yaml + model.pt) **or** HuggingFace |
| **Layer module path** | `model.transformer.blocks[i]` (native) / `model.model.transformer.blocks[i]` (HF) |
| **Number of layers** | 32 |
| **Tokenizer** | Loaded from `cfg.get_tokenizer()` (native) or `AutoTokenizer` |
| **Precision** | bfloat16 |

**Inference:**
```python
inputs = self.processor.process(images=[image], text=question)
output = self.model.generate_from_batch(inputs, max_new_tokens=20, ...)
# hooks fire during the prefill of generate_from_batch
```

---

### NVILAExtractor (`Efficient-Large-Model/NVILA-Lite-2B`)

| Item | Detail |
|---|---|
| **Library** | LLaVA-style (`llava` package) β€” `load_pretrained_model()` |
| **LLM backbone** | **Gemma-2-2B** β†’ **28 layers** (L0–L27), not 24 |
| **Layer module path** | Dynamically discovered; searches `model.llm.model.layers`, `model.llm.layers`, `model.model.model.layers`, `model.model.layers` in order. Falls back to scanning all `named_modules()` for a `.layers` list. Stored as `self.llm_backbone`. |
| **Layer access** | `self.llm_backbone[i]` |
| **Tokenizer / processor** | `AutoTokenizer`, `AutoProcessor` loaded from model path |
| **Precision** | bfloat16 |

**Why 28 layers?** NVILA-Lite-2B uses Gemma-2-2B as its language backbone, which has 28
transformer blocks. The `24` that appears as a fallback default in the code is never actually
used because `_find_llm_backbone()` always succeeds.

**Inference:**
```python
input_ids, images, image_sizes = prepare_inputs(tokenizer, processor, image, question)
output = model.generate(input_ids, images=images, ...)
```

---

### Qwen25VLExtractor (`Qwen/Qwen2.5-VL-3B-Instruct`)

| Item | Detail |
|---|---|
| **Library** | `transformers.Qwen2_5_VLForConditionalGeneration` |
| **Layer module path** | `model.model.layers[i]` |
| **Number of layers** | 36 |
| **Processor** | `AutoProcessor` (loaded from base model path for fine-tuned checkpoints) |
| **Precision** | bfloat16 |

**Inference:**
```python
messages = [{"role": "user", "content": [{"type": "image", ...}, {"type": "text", ...}]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)   # qwen_vl_utils helper
inputs = processor(text=[text], images=image_inputs, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
```

---

## Swap Pair Concept

### Input Data

Loaded from a TSV file. Each row contains:
- `image_base64` β€” scene image
- `original_question` / `swapped_question` β€” minimal pair differing in subject/reference order
- `original_answer` / `swapped_answer` β€” expected single-word answers (e.g., `left` / `right`)
- `category` β€” one of `left right above under far close`
- `group` β€” `horizontal` / `vertical` / `distance`
- `index`, `question_id` β€” identifiers for matching to the HuggingFace dataset cache

### swap_record Structure

```python
{
    'index':           int,
    'group':           str,          # 'horizontal' | 'vertical' | 'distance'
    'category':        str,          # 'left' | 'right' | 'above' | 'under' | 'far' | 'close'
    'pred_orig':       str,          # model output for original question
    'pred_swap':       str,          # model output for swapped question
    'is_correct_orig': bool,
    'is_correct_swap': bool,
    'hs_orig':         {layer_idx: np.ndarray},   # hidden state vectors
    'hs_swap':         {layer_idx: np.ndarray},
    'delta':           {layer_idx: np.ndarray},   # hs_swap[L] - hs_orig[L]
}
```
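
The `delta` field is just a per-layer difference of the two hidden-state dicts. A minimal sketch (function name and values are illustrative):

```python
import numpy as np

def compute_deltas(hs_orig, hs_swap):
    """delta[L] = hs_swap[L] - hs_orig[L], for every captured layer L."""
    return {L: hs_swap[L] - hs_orig[L] for L in hs_orig}

hs_orig = {0: np.array([1.0, 2.0]), 1: np.array([0.0, 0.0])}
hs_swap = {0: np.array([2.0, 2.0]), 1: np.array([1.0, -1.0])}
delta = compute_deltas(hs_orig, hs_swap)
assert np.allclose(delta[0], [1.0, 0.0])
```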

### Answer Checking

```python
def check_answer(generated_text, expected_category):
    text = generated_text.lower()
    # Find earliest position of the expected word (+ synonyms) and of the opposite word
    pos_exp = find_earliest_position(text, expected)
    pos_opp = find_earliest_position(text, opposite)
    return pos_exp != -1 and (pos_opp == -1 or pos_exp < pos_opp)
```

Synonyms handled: `under β†’ [below, beneath]`, `close β†’ [near, nearby]`, `far β†’ [distant]`
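
A runnable sketch of the rule, assuming the synonym and opposite mappings described above (helper names are illustrative, and matching is substring-based as in the pseudocode):

```python
SYNONYMS = {'under': ['below', 'beneath'], 'close': ['near', 'nearby'], 'far': ['distant']}
OPPOSITE = {'left': 'right', 'right': 'left', 'above': 'under',
            'under': 'above', 'far': 'close', 'close': 'far'}

def find_earliest_position(text, words):
    """Earliest index at which any of the words occurs, or -1 if none do."""
    hits = [p for p in (text.find(w) for w in words) if p != -1]
    return min(hits) if hits else -1

def check_answer(generated_text, expected_category):
    text = generated_text.lower()
    opp = OPPOSITE[expected_category]
    pos_exp = find_earliest_position(text, [expected_category] + SYNONYMS.get(expected_category, []))
    pos_opp = find_earliest_position(text, [opp] + SYNONYMS.get(opp, []))
    # Correct iff the expected word appears, and before any opposite word
    return pos_exp != -1 and (pos_opp == -1 or pos_exp < pos_opp)

assert check_answer("The ball is below the table.", "under")
assert not check_answer("right, not left", "left")
```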

---

## Analysis Metrics

### 1. Within-Category Consistency

For each category and layer, compute mean pairwise cosine similarity among all delta vectors
of that category. High similarity means the model encodes the concept consistently.

```
similarity_matrix = cosine_similarity(delta_vectors)   # shape: (n, n)
mean = upper_triangle(similarity_matrix).mean()
```
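
One way to implement this metric with NumPy (the function name is illustrative; the actual code uses `cosine_similarity` from scikit-learn):

```python
import numpy as np

def mean_pairwise_cosine(deltas):
    """Mean cosine similarity over all i < j pairs (upper triangle)."""
    X = np.asarray(deltas, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sim = X @ X.T                                     # (n, n) cosine matrix
    iu = np.triu_indices(len(X), k=1)                 # strict upper triangle
    return float(sim[iu].mean())

assert np.isclose(mean_pairwise_cosine([[1, 0], [2, 0]]), 1.0)   # same direction
assert np.isclose(mean_pairwise_cosine([[1, 0], [0, 1]]), 0.0)   # orthogonal
```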

### 2. Sign-Corrected Group Consistency

Flip deltas from the *opposite* category (e.g., flip `right` deltas by βˆ’1) so all deltas
point in the canonical direction (e.g., toward `left`). Then compute mean pairwise cosine
similarity across the entire group.

```
for each delta in group:
    if category == opposite_category:  d = -d
mean_pairwise_cosine(all_deltas)
```
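
A self-contained sketch of the sign correction (function name is illustrative):

```python
import numpy as np

def sign_corrected_consistency(deltas, categories, opposite_category):
    """Flip opposite-category deltas, then mean pairwise cosine over the group."""
    X = np.array([(-1.0 if c == opposite_category else 1.0) * np.asarray(d, float)
                  for d, c in zip(deltas, categories)])
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    iu = np.triu_indices(len(X), k=1)
    return float((X @ X.T)[iu].mean())

# A 'left' delta and an opposing 'right' delta become perfectly aligned after flipping
assert np.isclose(
    sign_corrected_consistency([[1.0, 0.0], [-1.0, 0.0]], ['left', 'right'], 'right'),
    1.0)
```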

### 3. Cross-Group Alignment (Vertical Γ— Distance)

Each **quad** provides two deltas from the same scene:
- `delta_vert`: hidden state difference for a vertical swap (above/under)
- `delta_dist`: hidden state difference for a distance swap (far/close)

If `cosine(delta_vert, delta_dist)` is consistently positive, the model may conflate
vertical position with depth (perspective bias hypothesis).

Significance is assessed with a **permutation test** (100 shuffles of distance deltas).
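
A sketch of such a permutation test (function name, add-one smoothing, and the two-sided comparison are assumptions; the script's exact null construction may differ):

```python
import numpy as np

def alignment_pvalue(delta_vert, delta_dist, n_perm=100, seed=0):
    """Observed mean per-quad cosine vs. a null from shuffling the distance deltas."""
    rng = np.random.default_rng(seed)
    V = delta_vert / np.linalg.norm(delta_vert, axis=1, keepdims=True)
    D = delta_dist / np.linalg.norm(delta_dist, axis=1, keepdims=True)
    observed = float((V * D).sum(axis=1).mean())       # mean cosine over paired quads
    null = [float((V * D[rng.permutation(len(D))]).sum(axis=1).mean())
            for _ in range(n_perm)]                    # break the per-scene pairing
    # add-one correction keeps the p-value away from exactly zero
    p = (sum(abs(x) >= abs(observed) for x in null) + 1) / (n_perm + 1)
    return observed, p

# Perfectly aligned deltas should look significant against the shuffled null
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 16))
obs, p = alignment_pvalue(X, X.copy())
assert np.isclose(obs, 1.0) and p < 0.05
```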

---

## Saved Outputs (per model, per scale)

```
results/{model}/
β”œβ”€β”€ csv/
β”‚   β”œβ”€β”€ delta_similarity_{scale}_L{n}_all_pairs.csv   # pairwise category similarity matrix
β”‚   └── summary.csv
β”œβ”€β”€ json/
β”‚   β”œβ”€β”€ pred_stats_{scale}.json                        # per-group accuracy (orig/swap/both)
β”‚   β”œβ”€β”€ category_validity_{scale}.json                 # per-category accuracy + reliable flag
β”‚   β”œβ”€β”€ sign_corrected_consistency_{scale}_{tag}.json  # {group_L{n}: {mean, std, n}}
β”‚   β”œβ”€β”€ within_cat_consistency_{scale}_{tag}.json      # {cat_L{n}: {mean, std, n}}
β”‚   └── cross_alignment_{scale}.json                   # {L{n}: {per_sample_mean, mean_delta_alignment, ...}}
β”œβ”€β”€ npz/
β”‚   β”œβ”€β”€ vectors_{scale}.npz                            # orig/swap/delta vectors + metadata (5 rep layers)
β”‚   └── cross_group_vectors_{scale}.npz               # delta_vert / delta_dist per quad
└── plots/
    β”œβ”€β”€ all/
    β”‚   β”œβ”€β”€ pca/                pca_{scale}_L{n}.png
    β”‚   β”œβ”€β”€ pca_3d/             pca_{scale}_L{n}.png   (from pca_3d.py)
    β”‚   └── ...
    β”œβ”€β”€ both_correct/
    β”œβ”€β”€ all_with_validity/
    └── accuracy/               (from accuracy_chart.py)
```

---

## Scales

| Scale key | Samples seen | Global steps (batch=64) |
|---|---|---|
| `vanilla` | 0 (base model) | β€” |
| `80k` | 80,000 | 1,250 |
| `400k` | 400,000 | 6,250 |
| `800k` | 800,000 | 12,500 |
| `2m` | 2,000,000 | 31,250 |
| `roborefer` | NVILA only β€” RoboRefer fine-tuned | β€” |
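
The step counts in the table follow directly from samples seen divided by the batch size:

```python
# Global steps = samples_seen / batch_size, with batch_size = 64
batch_size = 64
samples = {'80k': 80_000, '400k': 400_000, '800k': 800_000, '2m': 2_000_000}
steps = {k: v // batch_size for k, v in samples.items()}
assert steps == {'80k': 1_250, '400k': 6_250, '800k': 12_500, '2m': 31_250}
```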

---

## Key Scripts in This Directory

| Script | Purpose |
|---|---|
| `swap_analysis.py` | Main extraction + analysis + plotting |
| `unify_consistency_ylim.py` | Post-hoc: unify y-axis across scales for consistency plots |
| `pca_2d_recolor.py` | Post-hoc: overwrite 2D PCA plots with unified color scheme |
| `pca_3d.py` | Post-hoc: generate 3D PCA plots from existing NPZ files |
| `accuracy_chart.py` | Post-hoc: generate accuracy bar/trajectory plots from saved JSONs |
| `run_molmo.sh` / `run_nvila.sh` / `run_qwen.sh` | Shell wrappers for running all scales + merge |