# swap_analysis.py – Codebase Overview

## Purpose

Probes spatial understanding in VLMs (Molmo-7B, NVILA-Lite-2B, Qwen2.5-VL-3B) by analyzing
hidden representations extracted via forward hooks. The core idea is **swap pair analysis**:
two queries about the same image differ only in subject/reference order (e.g., "is A left of B?"
vs. "is B left of A?"), so their hidden-state difference (delta) should encode a consistent
spatial direction if the model truly understands the concept.
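
The swap-pair idea can be sketched with toy vectors (a minimal sketch with made-up 4-dim values; the real deltas are per-layer last-token activations captured by the hooks described below):

```python
import numpy as np

# Toy stand-ins for last-token hidden states of the two swapped queries
# (numbers are invented for illustration).
hs_orig = np.array([0.2, -1.0, 0.5, 0.0])   # "is A left of B?"
hs_swap = np.array([1.2, -0.4, 0.1, 0.3])   # "is B left of A?"
delta = hs_swap - hs_orig                    # the per-pair "delta"

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A second pair from another image should yield a similar direction
# if the model encodes "left vs. right" consistently.
delta_2 = np.array([0.9, 0.7, -0.5, 0.2])
consistency = cosine(delta, delta_2)
```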

---

## High-Level Flow

```
swap_analysis.py (main)
 │
 ├── Load swap pairs from TSV ───► paired questions + answers + images (base64)
 ├── Build HF bbox cache ───► enables cross-group quads (vertical × distance)
 │
 ├── For each scale (vanilla / 80k / 400k / 800k / 2m / roborefer):
 │      └── process_scale()
 │            ├── Phase A: extract_swap_features() → run each pair through model × 2
 │            ├── Phase B: extract_cross_group_features() → run each quad × 4
 │            ├── Phase C: analysis (consistency, alignment, pred_stats)
 │            ├── Phase D: save results to csv/ json/ npz/
 │            └── Phase E: per-scale plots
 │
 └── --merge flag: run_merge()
        ├── Load per-scale JSONs / CSVs
        ├── Cross-scale consistency / alignment plots
        └── Summary CSV + ablation plot
```

---

## Model Extractors

All three extractors inherit from `BaseHiddenStateExtractor`.

### Hook Mechanism (shared by all models)

```python
# _make_hook() – registered on each transformer layer module
def hook_fn(module, input, output):
    hidden = output[0] if isinstance(output, tuple) else output
    if hidden.shape[1] > 1:  # prefill pass only (not generation)
        last_token = hidden[:, -1, :].detach().cpu().float()
        self.hidden_states[layer_idx] = last_token.squeeze(0)
```

**Key points:**
- `shape[1] > 1` → skips single-token decoding steps; captures only the prefill pass
- `hidden[:, -1, :]` → takes the **last token** of the input sequence
- Result is a 1-D float32 CPU tensor stored in `self.hidden_states[layer_idx]`
- Hooks are registered once at init; `self.hidden_states` is reset before each forward call
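
A self-contained sketch of the same mechanism on a toy two-layer stack (layer types and sizes are illustrative, not the real model's):

```python
import torch
from torch import nn

hidden_states = {}

def make_hook(layer_idx):
    # Same logic as _make_hook(): keep the last token of the prefill pass.
    def hook_fn(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        if hidden.shape[1] > 1:  # prefill only; skip 1-token decode steps
            last_token = hidden[:, -1, :].detach().cpu().float()
            hidden_states[layer_idx] = last_token.squeeze(0)
    return hook_fn

layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(2)])
for i, layer in enumerate(layers):
    layer.register_forward_hook(make_hook(i))

x = torch.randn(1, 5, 8)  # (batch, seq_len, hidden): a fake "prefill"
for layer in layers:
    x = layer(x)
# hidden_states now maps layer index -> 8-dim last-token vector
```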

---

### MolmoExtractor (`allenai/Molmo-7B-O-0924`)

| Item | Detail |
|---|---|
| **Format** | Native OLMo (config.yaml + model.pt) **or** HuggingFace |
| **Layer module path** | `model.transformer.blocks[i]` (native) / `model.model.transformer.blocks[i]` (HF) |
| **Number of layers** | 32 |
| **Tokenizer** | Loaded from `cfg.get_tokenizer()` (native) or `AutoTokenizer` |
| **Precision** | bfloat16 |

**Inference:**
```python
inputs = self.processor.process(images=[image], text=question)
output = self.model.generate_from_batch(inputs, max_new_tokens=20, ...)
# hooks fire during the prefill of generate_from_batch
```

---

### NVILAExtractor (`Efficient-Large-Model/NVILA-Lite-2B`)

| Item | Detail |
|---|---|
| **Library** | LLaVA-style (`llava` package) – `load_pretrained_model()` |
| **LLM backbone** | **Gemma-2-2B** → **28 layers** (L0–L27), not 24 |
| **Layer module path** | Dynamically discovered; searches `model.llm.model.layers`, `model.llm.layers`, `model.model.model.layers`, `model.model.layers` in order. Falls back to scanning all `named_modules()` for a `.layers` list. Stored as `self.llm_backbone`. |
| **Layer access** | `self.llm_backbone[i]` |
| **Tokenizer / processor** | `AutoTokenizer`, `AutoProcessor` loaded from model path |
| **Precision** | bfloat16 |

**Why 28 layers?** NVILA-Lite-2B uses Gemma-2-2B as its language backbone, which has 28
transformer blocks. The `24` that appears as a fallback default in the code is never actually
used because `_find_llm_backbone()` always succeeds.
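
The discovery logic can be approximated as follows (a hedged sketch, not the real `_find_llm_backbone()`; the candidate paths come from the table above, the toy model at the end is hypothetical):

```python
from torch import nn

CANDIDATE_PATHS = ("llm.model.layers", "llm.layers",
                   "model.model.layers", "model.layers")

def find_llm_backbone(model):
    """Return the ModuleList of transformer blocks (sketch)."""
    for path in CANDIDATE_PATHS:
        obj = model
        try:
            for attr in path.split("."):
                obj = getattr(obj, attr)
        except AttributeError:
            continue
        if isinstance(obj, nn.ModuleList):
            return obj
    # Fallback: scan all named modules for a ".layers" ModuleList
    for name, module in model.named_modules():
        if name.endswith("layers") and isinstance(module, nn.ModuleList):
            return module
    raise RuntimeError("no transformer layer list found")

# Hypothetical toy model exercising the llm.model.layers layout
toy = nn.Module()
toy.llm = nn.Module()
toy.llm.model = nn.Module()
toy.llm.model.layers = nn.ModuleList([nn.Linear(4, 4) for _ in range(3)])
```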

**Inference:**
```python
input_ids, images, image_sizes = prepare_inputs(tokenizer, processor, image, question)
output = model.generate(input_ids, images=images, ...)
```

---

### Qwen25VLExtractor (`Qwen/Qwen2.5-VL-3B-Instruct`)

| Item | Detail |
|---|---|
| **Library** | `transformers.Qwen2_5_VLForConditionalGeneration` |
| **Layer module path** | `model.model.layers[i]` |
| **Number of layers** | 36 |
| **Processor** | `AutoProcessor` (loaded from base model path for fine-tuned checkpoints) |
| **Precision** | bfloat16 |

**Inference:**
```python
messages = [{"role": "user", "content": [{"type": "image", ...}, {"type": "text", ...}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# image_inputs comes from qwen_vl_utils.process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
```

---

## Swap Pair Concept

### Input Data

Loaded from a TSV file. Each row contains:
- `image_base64` – scene image
- `original_question` / `swapped_question` – minimal pair differing in subject/reference order
- `original_answer` / `swapped_answer` – expected single-word answers (e.g., `left` / `right`)
- `category` – one of `left right above under far close`
- `group` – `horizontal` / `vertical` / `distance`
- `index`, `question_id` – identifiers for matching to the HuggingFace dataset cache

### swap_record Structure
```python
{
    'index': int,
    'group': str,                        # 'horizontal' | 'vertical' | 'distance'
    'category': str,                     # 'left' | 'right' | 'above' | 'under' | 'far' | 'close'
    'pred_orig': str,                    # model output for original question
    'pred_swap': str,                    # model output for swapped question
    'is_correct_orig': bool,
    'is_correct_swap': bool,
    'hs_orig': {layer_idx: np.ndarray},  # hidden state vectors
    'hs_swap': {layer_idx: np.ndarray},
    'delta': {layer_idx: np.ndarray},    # hs_swap[L] - hs_orig[L]
}
```
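
The `delta` field is just a per-layer difference, as in this minimal sketch (layer count and dimensions are made up):

```python
import numpy as np

# Toy per-layer hidden states: 2 layers, 4 dims (invented values)
hs_orig = {0: np.zeros(4), 1: np.ones(4)}
hs_swap = {0: np.ones(4), 1: 3.0 * np.ones(4)}

delta = {L: hs_swap[L] - hs_orig[L] for L in hs_orig}
```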

### Answer Checking

```python
def check_answer(text, expected):
    # Find earliest position of expected word (+ synonyms) and opposite word
    pos_exp = find_earliest_position(text, expected)
    pos_opp = find_earliest_position(text, opposite)  # opposite of `expected`
    return pos_exp != -1 and (pos_opp == -1 or pos_exp < pos_opp)
```

Synonyms handled: `under → [below, beneath]`, `close → [near, nearby]`, `far → [distant]`
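
A runnable approximation of this check, including the synonym table (a sketch under assumed details; the real implementation may normalize text differently):

```python
OPPOSITE = {"left": "right", "right": "left", "above": "under",
            "under": "above", "far": "close", "close": "far"}
SYNONYMS = {"under": ["below", "beneath"], "close": ["near", "nearby"],
            "far": ["distant"]}

def find_earliest_position(text, word):
    # Earliest index of the word or any of its synonyms; -1 if absent
    positions = [text.find(w) for w in [word] + SYNONYMS.get(word, [])]
    positions = [p for p in positions if p != -1]
    return min(positions) if positions else -1

def check_answer(generated_text, expected_category):
    text = generated_text.lower()
    pos_exp = find_earliest_position(text, expected_category)
    pos_opp = find_earliest_position(text, OPPOSITE[expected_category])
    return pos_exp != -1 and (pos_opp == -1 or pos_exp < pos_opp)

assert check_answer("It is below the table.", "under")  # synonym of "under"
```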

---

## Analysis Metrics

### 1. Within-Category Consistency

For each category and layer, compute the mean pairwise cosine similarity among all delta vectors
of that category. High similarity means the model encodes the concept consistently.

```
similarity_matrix = cosine_similarity(delta_vectors)   # shape: (n, n)
mean = upper_triangle(similarity_matrix).mean()
```
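
A concrete version of this metric (minimal numpy sketch; `deltas` is assumed to stack the delta vectors row-wise):

```python
import numpy as np

def mean_pairwise_cosine(deltas):
    # deltas: (n, d) array of delta vectors for one category/layer
    normed = deltas / np.linalg.norm(deltas, axis=1, keepdims=True)
    sim = normed @ normed.T                   # (n, n) cosine matrix
    iu = np.triu_indices(len(deltas), k=1)    # upper triangle, no diagonal
    return float(sim[iu].mean())

# Three nearly aligned toy deltas -> score close to 1
deltas = np.array([[1.0, 0.0], [1.0, 0.1], [0.9, 0.0]])
score = mean_pairwise_cosine(deltas)
```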

### 2. Sign-Corrected Group Consistency

Flip deltas from the *opposite* category (e.g., multiply `right` deltas by −1) so all deltas
point in the canonical direction (e.g., toward `left`). Then compute the mean pairwise cosine
similarity across the entire group.

```
for each delta in group:
    if category == opposite_category: d = -d
mean_pairwise_cosine(all_deltas)
```
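
The sign correction can be sketched like this (toy 2-D deltas with invented values):

```python
import numpy as np

# Two "left" pairs and one "right" pair
deltas = np.array([[1.0, 0.2], [0.9, 0.0], [-1.0, -0.1]])
categories = ["left", "left", "right"]

# Flip the opposite category so everything points toward "left"
signs = np.array([1.0 if c == "left" else -1.0 for c in categories])
corrected = deltas * signs[:, None]

# Group consistency = mean pairwise cosine over the corrected deltas
normed = corrected / np.linalg.norm(corrected, axis=1, keepdims=True)
sim = normed @ normed.T
score = float(sim[np.triu_indices(len(corrected), k=1)].mean())
```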

### 3. Cross-Group Alignment (Vertical × Distance)

Each **quad** provides two deltas from the same scene:
- `delta_vert`: hidden state difference for a vertical swap (above/under)
- `delta_dist`: hidden state difference for a distance swap (far/close)

If `cosine(delta_vert, delta_dist)` is consistently positive, the model may conflate
vertical position with depth (perspective bias hypothesis).

Significance is assessed with a **permutation test** (100 shuffles of distance deltas).
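
The permutation test can be sketched as follows (assumed details: the observed statistic is the mean per-quad cosine, and each shuffle breaks the scene pairing; the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_rows(a, b):
    # Row-wise cosine between paired (n, d) arrays
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

# Synthetic paired deltas: made correlated on purpose
delta_vert = rng.normal(size=(40, 8))
delta_dist = delta_vert + 0.5 * rng.normal(size=(40, 8))

observed = cosine_rows(delta_vert, delta_dist).mean()

# Null distribution: shuffle which distance delta goes with which scene
null = np.array([
    cosine_rows(delta_vert, delta_dist[rng.permutation(len(delta_dist))]).mean()
    for _ in range(100)
])
p_value = (null >= observed).mean()
```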

---

## Saved Outputs (per model, per scale)

```
results/{model}/
 ├── csv/
 │    ├── delta_similarity_{scale}_L{n}_all_pairs.csv   # pairwise category similarity matrix
 │    └── summary.csv
 ├── json/
 │    ├── pred_stats_{scale}.json                        # per-group accuracy (orig/swap/both)
 │    ├── category_validity_{scale}.json                 # per-category accuracy + reliable flag
 │    ├── sign_corrected_consistency_{scale}_{tag}.json  # {group_L{n}: {mean, std, n}}
 │    ├── within_cat_consistency_{scale}_{tag}.json      # {cat_L{n}: {mean, std, n}}
 │    └── cross_alignment_{scale}.json                   # {L{n}: {per_sample_mean, mean_delta_alignment, ...}}
 ├── npz/
 │    ├── vectors_{scale}.npz              # orig/swap/delta vectors + metadata (5 rep layers)
 │    └── cross_group_vectors_{scale}.npz  # delta_vert / delta_dist per quad
 └── plots/
      ├── all/
      │    ├── pca/     pca_{scale}_L{n}.png
      │    ├── pca_3d/  pca_{scale}_L{n}.png  (from pca_3d.py)
      │    └── ...
      ├── both_correct/
      ├── all_with_validity/
      └── accuracy/  (from accuracy_chart.py)
```
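
The NPZ files can be read back with plain `numpy`; a self-contained round-trip sketch (array names and shapes here are assumptions based on the description above, not the exact saved schema):

```python
import os
import tempfile

import numpy as np

# Fake payload mirroring the described layout (values are made up)
delta = np.random.randn(10, 5, 64)        # (pairs, rep_layers, hidden)
layers = np.array([0, 8, 16, 24, 31])     # hypothetical representative layers

path = os.path.join(tempfile.mkdtemp(), "vectors_80k.npz")
np.savez_compressed(path, delta=delta, layers=layers)

data = np.load(path)
```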

---

## Scales

| Scale key | Samples seen | Global steps (batch=64) |
|---|---|---|
| `vanilla` | 0 (base model) | – |
| `80k` | 80,000 | 1,250 |
| `400k` | 400,000 | 6,250 |
| `800k` | 800,000 | 12,500 |
| `2m` | 2,000,000 | 31,250 |
| `roborefer` | NVILA only – RoboRefer fine-tuned | – |
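
The step counts follow directly from samples seen divided by batch size:

```python
# Global steps = samples_seen / batch_size (batch = 64)
for samples, steps in [(80_000, 1_250), (400_000, 6_250),
                       (800_000, 12_500), (2_000_000, 31_250)]:
    assert samples // 64 == steps
```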

---

## Key Scripts in This Directory

| Script | Purpose |
|---|---|
| `swap_analysis.py` | Main extraction + analysis + plotting |
| `unify_consistency_ylim.py` | Post-hoc: unify y-axis across scales for consistency plots |
| `pca_2d_recolor.py` | Post-hoc: overwrite 2D PCA plots with unified color scheme |
| `pca_3d.py` | Post-hoc: generate 3D PCA plots from existing NPZ files |
| `accuracy_chart.py` | Post-hoc: generate accuracy bar/trajectory plots from saved JSONs |
| `run_molmo.sh` / `run_nvila.sh` / `run_qwen.sh` | Shell wrappers for running all scales + merge |