# Swap Analysis: Minimal Pair Probing for Spatial Representations

This repository contains `swap_analysis.py`, a comprehensive pipeline for evaluating and visualizing how Vision-Language Models (VLMs) represent spatial relationships.

The script works by creating **minimal pairs** from spatial questions: it takes an original query, swaps the target and reference objects, and measures how the model's hidden states and predictions change in response.

**Example Minimal Pair:**

* *Original:* "Is A to the left or right of B?" ➔ Expected: `left`
* *Swapped:* "Is B to the left or right of A?" ➔ Expected: `right`

## 🌟 Key Features & Analyses

The script runs inference, extracts hidden states across model layers, and performs the following analyses:

1. **Difference Vectors (Deltas):** Computes $\Delta = \text{feature(swapped)} - \text{feature(original)}$.
2. **Within-Category Delta Consistency:** Measures whether all swaps of a given category (e.g., left $\rightarrow$ right) point in the same direction in latent space.
3. **Sign-Corrected Group Consistency:** Aligns opposite categories by flipping their vectors to check global axis consistency.
4. **Cross-Group Delta Alignment:** Compares orthogonal dimensions (e.g., $\Delta_{\text{vertical}}$ vs. $\Delta_{\text{distance}}$) to detect perspective bias.
5. **Similarity Heatmaps:** Generates $6 \times 6$ cross-category cosine similarity matrices based on mean deltas.
6. **Prediction Statistics:** Tracks and visualizes original, swapped, and "both-correct" accuracy trajectories across different data scales.
7. **PCA Visualizations:** Plots 2D and 3D PCA projections of per-sample embeddings and delta vectors.
8. **Robust Filtering:** Restricts analyses to "both-correct" samples so that the examined representations are tied to successful spatial understanding.
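The first three analyses can be sketched in a few lines of NumPy. This is a minimal illustration, assuming per-sample feature vectors have already been extracted; the function names and shapes here are illustrative, not the script's actual API.

```python
import numpy as np

def compute_deltas(orig_feats, swap_feats):
    """Analysis 1: difference vectors, one per minimal pair."""
    return np.asarray(swap_feats, dtype=float) - np.asarray(orig_feats, dtype=float)

def within_category_consistency(category_deltas):
    """Analysis 2: mean cosine similarity of each delta to the category's
    mean direction. Values near 1 mean every swap moves the hidden state
    the same way in latent space."""
    d = np.asarray(category_deltas, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    mean_dir = d.mean(axis=0)
    mean_dir = mean_dir / np.linalg.norm(mean_dir)
    return float((d @ mean_dir).mean())

def sign_corrected_consistency(deltas_a, deltas_b):
    """Analysis 3: flip the opposite category's deltas (e.g. right->left
    vs. left->right) so both groups point the same way, then pool them
    and measure consistency along the shared axis."""
    pooled = np.concatenate([np.asarray(deltas_a, dtype=float),
                             -np.asarray(deltas_b, dtype=float)])
    return within_category_consistency(pooled)
```

If two opposite categories encode the same spatial axis with opposite signs, the sign-corrected score stays near 1 even though naively pooling their raw deltas would cancel out.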
## 🤖 Supported Models

The script natively supports hidden-state extraction for multiple model architectures, segmented into legacy base models, new large models, and merge-only configurations (for cross-scale plotting).

* **Legacy (Qwen2.5-VL-3B scale experiments):**
  * `molmo` (Molmo-7B-O variants)
  * `nvila` (NVILA-Lite-2B variants)
  * `nvila_synthetic` (NVILA mixed-data variants)
  * `qwen` (Qwen2.5-VL-3B variants)
* **New Large Models:**
  * `molmo_big` (Molmo2-8B)
  * `qwen_big` (Qwen3-VL-32B-Instruct)
  * `qwen_super` (Qwen3-VL-235B-A22B-Instruct)
  * `big_trio` (Molmo2-8B + RoboRefer + Qwen3-VL-32B)
* **Merge-Only (requires `--merge`):**
  * `molmo_all` (combines `molmo` and `molmo_big` outputs)
  * `qwen_all` (combines `qwen` and `qwen_big` outputs)
  * `nvila_synth_compare` (compares NVILA baselines against synthetic-mix checkpoints)

## 🚀 Usage

### 1. Standard Inference

Extract features and generate single-scale analyses for a specific model family.

```bash
# Legacy model evaluation
python swap_analysis.py --model_type qwen

# New large model evaluation
python swap_analysis.py --model_type qwen_big
```

### 2. Merge Mode (Cross-Scale Analysis)

Once you have run inference on the individual scales or models, use the `--merge` flag to aggregate the saved JSON/NPZ data and generate cross-scale trajectory plots.

```bash
# Combine qwen base scales with qwen_big (Qwen3-32B) results
python swap_analysis.py --model_type qwen_all --merge
```

### Command Line Arguments

| Argument | Description | Default |
| --- | --- | --- |
| `--model_type` | **(Required)** The model architecture/family to run. | None |
| `--data_path` | Path to the `EmbSpatial-Bench.tsv` dataset. | `/data/.../EmbSpatial-Bench.tsv` |
| `--scales` | Specific scales to process (e.g., `vanilla`, `80k`). If omitted, runs all default scales for the model. | *Model-dependent* |
| `--output_dir` | Base directory for saving CSVs, JSONs, NPZs, and plots. | `/data/.../results` |
| `--merge` | Generates cross-scale/cross-model comparison plots from saved data instead of running inference. | `False` |
| `--question-type` | `mcq` for A/B letter answers or `short` for single-word generation. | `mcq` |
| `--max-samples-per-category` | Limit on samples per spatial category for faster debugging runs. | `200` |
| `--no-filtering` | Disables filtering of 'Unknown' reference objects in distance queries. | `False` |

## 📂 Output Structure

Results are saved in your specified `--output_dir` under a subfolder named after the `--model_type`.

```text
results/{model_type}/
├── csv/     # Delta heatmaps, prediction rows, and cross-scale summaries
├── json/    # Consistency metrics, alignments, and validity checks per scale
├── npz/     # Raw hidden states and delta vectors for offline analysis
└── plots/   # Visualizations
    ├── all/            # Unfiltered analysis plots (PCA, bar charts, heatmaps)
    ├── both_correct/   # Strict analysis plots (only pairs where the model got both right)
    └── accuracy/       # Grouped and per-category accuracy bar/line charts
```

## 🛠 Prerequisites

* `torch`
* `transformers`
* `pandas`, `numpy`, `scikit-learn`
* `matplotlib`, `seaborn`, `tqdm`
* `Pillow`
* *Model-specific libraries:* `qwen_vl_utils`, `llava`, `olmo` (depending on the models being tested).