
Swap Analysis: Minimal Pair Probing for Spatial Representations

This repository contains swap_analysis.py, a comprehensive pipeline for evaluating and visualizing how Vision-Language Models (VLMs) represent spatial relationships.

The script creates minimal pairs from spatial questions by swapping the target and reference objects, then measures how the model's hidden states and predictions change in response.

Example Minimal Pair:

  • Original: "Is A to the left or right of B?" → Expected: left
  • Swapped: "Is B to the left or right of A?" → Expected: right
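For illustration, one way to build such a pair is a placeholder-based string swap plus a label flip. This is a hypothetical sketch (the function name `make_minimal_pair` is not taken from the script):

```python
def make_minimal_pair(question: str, target: str, reference: str, answer: str):
    """Swap the target and reference objects in a spatial question and
    flip the expected left/right answer accordingly (illustrative sketch)."""
    # Route the swap through a placeholder so the two replacements
    # don't clobber each other when object names overlap.
    swapped = (question.replace(target, "\x00")
                       .replace(reference, target)
                       .replace("\x00", reference))
    flipped = {"left": "right", "right": "left"}.get(answer, answer)
    return swapped, flipped

q, a = make_minimal_pair("Is A to the left or right of B?", "A", "B", "left")
# q == "Is B to the left or right of A?", a == "right"
```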

🌟 Key Features & Analyses

The script runs inference, extracts hidden states across model layers, and performs the following analyses:

  1. Difference Vectors (Deltas): Computes $\Delta = \text{feature(swapped)} - \text{feature(original)}$.
  2. Within-Category Delta Consistency: Measures if all swaps of a specific category (e.g., left $\rightarrow$ right) point in the same direction in the latent space.
  3. Sign-Corrected Group Consistency: Aligns opposite categories by flipping their vectors to check global axis consistency.
  4. Cross-Group Delta Alignment: Compares orthogonal dimensions (e.g., $\Delta_{vertical}$ vs. $\Delta_{distance}$) to detect perspective bias.
  5. Similarity Heatmaps: Generates $6 \times 6$ cross-category cosine similarity matrices based on mean deltas.
  6. Prediction Statistics: Tracks and visualizes original, swapped, and "both-correct" accuracy trajectories across different data scales.
  7. PCA Visualizations: Plots 2D and 3D PCA projections of per-sample embeddings and delta vectors.
  8. Robust Filtering: Isolates analyses to "both-correct" samples to ensure representations are tied to successful spatial understanding.
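The first two analyses above can be sketched in a few lines of NumPy (an illustrative implementation, not the script's exact code; `delta_consistency` is a hypothetical name):

```python
import numpy as np

def delta_consistency(orig_feats: np.ndarray, swap_feats: np.ndarray) -> float:
    """Mean pairwise cosine similarity of delta vectors within one category.

    orig_feats, swap_feats: (n_samples, hidden_dim) hidden states for the
    original and swapped questions. A value near 1 means every swap moves
    the representation in roughly the same latent direction.
    """
    deltas = swap_feats - orig_feats  # Δ = feature(swapped) - feature(original)
    deltas = deltas / (np.linalg.norm(deltas, axis=1, keepdims=True) + 1e-8)
    sims = deltas @ deltas.T          # pairwise cosine similarities
    off_diag = sims[~np.eye(len(deltas), dtype=bool)]
    return float(off_diag.mean())
```

Sign-corrected group consistency (analysis 3) would apply the same metric after multiplying one category's deltas by -1 before pooling.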

🤖 Supported Models

The pipeline supports multiple model architectures, segmented into legacy base models, new large models, and merge-only configurations for cross-scale evaluations.

  • Legacy (Qwen2.5-VL-3B scale experiments):
    • molmo (Molmo-7B-O variants)
    • nvila (NVILA-Lite-2B variants, including roborefer and roborefer_depth)
    • nvila_synthetic (NVILA mixed-data variants)
    • qwen (Qwen2.5-VL-3B variants)
  • New Large Models:
    • molmo_big (Molmo2-8B)
    • qwen_big (Qwen3-VL-32B-Instruct)
    • qwen_super (Qwen3-VL-235B-A22B-Instruct)
    • big_trio (Molmo2-8B + RoboRefer + Qwen3-VL-32B)
  • Merge-Only (Requires --merge):
    • molmo_all (Combines molmo and molmo_big outputs)
    • qwen_all (Combines qwen and qwen_big outputs)
    • nvila_synth_compare (Compares NVILA baselines against synthetic-mix checkpoints)
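The grouping above implies a registry mapping each --model_type to its default scales and merge behavior. The sketch below is hypothetical: the names MODEL_GROUPS and resolve_scales, and the per-group scale strings, are illustrative and not taken from the script.

```python
# Hypothetical registry sketch; actual names in swap_analysis.py may differ.
MODEL_GROUPS = {
    "qwen":     {"scales": ["vanilla", "80k"], "merge_only": False},
    "nvila":    {"scales": ["roborefer", "roborefer_depth"], "merge_only": False},
    "qwen_all": {"combines": ["qwen", "qwen_big"], "merge_only": True},
}

def resolve_scales(model_type: str, requested=None):
    """Return requested scales, or the group's defaults if none given."""
    group = MODEL_GROUPS[model_type]
    if group.get("merge_only"):
        raise ValueError(f"{model_type} requires --merge")
    return requested or group["scales"]
```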

🚀 Usage

1. Standard Inference

Extract features and generate single-scale analyses. Outputs will be saved in {question_type}/saved_data/{model_type}_{scale}/.

# Evaluate standard legacy models
python swap_analysis.py --model_type qwen --scales vanilla 80k

# Evaluate specific modalities (e.g., RoboRefer with depth)
python swap_analysis.py --model_type nvila --scales roborefer_depth

2. Merge Mode (Cross-Scale Analysis)

Aggregate JSON/NPZ data from previously run individual scales to generate cross-scale trajectory plots and summaries.

# Combine qwen base scales with qwen_big (Qwen3-32B) results into a specific compare group
python swap_analysis.py --model_type qwen_all --merge --group-name qwen_scaling_trajectory

βš™οΈ Command Line Arguments

| Argument | Description | Default |
| --- | --- | --- |
| --model_type | (Required) The model architecture/family to run. | None |
| --data_path | Path to the EmbSpatial-Bench.tsv dataset. | /data/.../EmbSpatial-Bench.tsv |
| --scales | Specific scales to process (e.g., vanilla, 80k). If omitted, runs default scales for the chosen model. | Model-dependent |
| --question-type | short_answer (single word output) or mcq (A/B letter choice). Dictates root output folder. | short_answer |
| --output_dir | Root directory for saved data. | ./{question_type}/saved_data |
| --merge | Generates cross-scale comparison plots from saved data instead of running inference. | False |
| --group-name | Folder name under compare/ for merged cross-scale outputs. | Same as --model_type |
| --max-samples-per-category | Limit samples per spatial category for faster debugging/runs. | 200 |
| --no-filtering | Disables filtering of 'Unknown' reference objects in distance queries. | False |
| --no-auto-roborefer | Prevents automatic inclusion of the roborefer scale when running nvila. | False |
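An argparse setup mirroring these flags might look like the following (a sketch; the script's actual parser may differ in help text and defaults):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flag names mirror the table above; defaults here are illustrative.
    p = argparse.ArgumentParser(
        description="Minimal-pair swap analysis for VLM spatial representations")
    p.add_argument("--model_type", required=True)
    p.add_argument("--data_path", default=None, help="Path to EmbSpatial-Bench.tsv")
    p.add_argument("--scales", nargs="*", default=None)
    p.add_argument("--question-type", choices=["short_answer", "mcq"],
                   default="short_answer")
    p.add_argument("--output_dir", default=None)
    p.add_argument("--merge", action="store_true")
    p.add_argument("--group-name", default=None)
    p.add_argument("--max-samples-per-category", type=int, default=200)
    p.add_argument("--no-filtering", action="store_true")
    p.add_argument("--no-auto-roborefer", action="store_true")
    return p
```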

📂 Output Directory Structure

The script organizes outputs based on the question_type, isolating raw scale data from merged comparison views.

{question_type}/
├── logs/
│   ├── {model_type}_{scale}.log       # Per-scale inference logs
│   └── {group_name}.log               # Merge/Compare logs
├── saved_data/
│   └── {model_type}_{scale}/          # Individual scale outputs
│       ├── csv/                       # Delta heatmaps, predictions
│       ├── json/                      # Consistency metrics, alignment, validity
│       ├── npz/                       # Raw hidden states & deltas (vectors)
│       └── plots/                     # Single-scale PCA, bar charts, heatmaps
└── compare/
    └── {group_name}/                  # Cross-scale merged outputs (via --merge)
        ├── csv/                       # summary.csv across all scales
        └── plots/
            ├── accuracy/              # Trajectory and per-category accuracy
            ├── all/                   # Unfiltered cross-scale plots
            └── both_correct/          # Filtered (both-correct) cross-scale plots

🛠 Prerequisites

  • torch
  • transformers
  • pandas, numpy, scikit-learn
  • matplotlib, seaborn, tqdm
  • Pillow
  • Model-specific libraries: qwen_vl_utils, llava, olmo (depending on the models being tested).