---
license: cc-by-4.0
task_categories:
- zero-shot-image-classification
- image-classification
- visual-question-answering
tags:
- vision-language-models
- cue-conflict
- color-hybrid-illusions
- factorized-diffusion
- multimodal-evaluation
- perceptual-bias
- optical-illusions
- diffusion-models
size_categories:
- n<1K
---
# Color Hybrid Illusions Dataset

A benchmark dataset of **177 image pairs** for studying how vision-language models (VLMs) resolve conflicting visual cues. Each image depicts one entity in **color** and a different entity in **grayscale**, created using [Factorized Diffusion](https://arxiv.org/abs/2407.11900).

## Overview

When you view a color hybrid image in full color, you see one object (e.g., a bird). When you convert it to grayscale, a different object emerges (e.g., a flower). This dataset uses that conflict to test whether VLMs rely more on chromatic (color) or luminance (grayscale/shape) cues for object recognition.

**Key finding:** Across 11 VLMs and 3,894 predictions, most models exhibit **grayscale bias** (average grayscale accuracy 0.681 vs. color accuracy 0.554), suggesting VLMs generally privilege shape and luminance structure over color information.
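
The grayscale reading can be checked directly by dropping the chrominance channels. A minimal sketch with Pillow (the file name is only an example; any color view from `images/` works):

```python
from PIL import Image

# Color view of a pair: seen in full color, it reads as the color entity (e.g., a bird).
color_view = Image.open("images/0098c.png")  # illustrative file name

# Converting to "L" mode keeps only luminance, so the grayscale entity
# (e.g., a flower) should emerge from the same pixels.
luminance_only = color_view.convert("L")
luminance_only.show()
```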
## How the Dataset Was Generated

Images were generated using **Factorized Diffusion** ([Geng et al., ECCV 2024](https://arxiv.org/abs/2407.11900)), which decomposes a diffusion model's denoising process into separate linear components — in this case, grayscale (luminance) and color (chrominance) channels. Each component is conditioned on a different text prompt during sampling, producing a single image that depicts one object in color and a different object in grayscale structure.

The underlying diffusion model is **DeepFloyd IF**, a pixel-based cascaded diffusion pipeline that generates 1024×1024 images. Text prompts are encoded with a **T5 text encoder** and guide the denoising process across both views.
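
For reference, the base model can be loaded from the Hub with the `diffusers` library. This is only an illustrative loading snippet for stage I of the cascade, not the generation code used for this dataset; producing the hybrids additionally requires the factorized sampling step sketched after the pipeline list below.

```python
import torch
from diffusers import DiffusionPipeline

# Stage I of DeepFloyd IF (64x64 base; stages II/III upscale toward 1024x1024).
# Downloading requires accepting the model license on the Hugging Face Hub.
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
stage_1.enable_model_cpu_offload()

# Prompts are encoded with the pipeline's T5 text encoder.
prompt_embeds, negative_embeds = stage_1.encode_prompt("a vivid poster of a finch")
```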
**Pipeline:**

1. **Prompt pairing** — Each image pair is generated from two prompts: one describing a grayscale object (e.g., *"a shaded sketch of a lily"*) and one describing a color object (e.g., *"a vivid poster of a finch"*).
2. **Factorized sampling** — The diffusion model denoises both the grayscale and color components simultaneously, each conditioned on its respective prompt (see the sketch after this list).
3. **Human auditing** — From an initial pool of 2,400 generated pairs, each image was manually reviewed and assigned a quality tier. Only pairs that successfully produced a visible illusion were retained, resulting in the final set of **177 pairs**.
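
The factorized sampling step can be sketched as follows. This is a simplified illustration of the idea rather than the exact generation code: the function and variable names (`denoiser`, `factorized_noise_estimate`, the prompt-embedding arguments) are hypothetical, and in practice DeepFloyd IF's cascaded pipeline was used.

```python
import torch

def factorized_noise_estimate(denoiser, x_t, t, gray_prompt_emb, color_prompt_emb):
    """Combine two conditional noise estimates into one composite estimate.

    `denoiser` stands in for the diffusion model: (x_t, t, prompt_emb) -> predicted
    noise with the same shape as x_t, assumed to be (B, 3, H, W) pixel space.
    """
    eps_gray = denoiser(x_t, t, gray_prompt_emb)    # estimate conditioned on the grayscale prompt
    eps_color = denoiser(x_t, t, color_prompt_emb)  # estimate conditioned on the color prompt

    # Luminance component: per-pixel mean over RGB channels of the grayscale-prompt estimate.
    luminance = eps_gray.mean(dim=1, keepdim=True).expand_as(eps_gray)
    # Chrominance component: the color-prompt estimate with its own luminance removed,
    # i.e. only the residual color information.
    chrominance = eps_color - eps_color.mean(dim=1, keepdim=True).expand_as(eps_color)

    # The composite estimate drives a single denoising trajectory, so the final image
    # carries the grayscale prompt in its luminance and the color prompt in its chrominance.
    return luminance + chrominance
```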
## Dataset Structure

- **`dataset.json`** — Metadata for all 177 pairs, including prompts, object labels, and quality tiers.
- **`images/`** — 354 PNG images (one color `c` + one grayscale `g` per pair).

### Naming Convention

Images are named `{number}c.png` (color view) and `{number}g.png` (grayscale view), zero-padded to 4 digits. For example, pair #98 → `0098c.png` and `0098g.png`.
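
A minimal loading sketch, assuming `dataset.json` is a JSON array with one record per pair (the exact top-level layout of the file is an assumption; the field names follow the table in the next subsection):

```python
import json
from pathlib import Path

from PIL import Image

root = Path(".")  # path to a local checkout of this dataset repository

# Assumption: dataset.json is a list of per-pair records, each with a `number` field.
records = json.loads((root / "dataset.json").read_text())

for rec in records:
    pair_id = int(rec["number"])
    color_view = Image.open(root / "images" / f"{pair_id:04d}c.png")  # color view
    gray_view = Image.open(root / "images" / f"{pair_id:04d}g.png")   # grayscale view
    # rec["color_object"] and rec["grey_object"] hold the per-view ground-truth labels.
```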
### Metadata Fields

| Field | Description |
|---|---|
| `number` | Image pair ID |
| `greyscale` | Prompt used for the grayscale component |
| `color` | Prompt used for the color component |
| `quality` | Human-rated quality tier: **L** (low), **M** (medium), **H** (high) |
| `grey_object` | Ground-truth object label for the grayscale view |
| `color_object` | Ground-truth object label for the color view |
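
For reference, a record of this shape might look as follows. The values are illustrative only: the prompt strings and quality tier are hypothetical, and only the field names and the flower/bird labels appear elsewhere on this card.

```python
# Illustrative record shape; every field value below is a hypothetical example.
example_record = {
    "number": 98,
    "greyscale": "a shaded sketch of a flower",
    "color": "a vivid poster of a bird",
    "quality": "H",
    "grey_object": "flower",
    "color_object": "bird",
}
```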
### Quality Tiers

Quality tiers assess how well each generated illusion decouples luminance structure from chromatic information:

- **H (High):** Clear, drastic difference between entities across views — the illusion is immediately apparent
- **M (Medium):** Moderate distinction between entities
- **L (Low):** Less distinction; both views may partially resemble each other
## Example

| Grayscale View → "flower" | Color View → "bird" |
|:---:|:---:|
| `0098g.png` | `0098c.png` |
## Benchmark Results

### Per-Model Performance (Forced-Choice)

| Model | Overall Acc. | Gray Acc. | Color Acc. | Δ (Gray − Color) | Bias |
|---|---|---|---|---|---|
| ALIGN | 0.701 | 0.785 | 0.616 | +0.169 | Gray |
| SigLIP | 0.684 | 0.746 | 0.621 | +0.124 | Gray |
| LLaVA-1.6 | 0.667 | 0.802 | 0.531 | +0.271 | Gray |
| SmolVLM | 0.655 | 0.729 | 0.582 | +0.147 | Gray |
| Qwen2-VL | 0.653 | 0.695 | 0.610 | +0.085 | Gray |
| GPT-4o-mini | 0.644 | 0.689 | 0.599 | +0.090 | Gray |
| LLaVA-1.5 | 0.633 | 0.757 | 0.508 | +0.249 | Gray |
| CLIP | 0.630 | 0.802 | 0.458 | +0.345 | Gray |
| GPT-5.5 | 0.540 | 0.497 | 0.584 | −0.087 | Color |
| BLIP-2 | 0.500 | 0.435 | 0.565 | −0.130 | Color |
| Moondream2 | 0.483 | 0.548 | 0.418 | +0.130 | Gray |
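
A minimal sketch of how a forced-choice evaluation could be run with a contrastive model such as CLIP, using Hugging Face `transformers`. The prompt template and the exact protocol behind the figures above are not documented on this card, so treat this as an illustrative assumption rather than the paper's evaluation code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def forced_choice(image_path: str, grey_object: str, color_object: str) -> str:
    """Ask CLIP to pick between the two ground-truth labels for one view."""
    image = Image.open(image_path)
    # Prompt template is an assumption; the card does not specify one.
    candidates = [f"a photo of a {grey_object}", f"a photo of a {color_object}"]
    inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, 2)
    return [grey_object, color_object][logits.argmax(dim=-1).item()]

# Illustrative pair: the grayscale view is answered correctly if the model returns "flower".
print(forced_choice("images/0098g.png", grey_object="flower", color_object="bird"))
```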
### Architecture Families

| Family | Models | Avg. Accuracy |
|---|---|---|
| Contrastive | CLIP, ALIGN, SigLIP | 0.671 |
| Generative (Q-Former) | BLIP-2 | 0.500 |
| Instruction-tuned LLM | LLaVA-1.5, LLaVA-1.6, Qwen2-VL | 0.651 |
| Compact VLM | SmolVLM, Moondream2 | 0.569 |
| Proprietary API | GPT-4o-mini, GPT-5.5 | 0.592 |
## Intended Use

This dataset is intended for:

- **Evaluating VLM cue arbitration** — testing whether models rely on shape/luminance or color when the two conflict
- **Benchmarking multimodal robustness** — assessing model performance on perceptually ambiguous inputs
- **Studying representation bias** — understanding how training objectives (contrastive, generative, instruction-tuned) influence visual feature weighting
## Citation

```bibtex
@misc{li2026entityrecognition,
  title={Entity Recognition with Vision Language Models on Diffusion-Based Color Hybrid Illusions},
  author={Bill Li and Paul Junver Soriano and Rahul Koonantavida},
  year={2026},
  institution={San Jos\'{e} State University}
}
```
## Links

- **Project Website:** [hybrid-color-images.vercel.app](https://hybrid-color-images.vercel.app/)
- **Factorized Diffusion Paper:** [Geng et al., ECCV 2024](https://arxiv.org/abs/2407.11900)
- **Visual Anagrams Paper:** [Geng et al., CVPR 2024](https://arxiv.org/abs/2311.17919)