# Counting Regions Dataset Generation

## Goal
Create tasks where the model counts separated regions in an image. Each region inside the central square is filled with a distinctive gradient-plus-texture combination, and adjacent regions are guaranteed to look visually different, so a human can tell them apart.
The image always contains a square in the central area. Inside the square the canvas is partitioned into irregular regions; outside the square stays plain.
## Core Task
Given an image containing one central square, answer:
How many separated regions are inside the square?
The answer is a positive integer.
## Visual Structure
- Single square placed near the centre of the canvas with a thin dark border. Everything outside is a plain off-white background.
- Inside the square the area is partitioned into irregular regions.
- Each region is rendered with two cues at once:
  - A linear two-colour gradient. The two endpoint colours are drawn from a fixed palette, but each region perturbs them in HSV space so no two regions render identical colours globally; colour histograms therefore cannot recover the count.
  - A texture pattern (speckle, horizontal / vertical / diagonal stripes, dots, or Perlin-like band) modulated multiplicatively on top of the gradient. Adjacent regions are forced to use different texture styles where possible.
- Adjacency constraints: for every pair of touching regions, their unordered colour pairs differ AND there is at least ~55° of hue separation at the closest pairing. This guarantees a human-readable contrast across every shared boundary.
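The closest-pairing hue check can be written as a circular distance on the hue wheel. A minimal sketch (the helper names `hue_gap` / `pair_hue_gap` are illustrative, matching the `pair_hue_gap` notion used later in the assignment step):

```python
def hue_gap(h1: float, h2: float) -> float:
    """Circular distance between two hues, in degrees on a 0-360 wheel."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def pair_hue_gap(pair_a, pair_b) -> float:
    """Smallest hue gap over the four cross pairings of two colour pairs.

    Two adjacent regions pass the constraint only if this value is at
    least the ~55 degree threshold."""
    return min(hue_gap(a, b) for a in pair_a for b in pair_b)
```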
- A low-amplitude global speckle is added across the whole canvas so Canny / Sobel edge detectors fire uniformly inside regions and not only at boundaries — this defeats the simple "boundary-detect → flood-fill" attack.
- No drawn boundary lines. Region discrimination relies on colour discontinuity and texture-style change, not strokes.
Recommended defaults:

- Image canvas: `1024×1024`
- Painted square: `820×820`, centred
- Region-synthesis grid: `50×50` (low resolution, upsampled with smooth per-region masks)
- Number of final regions: in the range `6–12`
## Generation Procedure
1. Sample target count and partition
   - Sample target `K` in `[min_regions, max_regions]`.
   - Choose `K` spaced seed cells on a 50×50 lattice.
   - Grow connected regions from the seeds via priority-queue expansion, weighted by per-region noise fields so boundaries are organic.
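The seed-and-grow step can be sketched with a plain heap; the per-region noise fields are simplified here to uniform random frontier priorities, which already produces uneven, organic boundaries (a sketch, not the exact generator):

```python
import heapq
import random

def grow_regions(rows, cols, seeds, seed=0):
    """Flood K connected regions outward from seed cells on a lattice.

    Each frontier cell carries a random priority, so region fronts
    advance unevenly and shared boundaries come out organic rather
    than straight Voronoi edges."""
    rng = random.Random(seed)
    label = [[-1] * cols for _ in range(rows)]
    heap = []
    for k, (r, c) in enumerate(seeds):
        label[r][c] = k
        heap.append((rng.random(), r, c, k))
    heapq.heapify(heap)
    while heap:
        _, r, c, k = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc] == -1:
                label[nr][nc] = k  # claim the cell for region k
                heapq.heappush(heap, (rng.random(), nr, nc, k))
    return label
```

Regions stay connected by construction, since each region only expands from cells it already owns.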
2. Upsample and clean
   - Upsample the label map from 50×50 to canvas resolution by per-region soft-mask interpolation + argmax (smooth boundaries without aliasing).
   - Absorb tiny connected-component slivers (< 0.2 % of canvas) into their dominant neighbour label.
   - Relabel region ids contiguously after cleanup.
   - Reject the sample if the smallest remaining region covers less than `min_region_frac` (default 2.5 %) of the canvas; this avoids tiny near-invisible regions slipping through.
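The rejection rule at the end of this step reduces to a single area check. A sketch, assuming the label map is a list of rows and treating the labelled area as the canvas (`min_region_frac` as in the spec):

```python
from collections import Counter

def passes_min_region_check(labels, min_region_frac=0.025):
    """Accept the sample only if the smallest surviving region covers
    at least min_region_frac (default 2.5%) of the labelled area."""
    counts = Counter(v for row in labels for v in row)
    total = sum(counts.values())
    return min(counts.values()) / total >= min_region_frac
```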
3. Assign gradient + texture per region
   - Build the region adjacency graph at canvas resolution.
   - Greedy assignment over regions (descending degree first):
     - Pick `(colour_a, colour_b)` from the palette such that no neighbour shares the same unordered pair AND `pair_hue_gap` to every neighbour is at least `hue_gap_min` (default 55°).
     - Apply per-region HSV jitter (≈ ±18° hue, ±0.12 sat, ±0.10 val) to the two endpoints.
     - Pick a random gradient angle.
   - Assign a texture style + parameters per region, preferring styles not used by already-assigned neighbours.
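The greedy pass can be sketched as follows. For brevity only hue anchors are tracked (the real assignment also carries textures and the HSV jitter), and `assign_colour_pairs` is an illustrative name:

```python
def hue_gap(h1, h2):
    """Circular hue distance in degrees."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def assign_colour_pairs(adjacency, palette_pairs, hue_gap_min=55.0):
    """Greedy colour-pair assignment, highest-degree regions first.

    adjacency: {region_id: [neighbour ids]}.
    palette_pairs: candidate (hue_a, hue_b) anchors in degrees.
    Returns {region_id: pair}, or None when the constraints cannot be
    met (the caller then regenerates the sample)."""
    def pair_gap(p, q):
        return min(hue_gap(a, b) for a in p for b in q)

    chosen = {}
    for region in sorted(adjacency, key=lambda r: len(adjacency[r]), reverse=True):
        taken = [chosen[n] for n in adjacency[region] if n in chosen]
        for pair in palette_pairs:
            # neighbour must not share the unordered pair, and every
            # cross pairing of hues must clear the minimum gap
            if all(frozenset(pair) != frozenset(p) and pair_gap(pair, p) >= hue_gap_min
                   for p in taken):
                chosen[region] = pair
                break
        else:
            return None
    return chosen
```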
4. Render
   - For each region, paint the linear gradient between its two jittered endpoint colours along the chosen angle.
   - Multiply by `(1 + amp · texture)` per pixel to add the per-region texture modulation.
   - Multiply by `(1 + 0.05 · global_speckle)` to add the canvas-wide high-frequency noise that drowns Canny boundary detection.
   - Composite the painted square onto the off-white canvas with a thin dark border.
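Per pixel, the render step composes three factors. A minimal scalar sketch (the `amp` default of 0.35 is an assumption; the spec only names the parameter):

```python
def shade(gradient, texture, speckle, amp=0.35):
    """Compose one pixel: gradient value in [0, 1], modulated first by
    the per-region texture, then by the 5% global speckle; clipped
    back to [0, 1]."""
    v = gradient * (1.0 + amp * texture) * (1.0 + 0.05 * speckle)
    return max(0.0, min(1.0, v))
```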
## Quality Checks
Reject or regenerate samples if:
- after cleanup the actual region count drops below 2
- any region is smaller than `min_region_frac` of the canvas
- gradient assignment fails to satisfy the adjacency constraints

The final answer field always reflects the post-cleanup region count, which may differ from the sampled target `K`.
## Anti-Shortcut Notes
The previous version used dashed boundary lines on a uniform fill, which was defeated in one step by `cv2.dilate(boundary, k)` followed by `cv2.connectedComponents`. The new rendering blocks several attack families simultaneously:
- Boundary edge detection (Canny / Sobel): drowned by global speckle + per-region texture so edges fire uniformly across the whole image rather than only at region boundaries.
- Colour histogram / k-means clustering: the per-region HSV jitter ensures each region renders unique pixel colours even when two regions share the same palette anchors, so cluster counts have no clean correspondence to region counts.
- LAB-space quantisation + connected components: likewise defeated; the jitter scatters each region's pixels across many quantisation bins.
- Smooth-then-detect (Gaussian / median / bilateral filter + Canny): smoothing strong enough to kill the texture also blurs region boundaries enough that small regions merge.
Empirically, the strongest CV attack tested (median blur, kernel size 13, followed by Canny) gives MAE ≈ 3.5 with 0/6 exact matches across a held-out set of v7 prototypes.
## Annotation Format
Each sample stores the partition metadata required to reproduce or verify the answer:
```json
{
  "image": "images/counting_regions_00000.png",
  "width": 1024,
  "height": 1024,
  "grid_rows": 50,
  "grid_cols": 50,
  "square_left": 102.0,
  "square_top": 102.0,
  "square_size": 820.0,
  "num_regions": 7,
  "question": "How many separated regions are inside the square? ...",
  "answer": 7,
  "difficulty": "medium",
  "region_seed_cells": [...],
  "region_cell_counts": [...],
  "region_adjacency": [[0, 2], [0, 3], ...]
}
```
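A record can be sanity-checked against the invariants above: the stored answer must equal the post-cleanup region count, and adjacency pairs must reference valid region ids. A sketch using a trimmed record (the `validate_record` helper is illustrative):

```python
import json

def validate_record(rec):
    """Check internal consistency of one annotation record."""
    assert rec["answer"] == rec["num_regions"], "answer must match post-cleanup count"
    for a, b in rec["region_adjacency"]:
        assert 0 <= a < rec["num_regions"] and 0 <= b < rec["num_regions"]

line = ('{"image": "images/counting_regions_00000.png", "num_regions": 7, '
        '"answer": 7, "region_adjacency": [[0, 2], [0, 3]]}')
validate_record(json.loads(line))
```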
## Output Organization
```
counting_regions/
    creation.py
    creation.md
    annotations.jsonl
    data.json
    images/
        counting_regions_00000.png
        ...
```