Initial release v0.4.0 — ActiveVision benchmark (85 instances, 17 tasks)
- LICENSE +40 -0
- README.md +106 -0
- code/distributed_scanning/attribute_group_counting/creation.md +28 -0
- code/distributed_scanning/attribute_group_counting/creation.py +419 -0
- code/distributed_scanning/attribute_group_counting/data.json +27 -0
- code/distributed_scanning/bounded_faces_counting/creation.md +30 -0
- code/distributed_scanning/bounded_faces_counting/creation.py +473 -0
- code/distributed_scanning/bounded_faces_counting/data.json +67 -0
- code/distributed_scanning/counting_connected_components/creation.md +13 -0
- code/distributed_scanning/counting_connected_components/creation.py +574 -0
- code/distributed_scanning/counting_connected_components/data.json +32 -0
- code/distributed_scanning/counting_regions/creation.md +170 -0
- code/distributed_scanning/counting_regions/creation.py +783 -0
- code/distributed_scanning/counting_regions/data.json +32 -0
- code/distributed_scanning/tangled_loops/creation.md +107 -0
- code/distributed_scanning/tangled_loops/creation.py +531 -0
- code/distributed_scanning/tangled_loops/data.json +32 -0
- code/gpt_image_prompts.json +0 -0
- code/scope.md +38 -0
- code/sequential_traversal/arrow_chain/airplane_stamps/00.png +3 -0
- code/sequential_traversal/arrow_chain/airplane_stamps/01.png +3 -0
- code/sequential_traversal/arrow_chain/airplane_stamps/02.png +3 -0
- code/sequential_traversal/arrow_chain/airplane_stamps/03.png +3 -0
- code/sequential_traversal/arrow_chain/airplane_stamps/04.png +3 -0
- code/sequential_traversal/arrow_chain/airplane_stamps/05.png +3 -0
- code/sequential_traversal/arrow_chain/airplane_template_grid.png +3 -0
- code/sequential_traversal/arrow_chain/bird_stamps/00.png +3 -0
- code/sequential_traversal/arrow_chain/bird_stamps/01.png +3 -0
- code/sequential_traversal/arrow_chain/bird_stamps/02.png +3 -0
- code/sequential_traversal/arrow_chain/bird_stamps/03.png +3 -0
- code/sequential_traversal/arrow_chain/bird_stamps/04.png +3 -0
- code/sequential_traversal/arrow_chain/bird_stamps/05.png +3 -0
- code/sequential_traversal/arrow_chain/bird_template_grid.png +3 -0
- code/sequential_traversal/arrow_chain/creation.py +700 -0
- code/sequential_traversal/arrow_chain/data.json +32 -0
- code/sequential_traversal/arrow_chain/fish_stamps/00.png +3 -0
- code/sequential_traversal/arrow_chain/fish_stamps/01.png +3 -0
- code/sequential_traversal/arrow_chain/fish_stamps/02.png +3 -0
- code/sequential_traversal/arrow_chain/fish_stamps/03.png +3 -0
- code/sequential_traversal/arrow_chain/fish_stamps/04.png +3 -0
- code/sequential_traversal/arrow_chain/fish_stamps/05.png +3 -0
- code/sequential_traversal/arrow_chain/fish_stamps/offsets.json +8 -0
- code/sequential_traversal/arrow_chain/fish_template_grid.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/00.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/01.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/02.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/03.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/04.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/05.png +3 -0
- code/sequential_traversal/arrow_chain/key_stamps/offsets.json +6 -0
LICENSE
ADDED
@@ -0,0 +1,40 @@
Attribution 4.0 International (CC BY 4.0)

The ActiveVision benchmark dataset and accompanying generation pipeline are
licensed under the Creative Commons Attribution 4.0 International License.

You are free to:

Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose,
even commercially.

Under the following terms:

Attribution — You must give appropriate credit, provide a link to the
license, and indicate if changes were made. You may do so in any reasonable
manner, but not in any way that suggests the licensor endorses you or your
use.

No additional restrictions — You may not apply legal terms or technological
measures that legally restrict others from doing anything the license permits.

Notices:

You do not have to comply with the license for elements of the material in
the public domain or where your use is permitted by an applicable exception
or limitation.

No warranties are given. The license may not give you all of the permissions
necessary for your intended use. For example, other rights such as publicity,
privacy, or moral rights may limit how you use the material.

Full legal code: https://creativecommons.org/licenses/by/4.0/legalcode
Human-readable summary: https://creativecommons.org/licenses/by/4.0/

────────────────────────────────────────────────────────────────────────────

Citation:

Anonymous. ActiveVision: An Exam for Active Observers.
NeurIPS 2026 Datasets and Benchmarks Track (under review).
README.md
ADDED
@@ -0,0 +1,106 @@
---
license: cc-by-4.0
pretty_name: ActiveVision
size_categories:
- n<1K
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- active-perception
- active-vision
- multimodal
- benchmark
- synthetic
- mllm-evaluation
- visual-reasoning
configs:
- config_name: default
  data_files:
  - split: test
    path: data/manifest.json
---

# ActiveVision

An exam for active observers — a benchmark for diagnosing whether multimodal large language models can iteratively look at an image during reasoning, instead of compressing it into a fixed embedding once.

## What's in this archive

| Path | Contents |
|---|---|
| `data/images/` | 85 photorealistic PNGs, the released benchmark images |
| `data/manifest.json` | Canonical index — one record per instance with `id`, `task`, `category`, `image`, `image_sha256`, `image_source_filename`, `question`, `answer` |
| `data/annotations/<task>.jsonl` | Per-task verification metadata (5 records per file × 17 tasks = 85). Includes the structural ground truth used to compute each answer (region adjacency, arrow chains, traversal paths, Hausdorff distances, etc.) |
| `code/<category>/<task>/creation.py` | Seedable, deterministic generator. The released images at v0.4 are produced at `--difficulty 4`. |
| `code/<category>/<task>/creation.md` | Per-task design and anti-shortcut spec (where present). |
| `code/<category>/<task>/data.json` | Per-task definition: shared question text, answer format. |
| `code/gpt_image_prompts.json` | One gpt-image-2 image-edit prompt per task, used to re-render the matplotlib structural draft as a photorealistic variant while preserving the discriminative structure. |
| `code/scope.md` | Project specification: the three task families and the six shortcut classes the design defeats. |
| `croissant.json` | Croissant 1.0 + Croissant-RAI 1.0 metadata for this dataset. |
| `LICENSE` | CC BY 4.0. |

## Statistics

- **85 instances**, 17 tasks, 3 task families.
- **Distributed Scanning** (25 instances, 5 tasks): attribute_group_counting, bounded_faces_counting, counting_connected_components, counting_regions, tangled_loops.
- **Sequential Traversal** (25 instances, 5 tasks): arrow_chain, color_zone_sequence, line_intersections, maze, traverse_ordering.
- **Visual Attribute Transfer** (35 instances, 7 tasks): constellation_match_count, contour_silhouette_count, spot_the_contour_diff, spot_the_field_diff, spot_the_signal_diff, spot_the_stroke_diff, stroke_gesture_count.

## Loading

```python
import json, pathlib

root = pathlib.Path("data_neurips2026")
manifest = json.loads((root / "data" / "manifest.json").read_text())
for item in manifest:
    image_path = root / item["image"]
    question = item["question"]
    gold = item["answer"]
```
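
Each manifest record also carries an `image_sha256` field, so image integrity can be spot-checked while loading. A minimal sketch (the `verify_sha256` helper below is ours for illustration, not part of the released code):

```python
import hashlib
from pathlib import Path


def verify_sha256(image_path: Path, expected_hex: str) -> bool:
    """Return True iff the file's SHA-256 digest matches the manifest entry."""
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return digest == expected_hex
```

Inside the loading loop this would be called as `verify_sha256(root / item["image"], item["image_sha256"])`.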

## Generation pipeline

The pipeline is the artifact. Every benchmark image is produced in two deterministic stages:

1. **Geometric draft (matplotlib).** `creation.py --seed S --difficulty 4` lays out a structural specification (region partition, arrow positions, maze graph, brush-stroke field, etc.) and computes the answer in closed form. Output: a plain matplotlib PNG.
2. **Photorealistic re-render (gpt-image-2).** The matplotlib draft is sent to OpenAI gpt-image-2 via the image-edit endpoint, with a per-task prompt from `code/gpt_image_prompts.json` that preserves silhouettes, positions, counts, and labels but replaces the surface material with a photorealistic style (stones on sand, hedge maze from above, starfield, etc.).

The released benchmark contains only the Stage-2 images. Held-out splits with unpublished seeds and additional difficulties can be regenerated from the included generators.
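
Stage 1 can be invoked directly from this archive. A sketch for one task (the output directory name is our choice; the flags are those defined in that generator's argparse interface):

```shell
python code/distributed_scanning/attribute_group_counting/creation.py \
    --output-root out/attribute_group_counting \
    --seed 42 --difficulty 4 --count 5
```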

## Anti-shortcut design

Every task is constructed to defeat six shortcut classes (full discussion in `code/scope.md`):

1. One-shot symbolic perception, then text reasoning.
2. End-to-end CV scripting (OCR / edge / blob / vector tracing).
3. Prior leakage / answer-distribution memorisation.
4. Gestalt heuristics that bypass step-by-step tracing.
5. Memorisation of the released benchmark (mitigated because the *pipeline* is released, not just the data).
6. Tool-using zoom/crop/OCR loops.

## Responsible AI

See `croissant.json` for the full RAI block. Headlines:

- **Synthetic only**: 100% synthetic. No human subjects, no PII, no real-world events.
- **Use cases**: testing and validation. **Not for training.**
- **Limitations**: small evaluation set; adversarial-by-design (not predictive of general vision-language ability); photorealistic re-renders depend on a closed-source service.
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## Validating the Croissant file

Before submission, validate `croissant.json` with the official Croissant validator at:

> https://huggingface.co/spaces/JoaquinVanschoren/croissant-checker

(Run well in advance of any submission deadline — the doc warns of heavy load near deadlines.)

## Citation

```
Anonymous. ActiveVision: An Exam for Active Observers.
NeurIPS 2026 Datasets and Benchmarks Track (under review).
```
|
code/distributed_scanning/attribute_group_counting/creation.md
ADDED
|
@@ -0,0 +1,28 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Attribute Group Counting

## Goal

Scatter many small visual elements across the canvas. All elements are the same color — groups are distinguished **only by shape**. Each shape is an **irregular random blob** generated via Fourier harmonics in polar coordinates. They cannot be described by simple geometric names (circle, square, etc.), forcing the model to perform visual pairing rather than textual identification. The model must scan the entire image, visually compare every element's silhouette, and count how many distinct groups have at least 2 members.

## Element library

- Shapes: procedurally generated irregular blobs (random Fourier harmonics, 8-14 lobes, unique per group)
- Color: uniform dark gray for all elements (no color differentiation)
- Texture: solid fill for all elements (no texture differentiation)

A group identity is determined solely by shape. Each group has 2-5 elements, all sharing the exact same silhouette. There are no decoys (every element belongs to a group of size ≥2).
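
The "exact same silhouette" criterion is operationalised in `creation.py` as a rotation- and reflection-invariant distance between polar radius signatures. A condensed sketch of just that comparison (signature extraction omitted; inputs are assumed to be radius profiles sampled on a common angular grid):

```python
import numpy as np


def signature_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Mean absolute gap between two radial signatures, minimised over
    cyclic shifts (rotation) and reversal (reflection)."""
    best = float("inf")
    for cand in (sig_b, sig_b[::-1]):           # original and reflected
        for shift in range(len(cand)):          # all cyclic rotations
            best = min(best, float(np.mean(np.abs(sig_a - np.roll(cand, shift)))))
    return best
```

Two placements of the same blob score ≈ 0; candidate blobs are accepted only when their pairwise distance exceeds the generator's `min_pairwise_silhouette_distance` threshold.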

## Question

"How many distinct groups of matching shapes are in the image? All shapes are the same color. Two shapes belong to the same group if and only if they have exactly the same silhouette. The shapes are irregular and cannot be described by simple geometric names — you must visually compare them. Count the number of distinct shape groups and report the total."

## Generation

1. Pick `K` (number of groups), e.g. 4-9.
2. Generate `K` distinct random blob shapes using Fourier harmonics in polar coordinates.
3. For each group, sample size in 2-5.
4. Place all elements at random positions with minimum spacing so they don't overlap.
5. Render using matplotlib for smooth, anti-aliased outlines.
6. Store annotations.

GT = `K`.
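
Steps 1-2 amount to summing random sinusoidal harmonics onto a base radius and converting to Cartesian points. A condensed sketch of the blob construction (parameter ranges simplified relative to the full generator that follows):

```python
import math
import random
from typing import List, Tuple


def fourier_blob(rng: random.Random, num_samples: int = 80) -> List[Tuple[float, float]]:
    """Closed irregular outline: polar radius = base + random harmonics."""
    base = rng.uniform(0.5, 0.8)
    harmonics = [(k, rng.uniform(0.05, 0.3) / k, rng.uniform(0, 2 * math.pi))
                 for k in range(2, rng.randint(4, 9))]
    pts = []
    for i in range(num_samples):
        a = 2 * math.pi * i / num_samples
        r = base + sum(amp * math.sin(f * a + ph) for f, amp, ph in harmonics)
        r = max(r, 0.12)  # keep the outline from collapsing through the centre
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts
```

Because the frequencies, amplitudes, and phases are all drawn fresh per shape, two independently sampled blobs almost never share a silhouette.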
code/distributed_scanning/attribute_group_counting/creation.py
ADDED
@@ -0,0 +1,419 @@
from __future__ import annotations

import argparse
import json
import math
import random
from pathlib import Path
from typing import Dict, List, Tuple

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.path import Path as MplPath
import numpy as np


# ---------------------------------------------------------------------------
# Random irregular shape generation
# ---------------------------------------------------------------------------

def generate_blob_vertices(rng: random.Random,
                           fourier_perturbation_amplitude: float = 0.20,
                           num_samples: int = 80) -> List[Tuple[float, float]]:
    """Generate a random irregular closed shape as unit-radius vertices.

    ``fourier_perturbation_amplitude`` scales the harmonic amplitude budget.
    At the default of 0.20 we match the original shape character; smaller
    values produce blobs that are closer to a smooth near-circular profile,
    making silhouettes harder to distinguish visually.
    """
    # Vary structure dramatically across shapes
    num_lobes = rng.randint(3, 12)
    base_radius = rng.uniform(0.5, 0.8)

    # Random harmonics with wide amplitude range. Scale amplitudes by the
    # perturbation factor so that smaller perturbation => gentler bumps.
    amp_scale = max(fourier_perturbation_amplitude, 1e-6) / 0.20
    harmonics = []
    for k in range(2, num_lobes + 1):
        amp = rng.uniform(0.08, 0.45) / (k ** rng.uniform(0.3, 0.7))
        harmonics.append((k, amp * amp_scale, rng.uniform(0, 2 * math.pi)))

    # Optionally add one dominant low-frequency lobe for variety
    if rng.random() < 0.5:  # unrelated coin (kept as-is)
        dom_freq = rng.randint(2, 4)
        dom_amp = rng.uniform(0.15, 0.35) * amp_scale
        dom_phase = rng.uniform(0, 2 * math.pi)
        harmonics.append((dom_freq, dom_amp, dom_phase))

    # Build polar radius function
    angles = [2 * math.pi * i / num_samples for i in range(num_samples)]
    radii = []
    for a in angles:
        r = base_radius
        for freq, amp, phase in harmonics:
            r += amp * math.sin(freq * a + phase)
        radii.append(max(r, 0.12))

    # Convert to cartesian
    pts = [(r * math.cos(a), r * math.sin(a)) for r, a in zip(radii, angles)]

    # Random aspect stretch + rotation for more variety
    stretch_x = rng.uniform(0.6, 1.0)
    stretch_y = rng.uniform(0.6, 1.0)
    rot = rng.uniform(0, 2 * math.pi)
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    stretched = []
    for x, y in pts:
        sx, sy = x * stretch_x, y * stretch_y
        rx = sx * cos_r - sy * sin_r
        ry = sx * sin_r + sy * cos_r
        stretched.append((rx, ry))
    pts = stretched

    # Normalise so max extent = 1.0
    max_ext = max(max(abs(x), abs(y)) for x, y in pts)
    if max_ext > 0:
        pts = [(x / max_ext, y / max_ext) for x, y in pts]

    return pts


def _silhouette_distance(a: List[Tuple[float, float]],
                         b: List[Tuple[float, float]],
                         num_samples: int = 64) -> float:
    """Rotation/reflection-invariant silhouette distance between two unit blobs.

    We compare the two shapes via their polar radius signatures sampled on
    a common angular grid. We minimise over cyclic rotations and a reflection
    to account for the shapes' arbitrary orientation.
    """
    def radial_signature(pts: List[Tuple[float, float]]) -> np.ndarray:
        xs = np.asarray([p[0] for p in pts])
        ys = np.asarray([p[1] for p in pts])
        # Centre at centroid
        xs = xs - xs.mean()
        ys = ys - ys.mean()
        angs = np.arctan2(ys, xs) % (2 * np.pi)
        rads = np.hypot(xs, ys)
        grid = np.linspace(0, 2 * np.pi, num_samples, endpoint=False)
        order = np.argsort(angs)
        angs_sorted = angs[order]
        rads_sorted = rads[order]
        # Extend for wrap-around interpolation
        ang_ext = np.concatenate([angs_sorted - 2 * np.pi, angs_sorted,
                                  angs_sorted + 2 * np.pi])
        rad_ext = np.concatenate([rads_sorted, rads_sorted, rads_sorted])
        sig = np.interp(grid, ang_ext, rad_ext)
        # Normalise
        m = sig.max()
        if m > 0:
            sig = sig / m
        return sig

    sa = radial_signature(a)
    sb = radial_signature(b)
    sb_rev = sb[::-1]

    best = float("inf")
    for sig_b in (sb, sb_rev):
        for shift in range(num_samples):
            rolled = np.roll(sig_b, shift)
            d = float(np.mean(np.abs(sa - rolled)))
            if d < best:
                best = d
    return best


def generate_distinct_shapes(rng: random.Random, n: int,
                             fourier_perturbation_amplitude: float,
                             min_pairwise_silhouette_distance: float,
                             max_attempts_per_shape: int = 60,
                             ) -> List[List[Tuple[float, float]]]:
    """Generate ``n`` blob shapes whose pairwise silhouette distances all exceed
    ``min_pairwise_silhouette_distance``.

    Falls back to the best-available shape after ``max_attempts_per_shape``
    retries to avoid pathological infinite loops at tight thresholds.
    """
    shapes: List[List[Tuple[float, float]]] = []
    for _ in range(n):
        best_candidate = None
        best_min_dist = -1.0
        for _attempt in range(max_attempts_per_shape):
            verts = generate_blob_vertices(rng, fourier_perturbation_amplitude)
            if not shapes:
                shapes.append(verts)
                break
            min_d = min(_silhouette_distance(verts, s) for s in shapes)
            if min_d >= min_pairwise_silhouette_distance:
                shapes.append(verts)
                break
            if min_d > best_min_dist:
                best_min_dist = min_d
                best_candidate = verts
        else:
            # Couldn't satisfy threshold; use best candidate we found.
            shapes.append(best_candidate if best_candidate is not None
                          else generate_blob_vertices(rng, fourier_perturbation_amplitude))
    return shapes


# ---------------------------------------------------------------------------
# Position sampling
# ---------------------------------------------------------------------------

def sample_positions(rng: random.Random, n: int, width: int, height: int,
                     margin: int, min_dist: float, max_attempts: int = 8000) -> List[Tuple[float, float]]:
    pts: List[Tuple[float, float]] = []
    for _ in range(max_attempts):
        if len(pts) == n:
            break
        x = rng.uniform(margin, width - margin)
        y = rng.uniform(margin, height - margin)
        if all(math.hypot(x - px, y - py) >= min_dist for px, py in pts):
            pts.append((x, y))
    return pts


# ---------------------------------------------------------------------------
# Replica sampling
# ---------------------------------------------------------------------------

def _sample_replicas(rng: random.Random, d_val: int) -> int:
    """Replica count from a flatter geometric: continue with prob 0.8 each
    step, capped at max(2, d). P(k=1)=0.2, P(k=2)=0.16, P(k=3)=0.128, …,
    with the residual mass piling at k=cap. Distribution is much more
    spread-out than p=0.5 → meaningful chance of seeing replica counts
    near the cap, while small counts are still common.
    """
    cap = max(2, d_val)
    k = 1
    while k < cap and rng.random() < 0.8:
        k += 1
    return k


# ---------------------------------------------------------------------------
# Sample generation
# ---------------------------------------------------------------------------

ELEMENT_COLOR = "#303030"


def sample_instance(rng: random.Random, width: int, height: int,
                    answer_lo: int, answer_hi: int,
                    d_val: int,
                    fourier_perturbation_amplitude: float,
                    min_pairwise_silhouette_distance: float,
                    radius: int) -> Dict[str, object] | None:
    num_groups = rng.randint(answer_lo, answer_hi)

    blob_shapes = generate_distinct_shapes(
        rng,
        num_groups,
        fourier_perturbation_amplitude=fourier_perturbation_amplitude,
        min_pairwise_silhouette_distance=min_pairwise_silhouette_distance,
    )

    elements: List[Dict[str, object]] = []
    group_records: List[Dict[str, object]] = []
    for gid, blob in enumerate(blob_shapes):
        size = _sample_replicas(rng, d_val)
        for _ in range(size):
            elements.append({
                "shape_id": gid,
                "group": gid,
            })
        group_records.append({
            "id": gid,
            "shape_id": gid,
            "size": size,
            "shape_vertices": [[round(x, 4), round(y, 4)] for x, y in blob],
        })

    n = len(elements)
    min_dist = radius * 2.4
    positions = sample_positions(rng, n, width, height, margin=radius + 6, min_dist=min_dist)
    if len(positions) < n:
        return None

    rng.shuffle(positions)
    for el, (x, y) in zip(elements, positions):
        el["x"] = round(x, 2)
        el["y"] = round(y, 2)

    return {
        "width": width,
        "height": height,
        "num_groups": num_groups,
        "num_elements": n,
        "answer": num_groups,
        "elements": elements,
        "groups": group_records,
        "radius": radius,
        "fourier_perturbation_amplitude": round(fourier_perturbation_amplitude, 5),
        "min_pairwise_silhouette_distance": round(min_pairwise_silhouette_distance, 5),
        "blob_shapes": [[[round(x, 4), round(y, 4)] for x, y in b] for b in blob_shapes],
    }


def render_instance(out_path: Path, record: Dict[str, object]) -> None:
    width = int(record["width"])
    height = int(record["height"])
    radius = float(record["radius"])
    blob_shapes = record["blob_shapes"]

    fig = plt.figure(figsize=(width / 100, height / 100), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.set_xlim(0, width)
    ax.set_ylim(height, 0)
    ax.axis("off")
    ax.set_facecolor("#faf6ed")

    for el in record["elements"]:
        cx = float(el["x"])
        cy = float(el["y"])
        blob = blob_shapes[el["shape_id"]]

        # Scale blob vertices to canvas coordinates
        verts = [(cx + vx * radius, cy + vy * radius) for vx, vy in blob]
        verts.append(verts[0])

        codes = [MplPath.MOVETO] + [MplPath.LINETO] * (len(verts) - 2) + [MplPath.CLOSEPOLY]
        path = MplPath(verts, codes)
        patch = mpatches.PathPatch(
            path,
            facecolor=ELEMENT_COLOR,
            edgecolor=ELEMENT_COLOR,
            linewidth=1.8,
            alpha=0.92,
            joinstyle="round",
            capstyle="round",
            zorder=2,
        )
        ax.add_patch(patch)

    fig.savefig(out_path, dpi=100, bbox_inches="tight", pad_inches=0)
    plt.close(fig)


QUESTION = (
    "How many distinct shapes are in the image? "
    "The image contains many small irregular shapes (blobs) scattered across the canvas. "
    "All shapes are the same color. Two shapes belong to the same type if and only if "
    "they have exactly the same silhouette. The shapes are irregular and cannot be described "
    "by simple geometric names — you must visually compare them. "
    "Some shapes may appear only once while others appear multiple times. "
    "Count the total number of distinct shape types and report the count as a positive integer. "
    "Provide your final answer enclosed in <answer>...</answer> tags."
)


def generate_dataset(rng: random.Random, count: int, output_dir: Path,
                     width: int, height: int,
                     answer_lo: int, answer_hi: int,
                     d_val: int,
                     fourier_perturbation_amplitude: float,
                     min_pairwise_silhouette_distance: float,
                     radius: int) -> None:
    images_dir = output_dir / "images"
    images_dir.mkdir(parents=True, exist_ok=True)

    # Force evenly-spaced answers across [answer_lo, answer_hi].
    if count > 1:
        forced_targets = [
            int(round(answer_lo + i * (answer_hi - answer_lo) / (count - 1)))
            for i in range(count)
        ]
    else:
        forced_targets = [answer_lo]
    print(f"forced group counts: {forced_targets}")

    records: List[Dict[str, object]] = []
    data_records: List[Dict[str, object]] = []
    for idx in range(count):
        target = forced_targets[idx]
        for _ in range(2000):
            rec = sample_instance(
                rng, width, height,
                answer_lo=target, answer_hi=target,
                d_val=d_val,
                fourier_perturbation_amplitude=fourier_perturbation_amplitude,
                min_pairwise_silhouette_distance=min_pairwise_silhouette_distance,
                radius=radius,
            )
            if rec is not None and rec.get("answer") == target:
                break
        else:
            print(f"Skip {idx} (could not hit target={target})")
            continue
        name = f"attribute_group_counting_{idx:05d}.png"
        render_instance(images_dir / name, rec)
        rec["image"] = f"images/{name}"
        rec["question"] = QUESTION
        records.append(rec)
        data_records.append({"image": rec["image"], "question": QUESTION, "gt": rec["answer"]})
        print(f"  [{idx+1}/{count}] groups={rec['answer']} elements={rec['num_elements']}")

    (output_dir / "annotations.jsonl").write_text(
        "\n".join(json.dumps(r) for r in records) + "\n"
    )
    (output_dir / "data.json").write_text(json.dumps(data_records, indent=4))


def parse_args() -> argparse.Namespace:
    p = argparse.ArgumentParser()
    p.add_argument("--output-root", type=Path, required=True)
    p.add_argument("--count", type=int, default=30)
    p.add_argument("--width", type=int, default=900)
    p.add_argument("--height", type=int, default=900)
    p.add_argument("--radius", type=int, default=36)
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--difficulty", type=int, default=5,
                   help="Integer difficulty >=0; scales distinct-shape count, "
                        "replica geometric cap, blob perturbation, silhouette "
                        "separation, and canvas area.")
    return p.parse_args()


def main() -> None:
    args = parse_args()
    rng = random.Random(args.seed)

    d = max(0, int(args.difficulty))

    # Difficulty formulas.
    answer_lo = 3
    answer_hi = 5 + d
|
| 391 |
+
fourier_perturbation_amplitude = 0.20 / (1 + 0.2 * d)
|
| 392 |
+
min_pairwise_silhouette_distance = 0.22 / (1 + 0.2 * d)
|
| 393 |
+
|
| 394 |
+
# Canvas scaling: area ~ total shape instances (answer_hi * E[replicas=2]).
|
| 395 |
+
N_d = (5 + d) * 2
|
| 396 |
+
N_0 = 5 * 2
|
| 397 |
+
s = math.sqrt(max(1.0, N_d / N_0))
|
| 398 |
+
args.width = int(round(args.width * s))
|
| 399 |
+
args.height = int(round(args.height * s))
|
| 400 |
+
|
| 401 |
+
print(f"difficulty={d} answer_range=[{answer_lo},{answer_hi}] "
|
| 402 |
+
f"fourier_amp={fourier_perturbation_amplitude:.4f} "
|
| 403 |
+
f"silhouette_min={min_pairwise_silhouette_distance:.4f} "
|
| 404 |
+
f"canvas={args.width}x{args.height}")
|
| 405 |
+
|
| 406 |
+
generate_dataset(
|
| 407 |
+
rng=rng, count=args.count, output_dir=args.output_root,
|
| 408 |
+
width=args.width, height=args.height,
|
| 409 |
+
answer_lo=answer_lo, answer_hi=answer_hi,
|
| 410 |
+
d_val=d,
|
| 411 |
+
fourier_perturbation_amplitude=fourier_perturbation_amplitude,
|
| 412 |
+
min_pairwise_silhouette_distance=min_pairwise_silhouette_distance,
|
| 413 |
+
radius=args.radius,
|
| 414 |
+
)
|
| 415 |
+
print(f"Saved to {args.output_root}")
|
| 416 |
+
|
| 417 |
+
|
| 418 |
+
if __name__ == "__main__":
|
| 419 |
+
main()
|
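The forced-target computation in `generate_dataset` above can be checked in isolation. Note that Python's built-in `round` uses round-half-to-even ("banker's rounding"), which is why a midpoint such as 6.5 rounds down to 6. The helper below is an illustrative extraction, not part of the script:

```python
def evenly_spaced_targets(lo: int, hi: int, count: int) -> list:
    # Same formula as generate_dataset: linearly interpolate over [lo, hi]
    # and round each value to the nearest integer (half-to-even).
    if count <= 1:
        return [lo]
    return [int(round(lo + i * (hi - lo) / (count - 1))) for i in range(count)]
```

At difficulty 5 the answer range is [3, 10], and with five instances this yields the targets 3, 5, 6, 8, 10 (6.5 rounding to the even 6), which match the `gt` values in the sample `data.json` that follows.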
code/distributed_scanning/attribute_group_counting/data.json
ADDED

@@ -0,0 +1,27 @@

[
    {
        "image": "images/attribute_group_counting_00000.png",
        "question": "How many distinct shapes are in the image? The image contains many small pieces scattered across the canvas. Two pieces belong to the same type if and only if they have exactly the same silhouette. Pieces may differ in surface texture or colour tone \u2014 only the silhouette outline matters. Count the total number of distinct silhouette types and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
        "gt": 3
    },
    {
        "image": "images/attribute_group_counting_00001.png",
        "question": "How many distinct shapes are in the image? The image contains many small pieces scattered across the canvas. Two pieces belong to the same type if and only if they have exactly the same silhouette. Pieces may differ in surface texture or colour tone \u2014 only the silhouette outline matters. Count the total number of distinct silhouette types and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
        "gt": 5
    },
    {
        "image": "images/attribute_group_counting_00002.png",
        "question": "How many distinct shapes are in the image? The image contains many small pieces scattered across the canvas. Two pieces belong to the same type if and only if they have exactly the same silhouette. Pieces may differ in surface texture or colour tone \u2014 only the silhouette outline matters. Count the total number of distinct silhouette types and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
        "gt": 6
    },
    {
        "image": "images/attribute_group_counting_00003.png",
        "question": "How many distinct shapes are in the image? The image contains many small pieces scattered across the canvas. Two pieces belong to the same type if and only if they have exactly the same silhouette. Pieces may differ in surface texture or colour tone \u2014 only the silhouette outline matters. Count the total number of distinct silhouette types and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
        "gt": 8
    },
    {
        "image": "images/attribute_group_counting_00004.png",
        "question": "How many distinct shapes are in the image? The image contains many small pieces scattered across the canvas. Two pieces belong to the same type if and only if they have exactly the same silhouette. Pieces may differ in surface texture or colour tone \u2014 only the silhouette outline matters. Count the total number of distinct silhouette types and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
        "gt": 10
    }
]
code/distributed_scanning/bounded_faces_counting/creation.md
ADDED

@@ -0,0 +1,30 @@

# Bounded Faces Counting

## Goal

Render several closed curves (ellipses) on a canvas. The curves overlap to form a planar arrangement that subdivides the interior into one or more *bounded faces*. The model must scan the entire image and count how many distinct enclosed regions exist.

This is a *distributed scanning* task — there is no path to follow. The model has to look at the whole arrangement and tally every enclosed region.

## Avoiding ambiguity

The user requested that there be no very small or very thin regions, and that curve overlaps should not create ambiguous regions. The generator enforces this by:

- Stroking each loop with a generous thickness (visually clear).
- Rasterizing the figure and using a flood fill to find all interior connected components.
- Rejecting any sample where the smallest interior region is below a minimum area threshold (so no tiny slivers).
- Rejecting any sample where the bounding box of an interior region is unusually thin (so no needle-like regions).
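The flood-fill acceptance test described above can be sketched as follows. This is a minimal illustration, not the generator's actual code; the mask convention and the parameter names `min_area` and `min_aspect` are assumptions for the sketch:

```python
import numpy as np

def count_interior_regions(ink, min_area, min_aspect):
    """Count background regions not reachable from the canvas border.

    ink: boolean array, True where a curve pixel is drawn.
    Returns the region count, or None if any interior region is smaller
    than min_area or has a needle-like bounding box (aspect < min_aspect).
    """
    h, w = ink.shape
    label = np.zeros((h, w), dtype=int)  # 0 = unvisited background

    def flood(sr, sc, lab):
        # Iterative 4-connected flood fill; returns the visited cells.
        stack, cells = [(sr, sc)], []
        label[sr, sc] = lab
        while stack:
            r, c = stack.pop()
            cells.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < h and 0 <= nc < w and not ink[nr, nc] and label[nr, nc] == 0:
                    label[nr, nc] = lab
                    stack.append((nr, nc))
        return cells

    # Mark the unbounded outside by flooding from every border background pixel.
    border = [(r, c) for r in range(h) for c in (0, w - 1)]
    border += [(r, c) for c in range(w) for r in (0, h - 1)]
    for r, c in border:
        if not ink[r, c] and label[r, c] == 0:
            flood(r, c, -1)

    # Remaining background components are interior (enclosed) regions.
    count = 0
    for r in range(h):
        for c in range(w):
            if not ink[r, c] and label[r, c] == 0:
                count += 1
                cells = flood(r, c, count)
                rows = [p[0] for p in cells]
                cols = [p[1] for p in cells]
                bh = max(rows) - min(rows) + 1
                bw = max(cols) - min(cols) + 1
                if len(cells) < min_area or min(bh, bw) / max(bh, bw) < min_aspect:
                    return None  # reject: tiny sliver or needle-like region
    return count
```

A single stroked ring yields one interior region; raising `min_area` above that region's pixel count makes the sample get rejected, which is the behaviour the bullet points above describe.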
## Question

"How many distinct enclosed regions (bounded faces) are visible in this image? An enclosed region is a maximal area that is fully surrounded by the drawn curves on every side, with no opening to the outside background. The unbounded outside area does not count. Each enclosed region should be counted exactly once, even if it is shaped irregularly. Count every enclosed region in the entire image and report the total as a positive integer."

## Generation

1. Pick `K` (number of loops), e.g. 3-6.
2. For each loop, sample random center, semi-axes, and rotation.
3. Stroke each loop as a thick outline on the canvas.
4. Flood-fill the background from the canvas border.
5. Find all interior connected components (regions not reached by the outer flood).
6. Reject the sample if any region has area below `min_area` or has a thin bounding box.
7. GT = number of interior components.
code/distributed_scanning/bounded_faces_counting/creation.py
ADDED

@@ -0,0 +1,473 @@

from __future__ import annotations

import argparse
import json
import math
import random
from pathlib import Path
from typing import Dict, List, Tuple

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np


Cell = Tuple[int, int]


# ---------------------------------------------------------------------------
# Geometry helpers
# ---------------------------------------------------------------------------


def _cross(ox: float, oy: float, px: float, py: float, qx: float, qy: float) -> float:
    return (px - ox) * (qy - oy) - (py - oy) * (qx - ox)


def segments_intersect_properly(
    ax: float, ay: float, bx: float, by: float,
    cx: float, cy: float, dx: float, dy: float,
) -> bool:
    """True if segment AB *properly* crosses segment CD (shared endpoints don't count)."""
    d1 = _cross(cx, cy, dx, dy, ax, ay)
    d2 = _cross(cx, cy, dx, dy, bx, by)
    d3 = _cross(ax, ay, bx, by, cx, cy)
    d4 = _cross(ax, ay, bx, by, dx, dy)
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
            ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True
    return False


def point_seg_dist(px: float, py: float, ax: float, ay: float, bx: float, by: float) -> float:
    dx = bx - ax
    dy = by - ay
    len_sq = dx * dx + dy * dy
    if len_sq < 1e-12:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


# ---------------------------------------------------------------------------
# Union-Find
# ---------------------------------------------------------------------------


class UnionFind:
    def __init__(self, n: int) -> None:
        self.parent = list(range(n))
        self.rank = [0] * n
        self.num_sets = n

    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a: int, b: int) -> bool:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        self.num_sets -= 1
        return True


# ---------------------------------------------------------------------------
# Graph construction — planar, no-dot-crossing edge set
# ---------------------------------------------------------------------------


def place_dots(
    rng: random.Random,
    grid_rows: int,
    grid_cols: int,
    num_dots: int,
    min_gap: float,
    border_margin: int = 5,
    max_attempts: int = 8000,
) -> List[Cell]:
    cells: List[Cell] = []
    lo_r, hi_r = border_margin, grid_rows - border_margin
    lo_c, hi_c = border_margin, grid_cols - border_margin
    for _ in range(max_attempts):
        if len(cells) == num_dots:
            break
        r = rng.randint(lo_r, hi_r - 1)
        c = rng.randint(lo_c, hi_c - 1)
        if all(math.hypot(r - er, c - ec) >= min_gap for er, ec in cells):
            cells.append((r, c))
    return cells


def build_planar_edge_set(
    dots: List[Cell],
    dot_radius: float,
    max_edge_len: float,
) -> List[Tuple[int, int, float]]:
    n = len(dots)
    candidates: List[Tuple[float, int, int]] = []
    for i in range(n):
        ri, ci = dots[i]
        for j in range(i + 1, n):
            rj, cj = dots[j]
            d = math.hypot(ri - rj, ci - cj)
            if d > max_edge_len:
                continue
            clear = True
            for k in range(n):
                if k == i or k == j:
                    continue
                if point_seg_dist(dots[k][0], dots[k][1], ri, ci, rj, cj) < dot_radius + 0.8:
                    clear = False
                    break
            if clear:
                candidates.append((d, i, j))
    candidates.sort()

    accepted: List[Tuple[int, int, float]] = []
    seg_coords: List[Tuple[float, float, float, float]] = []
    for dist, i, j in candidates:
        ri, ci = dots[i]
        rj, cj = dots[j]
        crosses = False
        for ax, ay, bx, by in seg_coords:
            if (ri == ax and ci == ay) or (ri == bx and ci == by) or \
                    (rj == ax and cj == ay) or (rj == bx and cj == by):
                continue
            if segments_intersect_properly(ri, ci, rj, cj, ax, ay, bx, by):
                crosses = True
                break
        if not crosses:
            accepted.append((i, j, dist))
            seg_coords.append((float(ri), float(ci), float(rj), float(cj)))

    return accepted


# ---------------------------------------------------------------------------
# Graph construction for bounded faces
# ---------------------------------------------------------------------------


def build_graph_with_faces(
    rng: random.Random,
    n: int,
    planar_edges: List[Tuple[int, int, float]],
    target_faces: int,
) -> Tuple[List[Tuple[int, int]], int, int] | None:
    """Build a planar graph targeting a specific number of bounded faces.

    Strategy:
    1. Build a spanning forest from shuffled edges (connecting as many dots
       as possible into one component).
    2. Each additional intra-component edge beyond the spanning forest creates
       exactly one new bounded face (Euler: F_bounded = E - V + C).
    3. Add extra edges until we reach the target face count.

    Returns (edges, bounded_faces, num_components) or None if infeasible.
    """
    uf = UnionFind(n)

    # Shuffle edges, biased toward shorter ones
    mid = len(planar_edges) // 2
    short = list(planar_edges[:mid])
    long = list(planar_edges[mid:])
    rng.shuffle(short)
    rng.shuffle(long)
    shuffled = short + long

    # Phase 1: build spanning forest (connect everything into one component ideally)
    tree_edges: List[Tuple[int, int]] = []
    extra_edges: List[Tuple[int, int]] = []

    for i, j, d in shuffled:
        if uf.find(i) != uf.find(j):
            uf.union(i, j)
            tree_edges.append((i, j))
        else:
            extra_edges.append((i, j))

    num_components = uf.num_sets

    # Phase 2: add extra edges to create faces
    # bounded_faces = E - V + C = (tree + extra) - V + C
    # spanning forest has V - C edges, so faces = num_extra_added
    rng.shuffle(extra_edges)

    if len(extra_edges) < target_faces:
        return None

    selected_edges = tree_edges + extra_edges[:target_faces]
    bounded_faces = target_faces

    return selected_edges, bounded_faces, num_components


# ---------------------------------------------------------------------------
# Instance sampling
# ---------------------------------------------------------------------------

QUESTION = (
    "How many distinct enclosed regions (bounded faces) are visible in this image? "
    "An enclosed region is a maximal area that is fully surrounded by the drawn "
    "line segments on every side, with no opening to the outside background. The "
    "unbounded outside area does not count as an enclosed region. Each enclosed "
    "region should be counted exactly once, regardless of its shape. Count every "
    "enclosed region in the entire image and report the total as a positive integer. "
    "Provide your final answer enclosed in <answer>...</answer> tags."
)


def sample_instance(
    rng: random.Random,
    width: int,
    height: int,
    grid_rows: int,
    grid_cols: int,
    min_faces: int,
    max_faces: int,
    num_dots_min: int,
    num_dots_max: int,
    min_gap: float,
    dot_radius: float,
    max_edge_len: float,
) -> Dict[str, object] | None:
    target_faces = rng.randint(min_faces, max_faces)
    num_dots = rng.randint(num_dots_min, num_dots_max)

    dots = place_dots(rng, grid_rows, grid_cols, num_dots, min_gap)
    if len(dots) < 10:
        return None

    planar_edges = build_planar_edge_set(dots, dot_radius, max_edge_len)

    result = build_graph_with_faces(rng, len(dots), planar_edges, target_faces)
    if result is None:
        return None

    edges, bounded_faces, num_components = result

    if bounded_faces < min_faces:
        return None

    margin = int(min(width, height) * 0.10)
    square_size = min(width, height) - 2 * margin
    square_left = (width - square_size) / 2.0
    square_top = (height - square_size) / 2.0

    return {
        "width": width,
        "height": height,
        "grid_rows": grid_rows,
        "grid_cols": grid_cols,
        "square_left": round(square_left, 2),
        "square_top": round(square_top, 2),
        "square_size": round(square_size, 2),
        "num_dots": len(dots),
        "num_edges": len(edges),
        "num_components": num_components,
        "question": QUESTION,
        "answer": bounded_faces,
        "dots": [[r, c] for r, c in dots],
        "edges": [[i, j] for i, j in edges],
        "dot_radius": dot_radius,
    }


# ---------------------------------------------------------------------------
# Rendering (matplotlib — smooth anti-aliased output)
# ---------------------------------------------------------------------------

LINE_COLOR = "#2f2f2f"
DOT_COLOR = "#1d1916"


def render_instance(out_path: Path, record: Dict[str, object], noise_seed: int = 0) -> None:
    width = int(record["width"])
    height = int(record["height"])
    grid_rows = int(record["grid_rows"])
    grid_cols = int(record["grid_cols"])
    square_left = float(record["square_left"])
    square_top = float(record["square_top"])
    square_size = float(record["square_size"])
    dots: List[List[int]] = record["dots"]  # type: ignore[assignment]
    edges: List[List[int]] = record["edges"]  # type: ignore[assignment]
    dot_radius_grid = float(record["dot_radius"])

    cell_w = square_size / grid_cols
    cell_h = square_size / grid_rows

    def to_pixel(r: float, c: float) -> Tuple[float, float]:
        px = square_left + (c + 0.5) * cell_w
        py = square_top + (r + 0.5) * cell_h
        return px, py

    pixel_dot_radius = dot_radius_grid * min(cell_w, cell_h) * 0.5
    edge_thickness = max(1.5, pixel_dot_radius * 0.3)

    fig = plt.figure(figsize=(width / 100, height / 100), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.set_xlim(0, width)
    ax.set_ylim(height, 0)
    ax.axis("off")
    ax.set_facecolor("#f8f6f0")

    # Subtle noise background
    nrng = np.random.default_rng(noise_seed)
    noise = nrng.normal(0.0, 1.0, size=(height, width))
    noise = (noise - noise.min()) / max(noise.max() - noise.min(), 1e-6)
    ax.imshow(noise, cmap="Greys", alpha=0.05, extent=(0, width, height, 0),
              interpolation="bilinear")

    # White square background
    ax.fill_between(
        [square_left, square_left + square_size],
        [square_top, square_top],
        [square_top + square_size, square_top + square_size],
        color="#fffdf8", zorder=0.5,
    )

    # Border
    bx = [square_left, square_left + square_size, square_left + square_size, square_left, square_left]
    by = [square_top, square_top, square_top + square_size, square_top + square_size, square_top]
    ax.plot(bx, by, color="#2d2720", linewidth=2.0, solid_capstyle="round", zorder=1.0)

    # Plain (v4_plain): solid edges. No dashed-line anti-shortcut.
    for i, j in edges:
        px1, py1 = to_pixel(dots[i][0], dots[i][1])
        px2, py2 = to_pixel(dots[j][0], dots[j][1])
        ax.plot([px1, px2], [py1, py2],
                color=LINE_COLOR, linewidth=edge_thickness,
                solid_capstyle="round", alpha=0.92, zorder=2.0)

    # Dots on top
    for r, c in dots:
        px, py = to_pixel(r, c)
        circle = plt.Circle((px, py), pixel_dot_radius, color=DOT_COLOR, zorder=3.0)
        ax.add_patch(circle)

    fig.savefig(out_path, dpi=100, bbox_inches="tight", pad_inches=0)
    plt.close(fig)


# ---------------------------------------------------------------------------
# Dataset generation
# ---------------------------------------------------------------------------


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--output-root", type=Path, required=True)
    parser.add_argument("--count", type=int, default=30)
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--width", type=int, default=1024)
    parser.add_argument("--height", type=int, default=1024)
    parser.add_argument("--grid-rows", type=int, default=100)
    parser.add_argument("--grid-cols", type=int, default=100)
    parser.add_argument("--min-faces", type=int, default=4)
    parser.add_argument("--max-faces", type=int, default=15)
    parser.add_argument("--num-dots-min", type=int, default=60)
    parser.add_argument("--num-dots-max", type=int, default=120)
    parser.add_argument("--min-gap", type=float, default=5.0)
    parser.add_argument("--dot-radius", type=float, default=1.5)
    parser.add_argument("--max-edge-len", type=float, default=25.0)
    parser.add_argument("--difficulty", type=int, default=5,
                        help="Integer difficulty >=0; scales faces and dot count.")
    args = parser.parse_args()

    d = max(0, int(args.difficulty))
    # Difficulty scaling per spec
    args.min_faces = 5
    args.max_faces = 5 + 2 * d
    args.num_dots_min = 10 * d
    args.num_dots_max = 20 + 10 * d
    base_max_edge_len = args.max_edge_len
    args.max_edge_len = base_max_edge_len / (1.0 + 0.08 * d)

    # Canvas scaling based on num_dots_max growth
    N_d = 20 + 10 * d
    N_0 = 20
    s = math.sqrt(max(1.0, N_d / N_0))
    args.width = int(round(args.width * s))
    args.height = int(round(args.height * s))

    out_root: Path = args.output_root
    img_dir = out_root / "images"
    img_dir.mkdir(parents=True, exist_ok=True)
    ann_path = out_root / "annotations.jsonl"

    rng = random.Random(args.seed)
    records = []
    # Force evenly-spaced answers across [min_faces, max_faces].
    if args.count > 1:
        forced_targets = [
            int(round(args.min_faces + i * (args.max_faces - args.min_faces) / (args.count - 1)))
            for i in range(args.count)
        ]
    else:
        forced_targets = [args.min_faces]
    print(f"forced face counts: {forced_targets}")
    with ann_path.open("w") as f:
        for i in range(args.count):
            sub_seed = rng.randint(0, 2**31 - 1)
            tgt = forced_targets[i]
            for _ in range(2000):
                record = sample_instance(
                    rng=rng,
                    width=args.width,
                    height=args.height,
                    grid_rows=args.grid_rows,
                    grid_cols=args.grid_cols,
                    min_faces=tgt,
                    max_faces=tgt,
                    num_dots_min=args.num_dots_min,
                    num_dots_max=args.num_dots_max,
                    min_gap=args.min_gap,
                    dot_radius=args.dot_radius,
                    max_edge_len=args.max_edge_len,
                )
                if record is not None and record.get("answer") == tgt:
                    break
            else:
                print(f"  [{i+1}/{args.count}] SKIP (failed to generate)")
                continue

            name = f"bounded_faces_counting_{i:05d}.png"
            render_instance(img_dir / name, record, noise_seed=sub_seed)
            print(f"  [{i+1}/{args.count}] faces={record['answer']} dots={record['num_dots']} edges={record['num_edges']}")

            rec = {
                "image": f"images/{name}",
                "question": QUESTION,
                "answer": record["answer"],
                "metadata": {
                    "bounded_faces": record["answer"],
                    "num_dots": record["num_dots"],
                    "num_edges": record["num_edges"],
                    "num_components": record["num_components"],
                    "seed": sub_seed,
                },
            }
            f.write(json.dumps(rec) + "\n")
            records.append(rec)

    data_json = {
        "task": "bounded_faces_counting",
        "category": "distributed_scanning",
        "count": len(records),
        "items": records,
    }
    (out_root / "data.json").write_text(json.dumps(data_json, indent=2))
    print(f"Saved to {out_root}")


if __name__ == "__main__":
    main()
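The Euler relation quoted in the docstring of `build_graph_with_faces` (bounded faces = E - V + C for a planar graph drawn without edge crossings) can be sanity-checked independently of the generator. The helper below is illustrative, not part of the script; it counts connected components with a minimal union-find and applies the formula:

```python
def euler_bounded_faces(num_vertices, edges):
    # F_bounded = E - V + C, where C is the number of connected components
    # of the planar graph (drawn with no crossing edges).
    parent = list(range(num_vertices))

    def find(x):
        # Find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = num_vertices
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return len(edges) - num_vertices + components
```

A triangle gives one bounded face; a square with one diagonal gives two; two disjoint triangles give two, which matches the script's rule that each extra intra-component edge beyond the spanning forest adds exactly one face.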
code/distributed_scanning/bounded_faces_counting/data.json
ADDED
|
@@ -0,0 +1,67 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
{
  "task": "bounded_faces_counting",
  "category": "distributed_scanning",
  "count": 5,
  "items": [
    {
      "image": "images/bounded_faces_counting_00000.png",
      "question": "How many distinct enclosed regions (bounded faces) are visible in this image? An enclosed region is a maximal area that is fully surrounded by drawn lines or strands on every side, with no opening to the outside background. The unbounded outside area does not count. Each enclosed region should be counted exactly once. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 5,
      "metadata": {
        "bounded_faces": 5,
        "num_dots": 58,
        "num_edges": 62,
        "num_components": 1,
        "seed": 478163327
      }
    },
    {
      "image": "images/bounded_faces_counting_00001.png",
      "question": "How many distinct enclosed regions (bounded faces) are visible in this image? An enclosed region is a maximal area that is fully surrounded by drawn lines or strands on every side, with no opening to the outside background. The unbounded outside area does not count. Each enclosed region should be counted exactly once. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 8,
      "metadata": {
        "bounded_faces": 8,
        "num_dots": 69,
        "num_edges": 76,
        "num_components": 1,
        "seed": 798112150
      }
    },
    {
      "image": "images/bounded_faces_counting_00002.png",
      "question": "How many distinct enclosed regions (bounded faces) are visible in this image? An enclosed region is a maximal area that is fully surrounded by drawn lines or strands on every side, with no opening to the outside background. The unbounded outside area does not count. Each enclosed region should be counted exactly once. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 10,
      "metadata": {
        "bounded_faces": 10,
        "num_dots": 68,
        "num_edges": 77,
        "num_components": 1,
        "seed": 110287971
      }
    },
    {
      "image": "images/bounded_faces_counting_00003.png",
      "question": "How many distinct enclosed regions (bounded faces) are visible in this image? An enclosed region is a maximal area that is fully surrounded by drawn lines or strands on every side, with no opening to the outside background. The unbounded outside area does not count. Each enclosed region should be counted exactly once. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 12,
      "metadata": {
        "bounded_faces": 12,
        "num_dots": 60,
        "num_edges": 70,
        "num_components": 2,
        "seed": 1411178445
      }
    },
    {
      "image": "images/bounded_faces_counting_00004.png",
      "question": "How many distinct enclosed regions (bounded faces) are visible in this image? An enclosed region is a maximal area that is fully surrounded by drawn lines or strands on every side, with no opening to the outside background. The unbounded outside area does not count. Each enclosed region should be counted exactly once. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 15,
      "metadata": {
        "bounded_faces": 15,
        "num_dots": 61,
        "num_edges": 75,
        "num_components": 1,
        "seed": 963124353
      }
    }
  ]
}
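A note on internal consistency: for a planar drawing, Euler's formula gives bounded faces = edges − vertices + components, and all five metadata records in the data.json above satisfy it. A quick standalone check (the tuples below are copied from the metadata; the formula is the standard one, not code from this repo):

```python
# Euler's formula for a planar graph with C connected components:
#   V - E + F_total = 1 + C, hence F_bounded = E - V + C.
records = [
    # (num_dots, num_edges, num_components, bounded_faces) from data.json
    (58, 62, 1, 5),
    (69, 76, 1, 8),
    (68, 77, 1, 10),
    (60, 70, 2, 12),
    (61, 75, 1, 15),
]
for v, e, c, faces in records:
    assert e - v + c == faces, (v, e, c, faces)
print("all 5 records consistent with Euler's formula")
```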
code/distributed_scanning/counting_connected_components/creation.md
ADDED
@@ -0,0 +1,13 @@
# Counting Connected Components Dataset Generation

## Goal

Generate images of a large grid (e.g., 80x80 to 150x150) where scattered dots are connected by lines to form several distinct connected components — groups of dots linked together by drawn edges.

The diameter of each dot should be about 3 unit lengths. Enforce a minimum gap of at least 5 unit lengths between any two dots so they are unambiguously separated.

A component need not stay within a single region: dots from different components may be spatially interleaved, but each dot is only connected by edges to dots in its own component.

A connecting edge must not pass across any other dot, to avoid ambiguity.

Render PNG files; the question asks how many connected components there are.
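The ground-truth answer for each rendered image is the number of connected components of the underlying dot graph. A minimal sketch of that count via union-find (a standalone illustration; the repo's generator uses its own `UnionFind` class):

```python
def count_components(num_dots, edges):
    """Count connected components of a graph given as (i, j) edge pairs."""
    parent = list(range(num_dots))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = num_dots
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            components -= 1  # each successful merge removes one component
    return components

# 6 dots forming two triangles -> 2 components
print(count_components(6, [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]))  # -> 2
```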
code/distributed_scanning/counting_connected_components/creation.py
ADDED
@@ -0,0 +1,574 @@
from __future__ import annotations

import argparse
import json
import math
import random
from pathlib import Path
from typing import Dict, List, Tuple

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np


Cell = Tuple[int, int]


# ---------------------------------------------------------------------------
# Geometry helpers
# ---------------------------------------------------------------------------


def _cross(ox: float, oy: float, px: float, py: float, qx: float, qy: float) -> float:
    return (px - ox) * (qy - oy) - (py - oy) * (qx - ox)


def segments_intersect_properly(
    ax: float, ay: float, bx: float, by: float,
    cx: float, cy: float, dx: float, dy: float,
) -> bool:
    """True if segment AB *properly* crosses segment CD (shared endpoints don't count)."""
    d1 = _cross(cx, cy, dx, dy, ax, ay)
    d2 = _cross(cx, cy, dx, dy, bx, by)
    d3 = _cross(ax, ay, bx, by, cx, cy)
    d4 = _cross(ax, ay, bx, by, dx, dy)
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
            ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True
    return False


def point_seg_dist(px: float, py: float, ax: float, ay: float, bx: float, by: float) -> float:
    dx = bx - ax
    dy = by - ay
    len_sq = dx * dx + dy * dy
    if len_sq < 1e-12:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


# ---------------------------------------------------------------------------
# Union-Find
# ---------------------------------------------------------------------------


class UnionFind:
    def __init__(self, n: int) -> None:
        self.parent = list(range(n))
        self.rank = [0] * n
        self.size = [1] * n
        self.num_sets = n

    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def set_size(self, x: int) -> int:
        return self.size[self.find(x)]

    def union(self, a: int, b: int) -> bool:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        self.num_sets -= 1
        return True


# ---------------------------------------------------------------------------
# Graph construction — planar, no-dot-crossing edge set
# ---------------------------------------------------------------------------


def place_dots(
    rng: random.Random,
    grid_rows: int,
    grid_cols: int,
    num_dots: int,
    min_gap: float,
    border_margin: int = 5,
    max_attempts: int = 8000,
) -> List[Cell]:
    cells: List[Cell] = []
    lo_r, hi_r = border_margin, grid_rows - border_margin
    lo_c, hi_c = border_margin, grid_cols - border_margin
    for _ in range(max_attempts):
        if len(cells) == num_dots:
            break
        r = rng.randint(lo_r, hi_r - 1)
        c = rng.randint(lo_c, hi_c - 1)
        if all(math.hypot(r - er, c - ec) >= min_gap for er, ec in cells):
            cells.append((r, c))
    return cells


def build_planar_edge_set(
    dots: List[Cell],
    dot_radius: float,
    max_edge_len: float,
) -> List[Tuple[int, int, float]]:
    """
    Build a set of edges that:
      1. Are shorter than max_edge_len
      2. Don't pass within dot_radius of any other dot
      3. Don't cross each other (planar)
    Returns list of (i, j, dist) sorted by dist, with the planar filter applied.
    """
    n = len(dots)

    # Step 1: candidate edges sorted by length, filtered by dot clearance
    candidates: List[Tuple[float, int, int]] = []
    for i in range(n):
        ri, ci = dots[i]
        for j in range(i + 1, n):
            rj, cj = dots[j]
            d = math.hypot(ri - rj, ci - cj)
            if d > max_edge_len:
                continue
            # Check this edge doesn't pass near any other dot
            clear = True
            for k in range(n):
                if k == i or k == j:
                    continue
                if point_seg_dist(dots[k][0], dots[k][1], ri, ci, rj, cj) < dot_radius + 0.8:
                    clear = False
                    break
            if clear:
                candidates.append((d, i, j))
    candidates.sort()

    # Step 2: greedily add edges, skip if they cross an already-added edge
    accepted: List[Tuple[int, int, float]] = []
    # For fast crossing checks, store segment coords
    seg_coords: List[Tuple[float, float, float, float]] = []

    for dist, i, j in candidates:
        ri, ci = dots[i]
        rj, cj = dots[j]
        crosses = False
        for ax, ay, bx, by in seg_coords:
            # Skip if shared endpoint
            if (ri == ax and ci == ay) or (ri == bx and ci == by) or \
                    (rj == ax and cj == ay) or (rj == bx and cj == by):
                continue
            if segments_intersect_properly(ri, ci, rj, cj, ax, ay, bx, by):
                crosses = True
                break
        if not crosses:
            accepted.append((i, j, dist))
            seg_coords.append((float(ri), float(ci), float(rj), float(cj)))

    return accepted


def build_spanning_forest(
    rng: random.Random,
    n: int,
    planar_edges: List[Tuple[int, int, float]],
    num_components: int,
) -> Tuple[List[Tuple[int, int]], List[List[int]]]:
    """
    From the planar edge set, build a spanning forest with exactly
    num_components trees using Union-Find.

    Strategy: shuffle edges, greedily merge until we reach the target
    number of components. Then collect extra intra-component edges.
    """
    uf = UnionFind(n)

    # We need to reduce n sets down to num_components, so we need n - num_components merges.
    target_merges = n - num_components

    # Cap: no single component should exceed ~2x the ideal even share.
    max_component_size = max(4, (n // num_components) * 2)

    # Shuffle edges but bias toward shorter ones: split into short/long halves,
    # shuffle each, concatenate.
    mid = len(planar_edges) // 2
    short = list(planar_edges[:mid])
    long = list(planar_edges[mid:])
    rng.shuffle(short)
    rng.shuffle(long)
    shuffled = short + long

    tree_edges: List[Tuple[int, int]] = []
    deferred: List[Tuple[int, int, float]] = []
    extra_edges: List[Tuple[int, int]] = []

    for i, j, d in shuffled:
        if uf.find(i) != uf.find(j):
            if len(tree_edges) < target_merges:
                merged_size = uf.set_size(i) + uf.set_size(j)
                if merged_size <= max_component_size:
                    uf.union(i, j)
                    tree_edges.append((i, j))
                else:
                    deferred.append((i, j, d))
            else:
                extra_edges.append((i, j))
        else:
            extra_edges.append((i, j))

    # Second pass: use deferred edges if we still need merges
    for i, j, _d in deferred:
        if len(tree_edges) >= target_merges:
            break
        if uf.find(i) != uf.find(j):
            uf.union(i, j)
            tree_edges.append((i, j))

    # Add some extra intra-component edges for visual richness
    # Only pick edges where both endpoints are already in the same component
    intra_edges = [(i, j) for i, j in extra_edges if uf.find(i) == uf.find(j)]
    rng.shuffle(intra_edges)
    bonus = min(len(intra_edges), max(n // 6, 3))
    tree_edges.extend(intra_edges[:bonus])

    # Build component membership
    comp_map: Dict[int, List[int]] = {}
    for node in range(n):
        root = uf.find(node)
        comp_map.setdefault(root, []).append(node)
    components = list(comp_map.values())

    return tree_edges, components


# ---------------------------------------------------------------------------
# Instance sampling
# ---------------------------------------------------------------------------


def sample_instance(
    rng: random.Random,
    width: int,
    height: int,
    grid_rows: int,
    grid_cols: int,
    min_components: int,
    max_components: int,
    num_dots_min: int,
    num_dots_max: int,
    min_gap: float,
    dot_radius: float,
    max_edge_len: float,
    close_strand_tolerance: float = 0.0,
) -> Dict[str, object] | None:
    num_components = rng.randint(min_components, max_components)
    num_dots = rng.randint(max(num_dots_min, num_components * 2), num_dots_max)

    dots = place_dots(rng, grid_rows, grid_cols, num_dots, min_gap)
    if len(dots) < num_components * 2:
        return None

    planar_edges = build_planar_edge_set(dots, dot_radius, max_edge_len)

    # Check we have enough edges to connect dots into num_components trees
    # (need at least len(dots) - num_components edges in a spanning forest)
    if len(planar_edges) < len(dots) - num_components:
        return None

    edges, components = build_spanning_forest(rng, len(dots), planar_edges, num_components)

    # Reject if any component has fewer than 2 dots
    if any(len(c) < 2 for c in components):
        return None

    actual_components = len(components)

    if actual_components < min_components:
        return None

    # Enforce close_strand_tolerance: dots from different components must be
    # at least this many grid units apart (visual separation).
    if close_strand_tolerance > 0:
        node_to_comp = {}
        for ci, comp in enumerate(components):
            for node in comp:
                node_to_comp[node] = ci
        for a in range(len(dots)):
            ra, ca = dots[a]
            for b in range(a + 1, len(dots)):
                if node_to_comp[a] == node_to_comp[b]:
                    continue
                rb, cb = dots[b]
                if math.hypot(ra - rb, ca - cb) < close_strand_tolerance:
                    return None

    margin = int(min(width, height) * 0.10)
    square_size = min(width, height) - 2 * margin
    square_left = (width - square_size) / 2.0
    square_top = (height - square_size) / 2.0

    return {
        "width": width,
        "height": height,
        "grid_rows": grid_rows,
        "grid_cols": grid_cols,
        "square_left": round(square_left, 2),
        "square_top": round(square_top, 2),
        "square_size": round(square_size, 2),
        "num_components": actual_components,
        "num_dots": len(dots),
        "question": (
            "How many connected components are there in the image? "
            "A connected component is a maximal group of dots such that any two dots "
            "in the group are linked by a path of one or more drawn line segments "
            "(directly or through other dots in the same group). "
            "Every component contains at least two dots. "
            "Two dots that are not linked by any chain of line segments "
            "belong to different components, even if they appear visually close. "
            "Count every connected component and report the total. "
            "Provide your final answer enclosed in <answer>...</answer> tags."
        ),
        "answer": actual_components,
        "dots": [[r, c] for r, c in dots],
        "components": components,
        "edges": [[i, j] for i, j in edges],
        "dot_radius": dot_radius,
    }


# ---------------------------------------------------------------------------
# Rendering (matplotlib — smooth anti-aliased output)
# ---------------------------------------------------------------------------

LINE_COLOR = "#2f2f2f"
DOT_COLOR = "#1d1916"


def render_instance(out_path: Path, record: Dict[str, object], noise_seed: int = 0) -> None:
    width = int(record["width"])
    height = int(record["height"])
    grid_rows = int(record["grid_rows"])
    grid_cols = int(record["grid_cols"])
    square_left = float(record["square_left"])
    square_top = float(record["square_top"])
    square_size = float(record["square_size"])
    dots: List[List[int]] = record["dots"]  # type: ignore[assignment]
    edges: List[List[int]] = record["edges"]  # type: ignore[assignment]
    dot_radius_grid = float(record["dot_radius"])

    cell_w = square_size / grid_cols
    cell_h = square_size / grid_rows

    def to_pixel(r: float, c: float) -> Tuple[float, float]:
        px = square_left + (c + 0.5) * cell_w
        py = square_top + (r + 0.5) * cell_h
        return px, py

    pixel_dot_radius = dot_radius_grid * min(cell_w, cell_h) * 0.5
    edge_thickness = max(1.5, pixel_dot_radius * 0.3)

    fig = plt.figure(figsize=(width / 100, height / 100), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.set_xlim(0, width)
    ax.set_ylim(height, 0)
    ax.axis("off")
    ax.set_facecolor("#f8f6f0")

    # Subtle noise background
    nrng = np.random.default_rng(noise_seed)
    noise = nrng.normal(0.0, 1.0, size=(height, width))
    noise = (noise - noise.min()) / max(noise.max() - noise.min(), 1e-6)
    ax.imshow(noise, cmap="Greys", alpha=0.05, extent=(0, width, height, 0),
              interpolation="bilinear")

    # White square background
    ax.fill_between(
        [square_left, square_left + square_size],
        [square_top, square_top],
        [square_top + square_size, square_top + square_size],
        color="#fffdf8", zorder=0.5,
    )

    # Border
    border_lw = 2.0
    bx = [square_left, square_left + square_size, square_left + square_size, square_left, square_left]
    by = [square_top, square_top, square_top + square_size, square_top + square_size, square_top]
    ax.plot(bx, by, color="#2d2720", linewidth=border_lw, solid_capstyle="round", zorder=1.0)

    # Plain (v4_plain): solid edges. No dashed-line anti-shortcut.
    for i, j in edges:
        px1, py1 = to_pixel(dots[i][0], dots[i][1])
        px2, py2 = to_pixel(dots[j][0], dots[j][1])
        ax.plot([px1, px2], [py1, py2],
                color=LINE_COLOR, linewidth=edge_thickness,
                solid_capstyle="round", alpha=0.92, zorder=2.0)

    # Dots on top
    for r, c in dots:
        px, py = to_pixel(r, c)
        circle = plt.Circle((px, py), pixel_dot_radius, color=DOT_COLOR, zorder=3.0)
        ax.add_patch(circle)

    fig.savefig(out_path, dpi=100, bbox_inches="tight", pad_inches=0)
    plt.close(fig)


# ---------------------------------------------------------------------------
# Dataset generation
# ---------------------------------------------------------------------------


def ensure_output_dir(root: Path) -> Tuple[Path, Path]:
    root.mkdir(parents=True, exist_ok=True)
    images_dir = root / "images"
    images_dir.mkdir(exist_ok=True)
    return root, images_dir


def generate_dataset(
    rng: random.Random,
    count: int,
    output_dir: Path,
    images_dir: Path,
    width: int,
    height: int,
    grid_rows: int,
    grid_cols: int,
    min_components: int,
    max_components: int,
    num_dots_min: int,
    num_dots_max: int,
    min_gap: float,
    dot_radius: float,
    max_edge_len: float,
    close_strand_tolerance: float = 0.0,
) -> None:
    records: List[Dict[str, object]] = []
    data_records: List[Dict[str, object]] = []

    # Force evenly-spaced answers across [min_components, max_components].
    if count > 1:
        forced_targets = [
            int(round(min_components + i * (max_components - min_components) / (count - 1)))
            for i in range(count)
        ]
    else:
        forced_targets = [min_components]
    print(f"forced component counts: {forced_targets}")

    for idx in range(count):
        sub_seed = rng.randint(0, 2**31 - 1)
        tgt = forced_targets[idx]
        for _ in range(2000):
            record = sample_instance(
                rng=rng,
                width=width,
                height=height,
                grid_rows=grid_rows,
                grid_cols=grid_cols,
                min_components=tgt,
                max_components=tgt,
                num_dots_min=num_dots_min,
                num_dots_max=num_dots_max,
                min_gap=min_gap,
                dot_radius=dot_radius,
                max_edge_len=max_edge_len,
                close_strand_tolerance=close_strand_tolerance,
            )
            if record is not None and record.get("answer") == tgt:
                break
        else:
            print(f"Warning: could not generate sample {idx}, skipping")
            continue

        image_name = f"counting_connected_components_{idx:05d}.png"
        render_instance(images_dir / image_name, record, noise_seed=sub_seed)
        record["image"] = f"images/{image_name}"
        records.append(record)
        data_records.append({
            "image": record["image"],
            "question": record["question"],
            "answer": record["answer"],
        })
        print(f" [{idx+1}/{count}] components={record['answer']} dots={record['num_dots']}")

    with (output_dir / "annotations.jsonl").open("w", encoding="utf-8") as fh:
        for record in records:
            fh.write(json.dumps(record) + "\n")

    data_json = {
        "task": "counting_connected_components",
        "category": "distributed_scanning",
        "count": len(data_records),
        "items": data_records,
    }
    with (output_dir / "data.json").open("w", encoding="utf-8") as fh:
        json.dump(data_json, fh, indent=2)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Generate a counting-connected-components dataset.")
    parser.add_argument("--output-root", type=Path, required=True, help="Dataset root directory.")
    parser.add_argument("--count", type=int, default=30)
    parser.add_argument("--width", type=int, default=1024)
    parser.add_argument("--height", type=int, default=1024)
    parser.add_argument("--grid-rows", type=int, default=100)
    parser.add_argument("--grid-cols", type=int, default=100)
    parser.add_argument("--min-components", type=int, default=4)
    parser.add_argument("--max-components", type=int, default=12)
    parser.add_argument("--num-dots-min", type=int, default=60)
    parser.add_argument("--num-dots-max", type=int, default=120)
    parser.add_argument("--min-gap", type=float, default=5.0)
    parser.add_argument("--dot-radius", type=float, default=1.5)
    parser.add_argument("--max-edge-len", type=float, default=25.0)
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--difficulty", type=int, default=5,
                        help="Integer difficulty >=0; scales components and dot count.")
    return parser.parse_args()


def main() -> None:
    args = parse_args()
    rng = random.Random(args.seed)
    output_dir, images_dir = ensure_output_dir(args.output_root)
    d = max(0, int(args.difficulty))
    # Difficulty scaling per spec
    min_components = 10
    max_components = 10 + 2 * d
    num_dots_min = 40
    num_dots_max = 40 + 20 * d
    close_strand_tolerance = float(max(3, 8 - d))

    # Canvas scaling based on num_dots_max growth
    N_d = 20 + 10 * d
    N_0 = 20
    s = math.sqrt(max(1.0, N_d / N_0))
    args.width = int(round(args.width * s))
    args.height = int(round(args.height * s))

    generate_dataset(
        rng=rng,
        count=args.count,
        output_dir=output_dir,
        images_dir=images_dir,
        width=args.width,
        height=args.height,
        grid_rows=args.grid_rows,
        grid_cols=args.grid_cols,
        min_components=min_components,
        max_components=max_components,
        num_dots_min=num_dots_min,
        num_dots_max=num_dots_max,
        min_gap=args.min_gap,
        dot_radius=args.dot_radius,
        max_edge_len=args.max_edge_len,
        close_strand_tolerance=close_strand_tolerance,
    )
    print(f"Saved dataset to {args.output_root}")


if __name__ == "__main__":
    main()
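For reference, the difficulty scaling in `main()` above can be reproduced in isolation. At the default `--difficulty 5` it yields the parameters below (same arithmetic as the script, re-derived here as a standalone sketch, not an import of the file):

```python
import math

d = 5  # the script's default --difficulty
max_components = 10 + 2 * d                      # component-count ceiling
num_dots_max = 40 + 20 * d                       # dot-count ceiling
close_strand_tolerance = float(max(3, 8 - d))    # inter-component gap (grid units)
scale = math.sqrt(max(1.0, (20 + 10 * d) / 20))  # canvas side multiplier
width = int(round(1024 * scale))                 # scaled from the 1024 px default

print(max_components, num_dots_max, close_strand_tolerance, width)  # -> 20 140 3.0 1916
```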
code/distributed_scanning/counting_connected_components/data.json
ADDED
@@ -0,0 +1,32 @@
| 1 |
+
{
  "task": "counting_connected_components",
  "category": "distributed_scanning",
  "count": 5,
  "items": [
    {
      "image": "images/counting_connected_components_00000.png",
      "question": "How many connected components are there in the image? A connected component is a maximal group of anchor points such that any two are linked by a path of one or more strands (directly or through other anchors in the same group). Anchors not linked by any chain belong to different components, even if they appear visually close. Count every connected component and report the total. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 10
    },
    {
      "image": "images/counting_connected_components_00001.png",
      "question": "How many connected components are there in the image? A connected component is a maximal group of anchor points such that any two are linked by a path of one or more strands (directly or through other anchors in the same group). Anchors not linked by any chain belong to different components, even if they appear visually close. Count every connected component and report the total. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 12
    },
    {
      "image": "images/counting_connected_components_00002.png",
      "question": "How many connected components are there in the image? A connected component is a maximal group of anchor points such that any two are linked by a path of one or more strands (directly or through other anchors in the same group). Anchors not linked by any chain belong to different components, even if they appear visually close. Count every connected component and report the total. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 15
    },
    {
      "image": "images/counting_connected_components_00003.png",
      "question": "How many connected components are there in the image? A connected component is a maximal group of anchor points such that any two are linked by a path of one or more strands (directly or through other anchors in the same group). Anchors not linked by any chain belong to different components, even if they appear visually close. Count every connected component and report the total. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 18
    },
    {
      "image": "images/counting_connected_components_00004.png",
      "question": "How many connected components are there in the image? A connected component is a maximal group of anchor points such that any two are linked by a path of one or more strands (directly or through other anchors in the same group). Anchors not linked by any chain belong to different components, even if they appear visually close. Count every connected component and report the total. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 20
    }
  ]
}
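Each question embeds the definition used to produce the ground truth. A minimal sketch of that definition as a BFS count follows; the anchor indices and strand pairs here are hypothetical inputs, not the generator's internal data structures:

```python
from collections import deque

def count_components(num_anchors, strands):
    """Count maximal groups of anchors linked by strand paths, following
    the definition stated in the question text."""
    adj = {i: [] for i in range(num_anchors)}
    for a, b in strands:
        adj[a].append(b)
        adj[b].append(a)
    seen = set()
    components = 0
    for start in range(num_anchors):
        if start in seen:
            continue
        # New component: flood through every anchor reachable from `start`.
        components += 1
        seen.add(start)
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return components
```

Isolated anchors each count as their own component, matching the "even if they appear visually close" clause.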
code/distributed_scanning/counting_regions/creation.md
ADDED
@@ -0,0 +1,170 @@
# Counting Regions Dataset Generation

## Goal

Create tasks where the model counts separated regions in an image. Each
region inside the central square is filled with a distinctive
gradient + texture combination, and adjacent regions are guaranteed to
look visually different so a human can tell them apart.

The image always contains a square in the central area. Inside the
square the canvas is partitioned into irregular regions; the area
outside the square stays plain.

## Core Task

Given an image containing one central square, answer:

- `How many separated regions are inside the square?`

The answer is a positive integer.

## Visual Structure

- Single square placed near the centre of the canvas with a thin dark
  border. Everything outside is a plain off-white background.
- Inside the square the area is partitioned into irregular regions.
- Each region is rendered with two cues at once:
  1. A linear two-colour gradient. The two endpoint colours are drawn
     from a fixed palette, but each region perturbs them in HSV space
     so no two regions render identical colours globally — colour
     histograms cannot recover the count.
  2. A texture pattern (speckle, horizontal / vertical / diagonal
     stripes, dots, or perlin-like band) modulated multiplicatively on
     top of the gradient. Adjacent regions are forced to use different
     texture styles where possible.
- Adjacency constraints: for every pair of touching regions, their
  unordered colour pairs differ AND there is at least ~55° of hue
  separation at the closest pairing. This guarantees a human-readable
  contrast across every shared boundary.
- A low-amplitude global speckle is added across the whole canvas so
  Canny / Sobel edge detectors fire uniformly inside regions and not
  only at boundaries — this defeats the simple "boundary-detect →
  flood-fill" attack.
- No drawn boundary lines. Region discrimination relies on colour
  discontinuity and texture-style change, not strokes.

Recommended defaults:

- Image canvas `1024×1024`
- Painted square `820×820`, centred
- Region-synthesis grid `50×50` (low resolution → upsampled with smooth
  per-region masks)
- Number of final regions in the range `6-12`

## Generation Procedure

### 1. Sample target count and partition

- Sample target `K` in `[min_regions, max_regions]`.
- Choose `K` spaced seed cells on a 50×50 lattice.
- Grow connected regions from the seeds via priority-queue expansion,
  weighted by per-region noise fields so boundaries are organic.
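The priority-queue growth can be sketched as follows. This is a simplified, self-contained version: distance-to-seed plus uniform jitter stands in for the per-region noise fields and running centroids used by the full generator in `creation.py`:

```python
import heapq
import random

def grow_partition(rows, cols, seeds, rng):
    """Grow one connected region per seed by repeatedly expanding the
    cheapest frontier cell; every cell ends up labelled."""
    labels = [[-1] * cols for _ in range(rows)]
    heap = []  # entries: (cost, region_id, row, col)
    for rid, (r, c) in enumerate(seeds):
        labels[r][c] = rid
        heapq.heappush(heap, (0.0, rid, r, c))
    while heap:
        _, rid, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == -1:
                # Squared distance to the seed plus random jitter; the
                # jitter makes boundaries wobble instead of being Voronoi.
                sr, sc = seeds[rid]
                cost = (nr - sr) ** 2 + (nc - sc) ** 2 + rng.random() * 5.0
                labels[nr][nc] = rid
                heapq.heappush(heap, (cost, rid, nr, nc))
    return labels
```

Because each cell is claimed only from an already-claimed neighbour of the same region, every region stays 4-connected by construction.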
### 2. Upsample and clean

- Upsample the label map from 50×50 to canvas resolution by
  per-region soft-mask interpolation + argmax (smooth boundaries
  without aliasing).
- Absorb tiny connected-component slivers (< 0.2 % of canvas) into
  their dominant neighbour label.
- Relabel region ids contiguously after cleanup.
- **Reject** the sample if the smallest remaining region covers less
  than `min_region_frac` (default 2.5 %) of the canvas — this avoids
  tiny near-invisible regions slipping through.

### 3. Assign gradient + texture per region

- Build the region adjacency graph at canvas resolution.
- Greedy assignment over regions (descending degree first):
  - Pick `(colour_a, colour_b)` from the palette such that no
    neighbour shares the same unordered pair AND `pair_hue_gap` to
    every neighbour is at least `hue_gap_min` (default 55°).
  - Apply per-region HSV jitter (≈±18° hue, ±0.12 sat, ±0.10 val) to
    the two endpoints.
  - Pick a random gradient angle.
  - Assign a texture style + parameters per region, preferring styles
    not used by already-assigned neighbours.

### 4. Render

- For each region, paint the linear gradient between its two jittered
  endpoint colours along the chosen angle.
- Multiply by `(1 + amp · texture)` per pixel to add the per-region
  texture modulation.
- Multiply by `(1 + 0.05 · global_speckle)` to add the canvas-wide
  high-frequency noise that drowns Canny boundary detection.
- Composite the painted square onto the off-white canvas with a thin
  dark border.
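The three render stages compose multiplicatively. A minimal sketch for a single region follows; the function name and the flat `texture` / `speckle` arrays are illustrative, not the generator's actual API:

```python
import numpy as np

def render_region(h, w, color_a, color_b, angle, texture, amp=0.12,
                  speckle=None):
    """Compose one region: linear gradient, then per-region texture
    modulation, then optional canvas-wide speckle."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Project each pixel onto the gradient axis and normalise to [0, 1].
    t = xs * np.cos(angle) + ys * np.sin(angle)
    t = (t - t.min()) / max(t.max() - t.min(), 1e-6)
    ca = np.asarray(color_a, dtype=np.float32)
    cb = np.asarray(color_b, dtype=np.float32)
    img = (1.0 - t)[..., None] * ca + t[..., None] * cb
    img *= (1.0 + amp * texture)[..., None]        # per-region texture
    if speckle is not None:
        img *= (1.0 + 0.05 * speckle)[..., None]   # canvas-wide noise
    return np.clip(img, 0, 255).astype(np.uint8)
```

With `angle = 0` the gradient runs left to right, so the leftmost column renders `color_a` and the rightmost `color_b`.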
## Quality Checks

Reject or regenerate samples if:

- after cleanup the actual region count drops below 2
- any region is smaller than `min_region_frac` of the canvas
- gradient assignment fails to satisfy the adjacency constraints

The final `answer` field always reflects the post-cleanup region
count, which may differ from the sampled target `K`.

## Anti-Shortcut Notes

The previous version used dashed boundary lines on a uniform fill,
which was defeated in one step by `cv2.dilate(boundary, k)` followed
by `cv2.connectedComponents`. The new rendering blocks several attack
families simultaneously:

- **Boundary edge detection (Canny / Sobel)**: drowned by global
  speckle + per-region texture so edges fire uniformly across the
  whole image rather than only at region boundaries.
- **Colour histogram / k-means clustering**: the per-region HSV
  jitter ensures each region renders unique pixel colours even when
  two regions share the same palette anchors, so cluster counts have
  no clean correspondence to region counts.
- **LAB-space quantisation + connected components**: same — the
  jitter scatters pixels across many quantisation bins per region.
- **Smooth-then-detect (Gaussian / median / bilateral filter +
  Canny)**: smoothing strong enough to kill the texture also blurs
  region boundaries enough that small regions merge.

Empirically, the strongest CV attack tested (`median13 + Canny`) gives
MAE ~3.5 with 0/6 exact across a held-out set of v7 prototypes.

## Annotation Format

Each sample stores the partition metadata required to reproduce or
verify the answer:

```json
{
  "image": "images/counting_regions_00000.png",
  "width": 1024,
  "height": 1024,
  "grid_rows": 50,
  "grid_cols": 50,
  "square_left": 102.0,
  "square_top": 102.0,
  "square_size": 820.0,
  "num_regions": 7,
  "question": "How many separated regions are inside the square? ...",
  "answer": 7,
  "difficulty": "medium",
  "region_seed_cells": [...],
  "region_cell_counts": [...],
  "region_adjacency": [[0, 2], [0, 3], ...]
}
```
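Records with this shape can be sanity-checked before release. The sketch below assumes only the field names shown in the sample record and the invariants stated in Quality Checks (`answer` equals the post-cleanup `num_regions`, never below 2):

```python
REQUIRED = {"image", "width", "height", "num_regions", "question", "answer"}

def validate_record(rec):
    """Return (ok, message) for one annotation record."""
    missing = REQUIRED - rec.keys()
    if missing:
        return False, f"missing keys: {sorted(missing)}"
    if rec["answer"] != rec["num_regions"]:
        return False, "answer must equal post-cleanup num_regions"
    if rec["answer"] < 2:
        return False, "samples with fewer than 2 regions should be rejected"
    return True, "ok"
```

Running it over `annotations.jsonl` (one record per line) catches both schema drift and answers that disagree with the stored partition metadata.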

## Output Organization

```text
counting_regions/
  creation.py
  creation.md
  annotations.jsonl
  data.json
  images/
    counting_regions_00000.png
    ...
```
code/distributed_scanning/counting_regions/creation.py
ADDED
@@ -0,0 +1,783 @@
from __future__ import annotations

import argparse
import colorsys
import heapq
import json
import math
import random
from collections import deque
from pathlib import Path
from typing import Dict, List, Sequence, Tuple

import cv2
import numpy as np
from PIL import Image
from scipy.ndimage import zoom
from tqdm import tqdm


Cell = Tuple[int, int]
Point = Tuple[int, int]
Grid = List[List[int]]


# ── Palette + colour helpers ───────────────────────────────────────────

PALETTE = [
    (236, 152, 99),   # peach
    (111, 163, 220),  # clear blue
    (140, 197, 113),  # grass green
    (223, 127, 186),  # pink
    (194, 160, 85),   # ochre
    (157, 134, 212),  # purple
    (237, 211, 81),   # yellow
    (75, 184, 173),   # teal
    (224, 122, 112),  # coral red
    (122, 160, 106),  # sage
]

PALETTE_HUE = [
    colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] * 360
    for r, g, b in PALETTE
]


def hue_gap(h1: float, h2: float) -> float:
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)


def pair_hue_gap(pair_a: Tuple[int, int], pair_b: Tuple[int, int]) -> float:
    """Maximum, over each colour in pair_a, of the nearest hue distance to
    any colour in pair_b. Tells us how visually contrastable the two pairs
    are at their closest point."""
    best = 0.0
    for ca in pair_a:
        nearest = min(hue_gap(PALETTE_HUE[ca], PALETTE_HUE[cb]) for cb in pair_b)
        best = max(best, nearest)
    return best
def jitter_color(
    rng: random.Random,
    rgb: Tuple[int, int, int],
    hue_jitter: float = 18.0,
    sat_jitter: float = 0.12,
    val_jitter: float = 0.10,
) -> Tuple[float, float, float]:
    """Apply small HSV perturbation to a palette anchor so each region's
    rendered endpoints are unique (defeats colour-histogram attacks)."""
    r, g, b = [c / 255.0 for c in rgb]
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + rng.uniform(-hue_jitter, hue_jitter) / 360.0) % 1.0
    s = max(0.0, min(1.0, s + rng.uniform(-sat_jitter, sat_jitter)))
    v = max(0.0, min(1.0, v + rng.uniform(-val_jitter, val_jitter)))
    rr, gg, bb = colorsys.hsv_to_rgb(h, s, v)
    return (rr * 255.0, gg * 255.0, bb * 255.0)


# ── Partition (connected-component growing on a small grid) ────────────

def sample_spaced_cells(
    rng: random.Random,
    rows: int,
    cols: int,
    count: int,
    min_dist: float,
    max_attempts: int = 800,
) -> List[Cell]:
    cells: List[Cell] = []
    for _ in range(max_attempts):
        if len(cells) == count:
            break
        candidate = (rng.randrange(rows), rng.randrange(cols))
        if all(math.hypot(candidate[0] - r, candidate[1] - c) >= min_dist for r, c in cells):
            cells.append(candidate)
    while len(cells) < count:
        candidate = (rng.randrange(rows), rng.randrange(cols))
        if candidate not in cells:
            cells.append(candidate)
    return cells
def _make_noise_field(rng: random.Random, rows: int, cols: int, scale: int = 15) -> np.ndarray:
    lo_r = max(2, rows // scale)
    lo_c = max(2, cols // scale)
    lo = np.array([[rng.gauss(0, 1) for _ in range(lo_c)] for _ in range(lo_r)])
    field = zoom(lo, (rows / lo_r, cols / lo_c), order=1)
    return field[:rows, :cols]
def make_connected_partition(
    rng: random.Random,
    rows: int,
    cols: int,
    num_regions: int,
) -> Tuple[np.ndarray, List[Cell]]:
    """Two-phase partition.

    Phase 1 (sequential round-robin): each region in turn claims one
    cell from its current frontier, repeated until every region has at
    least `min_cells_per_region` cells. This guarantees a minimum size
    for every region and prevents tiny / vanishing regions.

    Phase 2 (priority-queue Voronoi-like): the remaining unclaimed cells
    are distributed using the original distance-to-centroid + noise-field
    cost so the final boundaries look organic.
    """
    labels = np.full((rows, cols), -1, dtype=np.int32)
    seeds = sample_spaced_cells(
        rng=rng,
        rows=rows,
        cols=cols,
        count=num_regions,
        min_dist=max(2.2, min(rows, cols) / max(3.4, math.sqrt(num_regions) + 0.5)),
    )
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    cr = [float(r) for r, c in seeds]
    cc = [float(c) for r, c in seeds]
    size = [1] * num_regions

    # ── Phase 1: sequential round-robin grow ──
    fair_share = (rows * cols) // num_regions
    min_cells_per_region = max(8, (fair_share * 7) // 10)  # 70% of fair share floor
    frontiers: List[List[Cell]] = [[] for _ in range(num_regions)]
    for rid, (r, c) in enumerate(seeds):
        labels[r, c] = rid
        for dr, dc in dirs:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == -1:
                frontiers[rid].append((nr, nc))

    while True:
        # Stop when every region reached the minimum, OR no region can grow.
        if all(s >= min_cells_per_region for s in size):
            break
        progress = False
        for rid in range(num_regions):
            if size[rid] >= min_cells_per_region:
                continue
            # Pull a random claimable cell from this region's frontier.
            f = frontiers[rid]
            while f:
                idx = rng.randrange(len(f))
                nr, nc = f.pop(idx)
                if labels[nr, nc] != -1:
                    continue
                labels[nr, nc] = rid
                n = size[rid]
                cr[rid] = (cr[rid] * n + nr) / (n + 1)
                cc[rid] = (cc[rid] * n + nc) / (n + 1)
                size[rid] += 1
                for dr, dc in dirs:
                    rr, ccc = nr + dr, nc + dc
                    if 0 <= rr < rows and 0 <= ccc < cols and labels[rr, ccc] == -1:
                        f.append((rr, ccc))
                progress = True
                break
        if not progress:
            break  # all frontiers exhausted (only possible at very high n)

    # ── Phase 2: priority-queue grow on remaining cells ──
    fields = [_make_noise_field(rng, rows, cols, scale=15) for _ in range(num_regions)]
    noise_strength = rows * cols * 0.03
    heap: list = []
    for rid in range(num_regions):
        for nr, nc in frontiers[rid]:
            if labels[nr, nc] == -1:
                d = (nr - cr[rid]) ** 2 + (nc - cc[rid]) ** 2
                d += fields[rid][nr, nc] * noise_strength
                d += rng.random() * 5.0
                heapq.heappush(heap, (d, rid, nr, nc))
    while heap:
        _, rid, nr, nc = heapq.heappop(heap)
        if labels[nr, nc] != -1:
            continue
        labels[nr, nc] = rid
        n = size[rid]
        cr[rid] = (cr[rid] * n + nr) / (n + 1)
        cc[rid] = (cc[rid] * n + nc) / (n + 1)
        size[rid] += 1
        for dr, dc in dirs:
            rr, ccc = nr + dr, nc + dc
            if 0 <= rr < rows and 0 <= ccc < cols and labels[rr, ccc] == -1:
                d = (rr - cr[rid]) ** 2 + (ccc - cc[rid]) ** 2
                d += fields[rid][rr, ccc] * noise_strength
                d += rng.random() * 5.0
                heapq.heappush(heap, (d, rid, rr, ccc))
    return labels, seeds
def upsample_labels(
    grid_labels: np.ndarray,
    canvas_w: int,
    canvas_h: int,
    smooth_sigma: float = 4.5,
) -> np.ndarray:
    """Upsample a low-resolution label map to canvas size with smooth
    boundaries by per-region soft-mask interpolation + argmax."""
    num_regions = int(grid_labels.max()) + 1
    soft = np.empty((num_regions, canvas_h, canvas_w), dtype=np.float32)
    for rid in range(num_regions):
        mask = (grid_labels == rid).astype(np.float32)
        big = cv2.resize(mask, (canvas_w, canvas_h), interpolation=cv2.INTER_LINEAR)
        if smooth_sigma > 0:
            big = cv2.GaussianBlur(big, (0, 0), sigmaX=smooth_sigma, sigmaY=smooth_sigma)
        soft[rid] = big
    return np.argmax(soft, axis=0).astype(np.int32)
def clean_tiny_components(labels: np.ndarray, min_frac: float = 0.002) -> np.ndarray:
    """Absorb tiny connected-component specks into the dominant neighbour
    label. Small ``min_frac`` keeps the cleanup conservative."""
    h, w = labels.shape
    threshold = int(h * w * min_frac)
    out = labels.copy()
    num_regions = int(labels.max()) + 1
    for rid in range(num_regions):
        mask = (out == rid).astype(np.uint8)
        num, comp = cv2.connectedComponents(mask)
        sizes = [int((comp == i).sum()) for i in range(num)]
        if num <= 2:
            continue
        keep = max(range(1, num), key=lambda i: sizes[i])
        for i in range(1, num):
            if i == keep:
                continue
            if sizes[i] < threshold:
                small = (comp == i)
                ys, xs = np.where(small)
                nbrs = []
                for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ny, nx = np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)
                    nbrs.append(out[ny, nx])
                nbr_labels = np.concatenate(nbrs)
                nbr_labels = nbr_labels[nbr_labels != rid]
                if len(nbr_labels) == 0:
                    continue
                replacement = int(np.bincount(nbr_labels).argmax())
                out[small] = replacement
    return out
def relabel_contiguous(labels: np.ndarray) -> Tuple[np.ndarray, int]:
    unique = np.unique(labels)
    remap = -np.ones(int(unique.max()) + 1, dtype=np.int32)
    for new_id, old_id in enumerate(unique):
        remap[old_id] = new_id
    return remap[labels], len(unique)


def adjacency_from_labels(labels: np.ndarray) -> Dict[int, set]:
    n = int(labels.max()) + 1
    adj: Dict[int, set] = {rid: set() for rid in range(n)}
    diff_h = labels[:, :-1] != labels[:, 1:]
    a_h = labels[:, :-1][diff_h]
    b_h = labels[:, 1:][diff_h]
    diff_v = labels[:-1, :] != labels[1:, :]
    a_v = labels[:-1, :][diff_v]
    b_v = labels[1:, :][diff_v]
    for a, b in zip(a_h.tolist(), b_h.tolist()):
        adj[a].add(b)
        adj[b].add(a)
    for a, b in zip(a_v.tolist(), b_v.tolist()):
        adj[a].add(b)
        adj[b].add(a)
    return adj


def adjacency_pairs(labels: np.ndarray) -> List[Tuple[int, int]]:
    adj = adjacency_from_labels(labels)
    edges: set[Tuple[int, int]] = set()
    for a, neighbours in adj.items():
        for b in neighbours:
            if a < b:
                edges.add((a, b))
    return sorted(edges)
# ── Gradient assignment with adjacency constraints ────────────────────

def assign_gradients(
    rng: random.Random,
    num_regions: int,
    adj: Dict[int, set],
    hue_gap_min: float = 55.0,
    max_attempts: int = 400,
) -> List[Tuple[Tuple[float, float, float], Tuple[float, float, float], float]] | None:
    """Greedy assignment with restart. Adjacent regions must differ in
    their unordered colour pair AND have at least ``hue_gap_min`` degrees
    of hue separation at their closest pairing."""
    palette_size = len(PALETTE)
    order = sorted(range(num_regions), key=lambda r: -len(adj[r]))

    # Since pure-flat fill uses only the FIRST colour of each pair, enforce
    # that the FIRST hues of adjacent regions are well separated, in addition
    # to the original pair-hue-gap constraint.
    PRIMARY_HUE_GAP_MIN = 60.0

    for _ in range(max_attempts):
        pairs: List[Tuple[int, int] | None] = [None] * num_regions
        ok = True
        for rid in order:
            cand = []
            for a in range(palette_size):
                for b in range(a + 1, palette_size):
                    cand.append((a, b))
            rng.shuffle(cand)
            chosen = None
            for a, b in cand:
                valid = True
                for n in adj[rid]:
                    if pairs[n] is None:
                        continue
                    na, nb = pairs[n][0], pairs[n][1]
                    if {a, b} == {na, nb}:
                        valid = False
                        break
                    if pair_hue_gap((a, b), (na, nb)) < hue_gap_min:
                        valid = False
                        break
                    if hue_gap(PALETTE_HUE[a], PALETTE_HUE[na]) < PRIMARY_HUE_GAP_MIN:
                        valid = False
                        break
                if valid:
                    chosen = (a, b)
                    break
            if chosen is None:
                ok = False
                break
            pairs[rid] = chosen
        if ok:
            angles = [rng.uniform(0, 2 * math.pi) for _ in range(num_regions)]
            jittered = []
            for p, ang in zip(pairs, angles):
                ca = jitter_color(rng, PALETTE[p[0]])
                cb = jitter_color(rng, PALETTE[p[1]])
                jittered.append((ca, cb, ang))
            return jittered
    return None
# ── Per-region textures (defeat canny + colour-cluster attacks) ───────

TEXTURE_STYLES = ["speckle", "stripes_h", "stripes_v", "stripes_d1",
                  "stripes_d2", "dots", "perlin"]


def assign_textures(
    rng: random.Random,
    num_regions: int,
    adj: Dict[int, set],
) -> List[Tuple[str, Dict[str, float]]]:
    """Pick a (style, params) per region. Adjacent regions get different
    styles where possible so texture acts as a region cue for humans."""
    out: List[Tuple[str, Dict[str, float]]] = []
    for rid in range(num_regions):
        used_by_neighbours = {out[n][0] for n in adj[rid] if n < rid}
        candidates = [s for s in TEXTURE_STYLES if s not in used_by_neighbours]
        if not candidates:
            candidates = TEXTURE_STYLES
        style = rng.choice(candidates)
        params = {
            "freq": rng.uniform(0.08, 0.22),
            "amp": rng.uniform(0.08, 0.16),
            "orient": rng.uniform(0, math.pi),
            "speckle_seed": rng.randint(0, 2**31 - 1),
        }
        out.append((style, params))
    return out
|
| 391 |
+
|
| 392 |
+
|
| 393 |
+
def make_texture(style: str, params: Dict[str, float], h: int, w: int, rng_seed: int) -> np.ndarray:
    rng = np.random.default_rng(rng_seed)
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    freq = params["freq"]
    orient = params["orient"]
    if style == "speckle":
        n = rng.standard_normal((h, w)).astype(np.float32)
        return cv2.GaussianBlur(n, (0, 0), sigmaX=0.7) * 1.4
    if style.startswith("stripes"):
        if style == "stripes_h":
            theta = 0.0
        elif style == "stripes_v":
            theta = math.pi / 2
        elif style == "stripes_d1":
            theta = math.pi / 4
        else:
            theta = -math.pi / 4
        u = xs * math.cos(theta) + ys * math.sin(theta)
        return np.sin(u * freq * 2 * math.pi).astype(np.float32)
    if style == "dots":
        u = xs * freq * 2 * math.pi
        v = ys * freq * 2 * math.pi
        return (np.sin(u) * np.sin(v)).astype(np.float32) * 1.2
    if style == "perlin":
        a = np.sin((xs * math.cos(orient) + ys * math.sin(orient)) * freq * 2 * math.pi)
        b = np.cos((xs * math.cos(orient + 1.1) + ys * math.sin(orient + 1.1)) * freq * 1.7 * 2 * math.pi)
        return ((a + b) * 0.5).astype(np.float32)
    return np.zeros((h, w), dtype=np.float32)

# ── Renderer ──────────────────────────────────────────────────────────

def render_region_canvas(
    labels: np.ndarray,
    assignments: List[Tuple],
    textures: List[Tuple[str, Dict[str, float]]],
    global_speckle: float = 0.05,
) -> np.ndarray:
    """Paint one uniform textured base canvas. The per-region gradient
    and texture arguments are accepted for API compatibility but are
    intentionally ignored: region discrimination relies solely on the
    boundary line drawn on top by render_instance."""
    h, w = labels.shape
    # SINGLE global base colour (warm light grey) across the whole canvas.
    # Region labels and per-region palette assignments are intentionally
    # ignored: the goal is one uniform texture that flows continuously
    # across the entire image, so the only visible cue for the partition
    # is the boundary line drawn on top by render_instance.
    base = np.array([220.0, 215.0, 205.0], dtype=np.float32)
    img = np.broadcast_to(base, (h, w, 3)).copy()

    # Two-octave smooth noise field across the whole canvas, plus fine grain.
    rng_np = np.random.default_rng(0xC0FFEE)
    big = rng_np.standard_normal((h, w)).astype(np.float32)
    big = cv2.GaussianBlur(big, (0, 0), sigmaX=h / 6.0, sigmaY=h / 6.0)
    med = rng_np.standard_normal((h, w)).astype(np.float32)
    med = cv2.GaussianBlur(med, (0, 0), sigmaX=h / 18.0, sigmaY=h / 18.0)
    noise = big / (big.std() + 1e-6) + 0.7 * med / (med.std() + 1e-6)
    noise /= (noise.std() + 1e-6)
    # Multiplicative brightness modulation across the whole canvas: ±25%.
    img = img * (1.0 + 0.25 * noise[..., None])
    # Per-channel fine grain (±8 brightness units).
    grain = rng_np.standard_normal((h, w, 3)).astype(np.float32) * 8.0
    img = img + grain
    return np.clip(img, 0, 255).astype(np.uint8)

BG_COLOR = (248, 246, 240)
BORDER_COLOR = (45, 39, 32)


def composite_full_image(painted: np.ndarray, width: int, height: int) -> np.ndarray:
    """Paste the painted region square onto the full canvas with a thin
    dark border around it."""
    canvas_size = painted.shape[0]
    margin = (min(width, height) - canvas_size) // 2
    full = np.full((height, width, 3), BG_COLOR, dtype=np.uint8)

    border = max(2, canvas_size // 256)
    y0 = margin
    x0 = (width - canvas_size) // 2
    full[y0:y0 + canvas_size, x0:x0 + canvas_size] = painted
    full[y0 - border:y0, x0 - border:x0 + canvas_size + border] = BORDER_COLOR
    full[y0 + canvas_size:y0 + canvas_size + border, x0 - border:x0 + canvas_size + border] = BORDER_COLOR
    full[y0 - border:y0 + canvas_size + border, x0 - border:x0] = BORDER_COLOR
    full[y0 - border:y0 + canvas_size + border, x0 + canvas_size:x0 + canvas_size + border] = BORDER_COLOR
    return full

# ── Sampling: build one valid instance ────────────────────────────────

def difficulty_for_regions(num_regions: int) -> str:
    if num_regions <= 6:
        return "easy"
    if num_regions <= 10:
        return "medium"
    return "hard"


def count_region_cells(labels: np.ndarray, num_regions: int) -> List[int]:
    return [int((labels == rid).sum()) for rid in range(num_regions)]

def sample_instance(
    rng: random.Random,
    width: int,
    height: int,
    min_regions: int,
    max_regions: int,
    grid_rows: int,
    grid_cols: int,
    canvas_size: int,
    min_region_frac: float,
    hue_gap_min: float,
    forced_target: int | None = None,
) -> Dict[str, object] | None:
    """Build one valid instance or return None if rejected. If forced_target
    is given, only accept samples whose actual region count equals it."""
    max_feasible_regions = min(max_regions, max(2, (grid_rows * grid_cols) // 6))
    min_feasible_regions = min(min_regions, max_feasible_regions)
    if forced_target is not None:
        target_n = max(2, min(max_feasible_regions, forced_target))
    else:
        target_n = rng.randint(max(2, min_feasible_regions), max_feasible_regions)

    grid_labels, seeds = make_connected_partition(rng, grid_rows, grid_cols, target_n)

    if forced_target is not None:
        # 200x200 grid + 50%-fair-share phase-1 floor makes every region
        # big enough to survive heavy smoothing. Use the smoothed argmax
        # path so boundaries are smooth curves instead of NEAREST staircase.
        canvas_labels = upsample_labels(grid_labels, canvas_size, canvas_size, smooth_sigma=14.0)
    else:
        canvas_labels = upsample_labels(grid_labels, canvas_size, canvas_size, smooth_sigma=9.0)
    if forced_target is None:
        # Normal path: clean tiny specks and apply min-area floor.
        canvas_labels = clean_tiny_components(canvas_labels, min_frac=0.002)
        canvas_labels, actual_n = relabel_contiguous(canvas_labels)
        if actual_n < 2:
            return None
        region_pixels = count_region_cells(canvas_labels, actual_n)
        min_pixels = int(canvas_size * canvas_size * min_region_frac)
        if min(region_pixels) < min_pixels:
            return None
    else:
        # Forced path: trust the seeded grow — the partition produced exactly
        # forced_target connected regions. Skip clean_tiny_components and the
        # min-area floor so the count is honoured exactly.
        canvas_labels, actual_n = relabel_contiguous(canvas_labels)
        if actual_n != forced_target:
            return None
        region_pixels = count_region_cells(canvas_labels, actual_n)

    adj = adjacency_from_labels(canvas_labels)
    assignments = assign_gradients(rng, actual_n, adj, hue_gap_min=hue_gap_min)
    if assignments is None:
        return None
    textures = assign_textures(rng, actual_n, adj)

    margin = int(min(width, height) * 0.12)
    square_size = canvas_size
    square_left = (width - square_size) // 2
    square_top = (height - square_size) // 2

    return {
        "width": width,
        "height": height,
        "grid_rows": grid_rows,
        "grid_cols": grid_cols,
        "square_left": float(square_left),
        "square_top": float(square_top),
        "square_size": float(square_size),
        "num_regions": actual_n,
        "question": (
            "How many separated regions are inside the square? "
            "A region is a maximal area inside the square that is filled with "
            "one continuous colour pattern. Two locations belong to the same "
            "region if you can travel between them without crossing into a "
            "differently-coloured area. Count every distinct region inside the "
            "square and report the total as a positive integer. "
            "Provide your final answer enclosed in <answer>...</answer> tags."
        ),
        "answer": actual_n,
        "difficulty": difficulty_for_regions(actual_n),
        "region_seed_cells": [[int(r), int(c)] for r, c in seeds],
        "region_cell_counts": region_pixels,
        "region_adjacency": [[int(a), int(b)] for a, b in adjacency_pairs(canvas_labels)],
        "_canvas_labels": canvas_labels,
        "_assignments": assignments,
        "_textures": textures,
    }

def render_instance(out_path: Path, record: Dict[str, object]) -> None:
    """Coloured fill renderer: each region is filled with its assigned
    gradient + texture, plus a thin dark boundary line drawn along the
    label-difference set so triple junctions never have hollow gaps."""
    canvas_labels: np.ndarray = record.pop("_canvas_labels")  # type: ignore[assignment]
    assignments = record.pop("_assignments")
    textures = record.pop("_textures")
    width = int(record["width"])
    height = int(record["height"])

    h, w = canvas_labels.shape
    painted = render_region_canvas(canvas_labels, assignments, textures)
    line_color = (32, 32, 32)
    line_thickness = max(1, min(h, w) // 700)

    # Smooth labels first via per-region one-hot → blur → argmax. This
    # removes the staircase from the original integer upsample but, crucially,
    # gives a SINGLE consistent label map (no triple-junction gaps).
    num_regions = int(canvas_labels.max()) + 1
    smooth_sigma = max(2.5, min(h, w) / 140.0)
    soft = np.empty((num_regions, h, w), dtype=np.float32)
    for rid in range(num_regions):
        mask = (canvas_labels == rid).astype(np.float32)
        soft[rid] = cv2.GaussianBlur(mask, (0, 0), sigmaX=smooth_sigma,
                                     sigmaY=smooth_sigma)
    smoothed_labels = np.argmax(soft, axis=0).astype(np.int32)

    # Boundary mask: a pixel is on a boundary iff any of its 4-neighbours has
    # a different label. This produces a single closed curve per junction
    # with no hollow gap at triple points.
    boundary = np.zeros((h, w), dtype=np.uint8)
    diff_h = smoothed_labels[:, :-1] != smoothed_labels[:, 1:]
    diff_v = smoothed_labels[:-1, :] != smoothed_labels[1:, :]
    boundary[:, :-1][diff_h] = 1
    boundary[:, 1:][diff_h] = 1
    boundary[:-1, :][diff_v] = 1
    boundary[1:, :][diff_v] = 1
    if line_thickness > 1:
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (line_thickness, line_thickness))
        boundary = cv2.dilate(boundary, kernel)
    painted[boundary > 0] = line_color

    full = composite_full_image(painted, width, height)
    Image.fromarray(full).save(out_path)

# ── Dataset generation ────────────────────────────────────────────────

def ensure_output_dir(root: Path) -> Tuple[Path, Path]:
    root.mkdir(parents=True, exist_ok=True)
    images_dir = root / "images"
    images_dir.mkdir(exist_ok=True)
    return root, images_dir

def generate_dataset(
    rng: random.Random,
    count: int,
    output_dir: Path,
    images_dir: Path,
    width: int,
    height: int,
    min_regions: int,
    max_regions: int,
    grid_rows: int,
    grid_cols: int,
    canvas_size: int,
    min_region_frac: float,
    hue_gap_min: float,
    forced_targets: List[int] | None = None,
) -> None:
    records: List[Dict[str, object]] = []
    data_records: List[Dict[str, object]] = []
    pbar = tqdm(total=count, desc="counting_regions")
    idx = 0
    rejects = 0
    while idx < count:
        target = (forced_targets[idx] if forced_targets and idx < len(forced_targets) else None)
        record = sample_instance(
            rng=rng,
            width=width,
            height=height,
            min_regions=min_regions,
            max_regions=max_regions,
            grid_rows=grid_rows,
            grid_cols=grid_cols,
            canvas_size=canvas_size,
            min_region_frac=min_region_frac,
            hue_gap_min=hue_gap_min,
            forced_target=target,
        )
        if record is None:
            rejects += 1
            continue
        image_name = f"counting_regions_{idx:05d}.png"
        render_instance(images_dir / image_name, record)
        record["image"] = f"images/{image_name}"
        records.append(record)
        data_records.append({
            "image": record["image"],
            "question": record["question"],
            "answer": record["answer"],
        })
        idx += 1
        pbar.update(1)
        pbar.set_postfix(answer=record["answer"], rejects=rejects)
    pbar.close()

    with (output_dir / "annotations.jsonl").open("w", encoding="utf-8") as fh:
        for record in records:
            fh.write(json.dumps(record) + "\n")

    data_json = {
        "task": "counting_regions",
        "category": "distributed_scanning",
        "count": len(data_records),
        "items": data_records,
    }
    with (output_dir / "data.json").open("w", encoding="utf-8") as fh:
        json.dump(data_json, fh, indent=2)

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Generate a counting-regions dataset.")
    parser.add_argument("--output-root", type=Path, required=True)
    parser.add_argument("--count", type=int, default=36)
    parser.add_argument("--width", type=int, default=1024)
    parser.add_argument("--height", type=int, default=1024)
    parser.add_argument("--min-regions", type=int, default=6)
    parser.add_argument("--max-regions", type=int, default=12)
    parser.add_argument("--grid-rows", type=int, default=200)
    parser.add_argument("--grid-cols", type=int, default=200)
    parser.add_argument("--canvas-size", type=int, default=820,
                        help="Pixel side of the painted square inside the image.")
    parser.add_argument("--min-region-frac", type=float, default=0.025,
                        help="Reject samples whose smallest region covers "
                             "less than this fraction of the canvas.")
    parser.add_argument("--hue-gap-min", type=float, default=55.0)
    parser.add_argument("--seed", type=int, default=23)
    parser.add_argument("--difficulty", type=int, default=5,
                        help="Integer difficulty >=0; scales region count.")
    return parser.parse_args()

def main() -> None:
    args = parse_args()
    d = max(0, int(args.difficulty))

    # Canvas scaling: N_d = 10 + d, N_0 = 10.
    N_d = 10 + d
    N_0 = 10
    s = math.sqrt(max(1.0, N_d / N_0))
    args.width = int(round(args.width * s))
    args.height = int(round(args.height * s))
    args.canvas_size = int(round(args.canvas_size * s))

    rng = random.Random(args.seed)
    output_dir, images_dir = ensure_output_dir(args.output_root)

    # num_regions ∈ [10, 10 + 2d]
    min_regions = 10
    max_regions = 10 + 2 * d
    # Auto-scale: min_region_frac = min(0.04, 1 / (2.5 * max_regions))
    # Use max_regions (upper bound) as a conservative frac floor so any
    # sampled count satisfies the constraint.
    min_region_frac = min(0.04, 1.0 / (2.5 * max_regions))

    # Force per-instance region counts evenly spaced across [min_regions,
    # max_regions]. With count=5 and range=[10, 20] this yields
    # [10, 12, 15, 18, 20] (Python's round() maps the half-way value
    # 12.5 down to 12).
    if args.count > 1:
        forced_targets = [
            int(round(min_regions + i * (max_regions - min_regions) / (args.count - 1)))
            for i in range(args.count)
        ]
    else:
        forced_targets = [min_regions]

    generate_dataset(
        rng=rng,
        count=args.count,
        output_dir=output_dir,
        images_dir=images_dir,
        width=args.width,
        height=args.height,
        min_regions=min_regions,
        max_regions=max_regions,
        grid_rows=args.grid_rows,
        grid_cols=args.grid_cols,
        canvas_size=args.canvas_size,
        min_region_frac=min_region_frac,
        hue_gap_min=args.hue_gap_min,
        forced_targets=forced_targets,
    )
    print(f"forced region counts: {forced_targets}")
    print(f"Saved dataset to {args.output_root} (canvas {args.width}x{args.height})")


if __name__ == "__main__":
    main()
code/distributed_scanning/counting_regions/data.json
ADDED
{
  "task": "counting_regions",
  "category": "distributed_scanning",
  "count": 5,
  "items": [
    {
      "image": "images/counting_regions_00000.png",
      "question": "How many separated regions are inside the image? A region is a maximal area filled with one continuous fill. Boundaries between regions may appear as continuous lines OR as a chain of small individual markers \u2014 discrete dots, footprints, dashes, or short segments aligned along a curve \u2014 that you must visually link together to recognize as a single boundary. Two locations belong to the same region if you can travel between them without crossing such a boundary (continuous or chained). Count every distinct region and report the total as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 10
    },
    {
      "image": "images/counting_regions_00001.png",
      "question": "How many separated regions are inside the image? A region is a maximal area filled with one continuous fill. Boundaries between regions may appear as continuous lines OR as a chain of small individual markers \u2014 discrete dots, footprints, dashes, or short segments aligned along a curve \u2014 that you must visually link together to recognize as a single boundary. Two locations belong to the same region if you can travel between them without crossing such a boundary (continuous or chained). Count every distinct region and report the total as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 12
    },
    {
      "image": "images/counting_regions_00002.png",
      "question": "How many separated regions are inside the image? A region is a maximal area filled with one continuous fill. Boundaries between regions may appear as continuous lines OR as a chain of small individual markers \u2014 discrete dots, footprints, dashes, or short segments aligned along a curve \u2014 that you must visually link together to recognize as a single boundary. Two locations belong to the same region if you can travel between them without crossing such a boundary (continuous or chained). Count every distinct region and report the total as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 15
    },
    {
      "image": "images/counting_regions_00003.png",
      "question": "How many separated regions are inside the image? A region is a maximal area filled with one continuous fill. Boundaries between regions may appear as continuous lines OR as a chain of small individual markers \u2014 discrete dots, footprints, dashes, or short segments aligned along a curve \u2014 that you must visually link together to recognize as a single boundary. Two locations belong to the same region if you can travel between them without crossing such a boundary (continuous or chained). Count every distinct region and report the total as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 18
    },
    {
      "image": "images/counting_regions_00004.png",
      "question": "How many separated regions are inside the image? A region is a maximal area filled with one continuous fill. Boundaries between regions may appear as continuous lines OR as a chain of small individual markers \u2014 discrete dots, footprints, dashes, or short segments aligned along a curve \u2014 that you must visually link together to recognize as a single boundary. Two locations belong to the same region if you can travel between them without crossing such a boundary (continuous or chained). Count every distinct region and report the total as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 20
    }
  ]
}
code/distributed_scanning/tangled_loops/creation.md
ADDED
# Tangled Closed-Loop Counting

## Goal

Render a tangle of smooth **closed loops** on a single canvas. Every
loop is drawn in the **same** dark colour, so the model cannot recover
the count by colour-segmenting the image. Loops cross each other and
themselves freely, but never run parallel for long stretches — every
loop remains individually traceable.

This is a *distributed scanning* task: there is no single starting
point. The model must look across the whole image, follow each loop
all the way around, and report how many distinct closed loops exist.

## Differs from `sequential_traversal/tube_connection`

`tube_connection` labels endpoints and asks "which top label connects
to which bottom label", which is solved by tracing one path at a time.
Here there are no labels, no endpoints at all, and no per-loop tracing
question. The model needs the global *count* of distinct closed loops.

## Differs from `sequential_traversal/line_intersections`

`line_intersections` uses perimeter-anchored open curves and asks for
crossing sequences along one labelled string. Here every curve is a
closed loop (no endpoints anywhere), every loop is anonymous, all in
the same colour, and the only ground truth is the number of loops.

## Question

> How many distinct closed loops are tangled together in this image?
> Each loop is a single continuous curve that closes back on itself —
> there are no loose endpoints anywhere. All loops are drawn in the
> same colour and may cross other loops or themselves freely. Count
> the total number of distinct closed loops and report the count as a
> positive integer.

## Generation Procedure

1. **Sample N** in `[min_loops, max_loops]` (default 5–11).
2. **For each loop**, build a smooth closed curve:
   - Pick a random interior centre so the loop fits inside
     `interior_margin` (default 55 px) of the canvas edge.
   - Sample 6–10 waypoints on a jittered ring around that centre
     (angle jitter ≈ 0.42 rad, radius jitter ≈ 32 % of the mean
     radius, mean radius sampled from 150–275 px).
   - Fit a periodic cubic B-spline through the waypoints
     (`scipy.interpolate.splprep(..., per=True, k=3)`) and
     dense-sample 900 points along it. The last point is snapped to
     equal the first so the renderer draws a genuine closed curve.
3. **Find all crossings** (self and pair) — used only by the
   close-approach validator below.
4. **Reject** the sample if any two curves come within ~9 px of each
   other anywhere except at a real crossing point. For self-closeness
   the segment-index gap is computed **circularly** (with wraparound),
   since closed curves have no linear endpoint ordering.
5. **Render** all loops in the same dark colour onto an off-white
   canvas with subtle background noise.
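
Step 2 above can be sketched as follows. This is a minimal illustration
using the parameter values from the list, not the generator's actual
code; the helper name `sample_loop` is invented for this sketch:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def sample_loop(rng: np.random.Generator, cx: float, cy: float) -> np.ndarray:
    """Return a (900, 2) dense polyline of one closed loop around (cx, cy)."""
    n = int(rng.integers(6, 11))                      # 6-10 waypoints
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    theta = theta + rng.uniform(-0.42, 0.42, n)       # angle jitter
    r = rng.uniform(150, 275) * (1 + 0.32 * rng.uniform(-1, 1, n))
    x = cx + r * np.cos(theta)
    y = cy + r * np.sin(theta)
    # Close the waypoint ring, then fit a periodic cubic B-spline through it.
    xc = np.append(x, x[0])
    yc = np.append(y, y[0])
    tck, _ = splprep([xc, yc], per=True, k=3, s=0)
    u = np.linspace(0, 1, 900)
    px, py = splev(u, tck)
    pts = np.stack([px, py], axis=1)
    pts[-1] = pts[0]                                  # snap: genuinely closed
    return pts
```

Because the spline is periodic, the sampled curve already nearly closes;
the final snap just makes the first and last vertices bit-identical.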

## Anti-Shortcut Notes

The task has been through two shortcut-plugging rewrites, plus one
validation rule retained from the start:

1. **Unique-colour shortcut → single stroke colour.** An earlier
   version assigned each string a unique colour, collapsing the task
   to "count distinct colours". Fixed by drawing every curve in the
   same dark colour.
2. **Skeleton-endpoint shortcut → closed loops.** The previous
   perimeter-anchored design had each string as an *open* curve with
   two endpoints on the border. A ten-line attack recovered the
   answer on 28 / 30 samples: `skeletonize(gray < 110)` →
   count pixels with exactly one 8-neighbour → divide by two.
   Crossings are X/T junctions (degree ≥ 3), so only the two terminal
   endpoints of each string have degree 1, and endpoints ÷ 2 is the
   string count. Moving to closed curves eliminates the attack
   entirely — a closed loop's skeleton has no degree-1 pixels, so the
   attack returns 0 regardless of loop count. Verified on the
   regenerated dataset.
3. **No-near-parallel validation** (retained) prevents two loops that
   run parallel for a long stretch from reading as one fat loop,
   which would also confuse a human counter.
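
The endpoint-counting step of the attack in note 2 can be sketched on an
already one-pixel-wide binary skeleton (a real attack would first run
`skimage.morphology.skeletonize` on the thresholded image; that step is
omitted here to keep the sketch dependency-light):

```python
import numpy as np
from scipy.ndimage import convolve

def open_curve_count(skel: np.ndarray) -> int:
    """Count degree-1 pixels of a 1-px-wide skeleton, divided by two.

    Each open string contributes exactly two endpoints; a closed loop
    has no degree-1 pixels at all, so this returns 0 on the loop design.
    """
    s = skel.astype(np.uint8)
    # 8-neighbour count per pixel (centre excluded).
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.uint8)
    neighbours = convolve(s, kernel, mode="constant")
    endpoints = int(((s == 1) & (neighbours == 1)).sum())
    return endpoints // 2
```

An open horizontal stroke yields 1, while a closed one-pixel ring yields
0, which is exactly why moving to closed loops kills the shortcut.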

## Annotation Format

```json
{
  "image": "images/tangled_loops_00000.png",
  "width": 1024,
  "height": 1024,
  "num_loops": 7,
  "question": "How many distinct closed loops are tangled together ...",
  "answer": 7
}
```

## Output Organization

```text
tangled_loops/
  creation.py
  creation.md
  annotations.jsonl
  data.json
  images/
    tangled_loops_00000.png
    ...
```
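
A record from `annotations.jsonl` can be sanity-checked like this. The
inline record mirrors the abbreviated example above (real records carry
the full question text); the invariant is that the graded `answer`
always equals the generator's `num_loops`:

```python
import json

line = ('{"image": "images/tangled_loops_00000.png", "width": 1024, '
        '"height": 1024, "num_loops": 7, '
        '"question": "How many distinct closed loops are tangled together ...", '
        '"answer": 7}')
record = json.loads(line)
# The graded answer must always match the generator's loop count.
assert record["answer"] == record["num_loops"]
```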
|
code/distributed_scanning/tangled_loops/creation.py
ADDED
"""Tangled Closed-Loop Counting.

Each string is a smooth **closed curve** living entirely in the interior
of the canvas. Loops are generated by sampling interior waypoints on a
jittered ring around a random centre and fitting a periodic cubic
B-spline through them — the resulting curve has no endpoints at all,
which plugs the "skeletonize → count degree-1 pixels → divide by 2"
shortcut that beat the previous perimeter-anchored design.

All loops render in the same dark colour. Loops may cross other loops
or themselves, but long parallel runs are rejected so every loop stays
individually traceable.

The model is asked to count the total number of distinct closed loops.
"""
from __future__ import annotations

import argparse
import json
import math
import os
import random
import sys
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Tuple

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import splev, splprep
from tqdm import tqdm


LINE_COLOR = "#2f2f2f"


# ── Closed-loop construction ───────────────────────────────────────

def build_closed_loop(
    rng: random.Random,
    width: int,
    height: int,
    interior_margin: int = 55,
    num_waypoints_range: Tuple[int, int] = (6, 10),
    radius_range: Tuple[float, float] = (130.0, 215.0),
    radius_jitter: float = 0.30,
    angle_jitter: float = 0.42,
    num_samples: int = 520,
    existing_centers: List[Tuple[float, float]] | None = None,
    min_center_gap: float = 190.0,
    center_placement_attempts: int = 80,
) -> Tuple[np.ndarray, Tuple[float, float]] | None:
    """Build one smooth closed curve through jittered ring waypoints.

    Returns ``(polyline, (cx, cy))`` where the polyline's last point
    equals its first, or ``None`` if no valid placement was found.
    """
    num_wp = rng.randint(*num_waypoints_range)
    r_mean = rng.uniform(*radius_range)
    max_r = r_mean * (1 + radius_jitter)

    low_x = interior_margin + max_r
    high_x = width - interior_margin - max_r
    low_y = interior_margin + max_r
    high_y = height - interior_margin - max_r
    if low_x >= high_x or low_y >= high_y:
        return None

    cx = cy = None
    centers = existing_centers or []
    for _ in range(center_placement_attempts):
        cand_x = rng.uniform(low_x, high_x)
        cand_y = rng.uniform(low_y, high_y)
        ok = True
        for ex_x, ex_y in centers:
            if (cand_x - ex_x) ** 2 + (cand_y - ex_y) ** 2 < min_center_gap ** 2:
                ok = False
                break
        if ok:
            cx, cy = cand_x, cand_y
            break
    if cx is None:
        return None

    base_angles = np.linspace(0.0, 2 * math.pi, num_wp, endpoint=False)
    phase = rng.uniform(0.0, 2 * math.pi)

    xs: List[float] = []
    ys: List[float] = []
    for base in base_angles:
        ang = base + phase + rng.uniform(-angle_jitter, angle_jitter)
        r = r_mean * (1.0 + rng.uniform(-radius_jitter, radius_jitter))
        xs.append(cx + r * math.cos(ang))
        ys.append(cy + r * math.sin(ang))

    # splprep with per=True expects the input to already be closed.
    xs.append(xs[0])
    ys.append(ys[0])

    try:
        tck, _ = splprep([xs, ys], s=0.0, per=True, k=3)
    except (TypeError, ValueError):
        return None

    u_dense = np.linspace(0.0, 1.0, num_samples)
    x_dense, y_dense = splev(u_dense, tck)
    poly = np.column_stack([np.asarray(x_dense), np.asarray(y_dense)])

    if (poly[:, 0].min() < interior_margin - 5
            or poly[:, 0].max() > width - interior_margin + 5
            or poly[:, 1].min() < interior_margin - 5
            or poly[:, 1].max() > height - interior_margin + 5):
        return None

    poly[-1] = poly[0]
    return poly, (cx, cy)


# ── Crossing detection (used for the close-approach validation) ────

def _segments_cross(p0, p1, q0, q1) -> bool:
    eps = 1e-8
    o1 = (p1[0]-p0[0])*(q0[1]-p0[1]) - (p1[1]-p0[1])*(q0[0]-p0[0])
    o2 = (p1[0]-p0[0])*(q1[1]-p0[1]) - (p1[1]-p0[1])*(q1[0]-p0[0])
    o3 = (q1[0]-q0[0])*(p0[1]-q0[1]) - (q1[1]-q0[1])*(p0[0]-q0[0])
    o4 = (q1[0]-q0[0])*(p1[1]-q0[1]) - (q1[1]-q0[1])*(p1[0]-q0[0])
    return ((o1 > eps and o2 < -eps) or (o1 < -eps and o2 > eps)) and \
           ((o3 > eps and o4 < -eps) or (o3 < -eps and o4 > eps))


def _circular_seg_gap(i: int, j: int, n: int) -> int:
    d = abs(i - j)
    return min(d, n - d)


def _find_crossings(
    poly_a: np.ndarray,
    poly_b: np.ndarray,
    same_curve: bool = False,
    min_seg_gap: int = 10,
) -> List[Dict]:
    na = len(poly_a) - 1
    nb = len(poly_b) - 1

    a_min_x = np.minimum(poly_a[:-1, 0], poly_a[1:, 0])
    a_max_x = np.maximum(poly_a[:-1, 0], poly_a[1:, 0])
    a_min_y = np.minimum(poly_a[:-1, 1], poly_a[1:, 1])
    a_max_y = np.maximum(poly_a[:-1, 1], poly_a[1:, 1])

    b_min_x = np.minimum(poly_b[:-1, 0], poly_b[1:, 0])
    b_max_x = np.maximum(poly_b[:-1, 0], poly_b[1:, 0])
    b_min_y = np.minimum(poly_b[:-1, 1], poly_b[1:, 1])
    b_max_y = np.maximum(poly_b[:-1, 1], poly_b[1:, 1])

    cell_size = max(np.median(np.concatenate([a_max_x - a_min_x,
                                              b_max_x - b_min_x])), 1.0) * 3

    grid_b = defaultdict(list)
    for j in range(nb):
        cx0 = int(b_min_x[j] / cell_size); cx1 = int(b_max_x[j] / cell_size)
        cy0 = int(b_min_y[j] / cell_size); cy1 = int(b_max_y[j] / cell_size)
        for gx in range(cx0, cx1 + 1):
            for gy in range(cy0, cy1 + 1):
                grid_b[(gx, gy)].append(j)

    checked = set()
    details: List[Dict] = []
    for i in range(na):
        cx0 = int(a_min_x[i] / cell_size); cx1 = int(a_max_x[i] / cell_size)
        cy0 = int(a_min_y[i] / cell_size); cy1 = int(a_max_y[i] / cell_size)
        for gx in range(cx0, cx1 + 1):
            for gy in range(cy0, cy1 + 1):
                if (gx, gy) not in grid_b:
                    continue
                for j in grid_b[(gx, gy)]:
                    if same_curve:
                        ii, jj = min(i, j), max(i, j)
                        # Closed curve: neighbours wrap around.
                        if _circular_seg_gap(ii, jj, na) < min_seg_gap:
                            continue
                        key = (ii, jj)
                    else:
                        key = (i, j)
                    if key in checked:
                        continue
                    checked.add(key)
                    if same_curve:
                        si, sj = key
                    else:
                        si, sj = i, j
                    p0, p1 = poly_a[si], poly_a[si + 1]
                    q0, q1 = poly_b[sj], poly_b[sj + 1]
                    if not _segments_cross(p0, p1, q0, q1):
                        continue
                    d1 = p1 - p0
                    d2 = q1 - q0
                    denom = d1[0] * d2[1] - d1[1] * d2[0]
                    if abs(denom) < 1e-12:
                        continue
                    ti = ((q0[0] - p0[0]) * d2[1] - (q0[1] - p0[1]) * d2[0]) / denom
                    px = float(p0[0] + ti * d1[0])
                    py = float(p0[1] + ti * d1[1])
                    details.append({"px": px, "py": py})
    return details


def _details_to_points(details: List[Dict]) -> np.ndarray:
    if not details:
        return np.zeros((0, 2))
    return np.array([[d["px"], d["py"]] for d in details])


def _curves_too_close(
    poly_a: np.ndarray,
    poly_b: np.ndarray,
    same_curve: bool = False,
    min_dist: float = 7.0,
    sample_step: int = 3,
    self_index_gap: int = 30,
    known_crossings: np.ndarray | None = None,
    crossing_exclude_radius: float = 55.0,
) -> bool:
    nb = len(poly_b) - 1
    b_pts = poly_b[:-1]
    b_vecs = poly_b[1:] - poly_b[:-1]
    b_lens_sq = np.maximum((b_vecs ** 2).sum(axis=1), 1e-12)

    has_crossings = known_crossings is not None and len(known_crossings) > 0

    for idx in range(0, len(poly_a), sample_step):
        px, py = poly_a[idx]
        p = np.array([px, py])
        dp = p - b_pts
        t = (dp * b_vecs).sum(axis=1) / b_lens_sq
        t = np.clip(t, 0.0, 1.0)
        proj = b_pts + t[:, None] * b_vecs
        dists = np.sqrt(((p - proj) ** 2).sum(axis=1))
        if same_curve:
            # Closed curve: mask neighbours with wraparound distance.
            seg_idx = np.arange(nb)
            lin = np.abs(seg_idx - idx)
            circ = np.minimum(lin, nb - lin)
            mask = circ < self_index_gap
            dists[mask] = 9999.0
        if dists.min() < min_dist:
            if has_crossings:
                cross_dists = np.sqrt(((p - known_crossings) ** 2).sum(axis=1))
                if cross_dists.min() < crossing_exclude_radius:
                    continue
            return True
    return False


# ── Instance sampling ──────────────────────────────────────────────

QUESTION = (
    "How many distinct closed loops are tangled together in this image? "
    "Each loop is a single continuous curve that closes back on itself — "
    "there are no loose endpoints anywhere. All loops are drawn in the "
    "same colour and may cross other loops or themselves freely. Count "
    "the total number of distinct closed loops and report the count as "
    "a positive integer. "
    "Provide your final answer enclosed in <answer>...</answer> tags."
)


def _min_crossing_angle_deg(poly_a: np.ndarray, poly_b: np.ndarray,
                            details: List[Dict], same_curve: bool = False) -> float:
    """Return the smallest crossing angle (deg) across all crossings, or
    180.0 if there are no crossings."""
    if not details:
        return 180.0
    # Re-detect with segment indices for angle computation.
    na = len(poly_a) - 1
    nb = len(poly_b) - 1
    min_angle = 180.0
    for i in range(na):
        p0, p1 = poly_a[i], poly_a[i + 1]
        for j in range(nb):
            if same_curve:
                ii, jj = min(i, j), max(i, j)
                if _circular_seg_gap(ii, jj, na) < 10:
                    continue
            q0, q1 = poly_b[j], poly_b[j + 1]
            if not _segments_cross(p0, p1, q0, q1):
                continue
            d1 = p1 - p0
            d2 = q1 - q0
            n1 = math.hypot(d1[0], d1[1])
            n2 = math.hypot(d2[0], d2[1])
            if n1 < 1e-9 or n2 < 1e-9:
                continue
            cos_a = (d1[0] * d2[0] + d1[1] * d2[1]) / (n1 * n2)
            cos_a = max(-1.0, min(1.0, cos_a))
            ang = math.degrees(math.acos(abs(cos_a)))
            if ang < min_angle:
                min_angle = ang
    return min_angle


def sample_instance(
    rng: random.Random,
    width: int,
    height: int,
    num_loops: int,
    interior_margin: int = 55,
    max_attempts: int = 600,
    min_inter_crossings: int = 0,
    max_self_crossings_per_loop: int = 0,
    min_crossing_angle_deg: float = 30.0,
) -> Dict | None:
    for _ in range(max_attempts):
        polylines: List[np.ndarray] = []
        centers: List[Tuple[float, float]] = []
        build_failed = False
        for _ in range(num_loops):
            result = None
            for _ in range(40):
                result = build_closed_loop(
                    rng, width, height,
                    interior_margin=interior_margin,
                    existing_centers=centers,
                )
                if result is not None:
                    break
            if result is None:
                build_failed = True
                break
            poly, centre = result
            polylines.append(poly)
            centers.append(centre)
        if build_failed:
            continue

        self_details: List[List[Dict]] = []
        pair_details: Dict[Tuple[int, int], List[Dict]] = {}
        for a in range(num_loops):
            self_details.append(_find_crossings(polylines[a], polylines[a],
                                                same_curve=True))
            for b in range(a + 1, num_loops):
                pair_details[(a, b)] = _find_crossings(polylines[a], polylines[b])

        # Enforce: no self-crossings beyond allowed limit.
        if any(len(sd) > max_self_crossings_per_loop for sd in self_details):
            continue

        # Enforce: total inter-loop crossings >= min_inter_crossings.
        total_inter = sum(len(v) for v in pair_details.values())
        if total_inter < min_inter_crossings:
            continue

        # Enforce: every crossing has angle >= min_crossing_angle_deg.
        bad_angle = False
        for a in range(num_loops):
            if _min_crossing_angle_deg(polylines[a], polylines[a],
                                       self_details[a], same_curve=True) < min_crossing_angle_deg:
                bad_angle = True
                break
            for b in range(a + 1, num_loops):
                if _min_crossing_angle_deg(polylines[a], polylines[b],
                                           pair_details[(a, b)]) < min_crossing_angle_deg:
                    bad_angle = True
                    break
            if bad_angle:
                break
        if bad_angle:
            continue

        too_close = False
        for a in range(num_loops):
            if _curves_too_close(polylines[a], polylines[a], same_curve=True,
                                 known_crossings=_details_to_points(self_details[a])):
                too_close = True
                break
            for b in range(a + 1, num_loops):
                if _curves_too_close(polylines[a], polylines[b],
                                     known_crossings=_details_to_points(pair_details[(a, b)])):
                    too_close = True
                    break
            if too_close:
                break
        if too_close:
            continue

        return {
            "width": width,
            "height": height,
            "num_loops": num_loops,
            "polylines": polylines,
            "inter_loop_crossings": int(total_inter),
            "question": QUESTION,
            "answer": num_loops,
        }
    return None


# ── Rendering ──────────────────────────────────────────────────────

def render_instance(out_path: Path, record: Dict, noise_seed: int,
                    thickness: float) -> None:
    width = int(record["width"])
    height = int(record["height"])
    polylines = record["polylines"]

    fig = plt.figure(figsize=(width / 100, height / 100), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.set_xlim(0, width)
    ax.set_ylim(height, 0)
    ax.axis("off")
    ax.set_facecolor("#f8f6f0")

    nrng = np.random.default_rng(noise_seed)
    noise = nrng.normal(0.0, 1.0, size=(height, width))
    noise = (noise - noise.min()) / max(noise.max() - noise.min(), 1e-6)
    ax.imshow(noise, cmap="Greys", alpha=0.05, extent=(0, width, height, 0),
              interpolation="bilinear")

    for poly in polylines:
        ax.plot(poly[:, 0], poly[:, 1],
                color=LINE_COLOR, linewidth=thickness, alpha=0.92,
                solid_capstyle="round", solid_joinstyle="round",
                zorder=2.0)

    fig.savefig(out_path, dpi=100, bbox_inches="tight", pad_inches=0)
    plt.close(fig)


# ── Main ───────────────────────────────────────────────────────────

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--output-root", type=Path, required=True)
    parser.add_argument("--count", type=int, default=30)
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--width", type=int, default=1024)
    parser.add_argument("--height", type=int, default=1024)
    parser.add_argument("--min-loops", type=int, default=3)
    parser.add_argument("--max-loops", type=int, default=5)
    parser.add_argument("--thickness", type=float, default=2.0,
                        help="Absolute pixel thickness; never scaled.")
    parser.add_argument("--difficulty", type=int, default=5,
                        help="Integer difficulty >=0; scales loop count.")
    parser.add_argument("--workers", type=int, default=8,
                        help="Parallel worker processes for sampling. 1 = serial.")
    args = parser.parse_args()

    d = max(0, int(args.difficulty))

    # Canvas scaling: N_d = 5 + d, N_0 = 5.
    N_d = 5 + d
    N_0 = 5
    s = math.sqrt(max(1.0, N_d / N_0))
    args.width = int(round(args.width * s))
    args.height = int(round(args.height * s))

    # num_loops ∈ [10, 10 + 2d]
    args.min_loops = 10
    args.max_loops = 10 + 2 * d

    # Fixed constraints.
    max_self_crossings_per_loop = 0
    min_inter_crossings = 10 + 2 * d
    min_crossing_angle_deg = 30.0
    loop_thickness_px = 4.0  # absolute; do not scale

    out_root: Path = args.output_root
    img_dir = out_root / "images"
    img_dir.mkdir(parents=True, exist_ok=True)

    sys.path.insert(0, str(Path(__file__).resolve().parents[3]))
    from _sample_pool import parallel_sample_records  # noqa: E402

    # Force evenly-spaced answer counts across [min_loops, max_loops].
    if args.count > 1:
        forced_targets = [
            int(round(args.min_loops + i * (args.max_loops - args.min_loops) / (args.count - 1)))
            for i in range(args.count)
        ]
    else:
        forced_targets = [args.min_loops]
    print(f"forced loop counts: {forced_targets}")

    records_raw = []
    for ti, tgt in enumerate(forced_targets):
        def _attempt(rng, _tgt=tgt):
            rec = sample_instance(
                rng, args.width, args.height, num_loops=_tgt,
                min_inter_crossings=min_inter_crossings,
                max_self_crossings_per_loop=max_self_crossings_per_loop,
                min_crossing_angle_deg=min_crossing_angle_deg,
                max_attempts=50,
            )
            return rec
        sub = parallel_sample_records(
            _attempt, count=1, workers=args.workers,
            seed_base=args.seed + ti * 977,
        )
        records_raw.extend(sub)
    rng_render = random.Random(args.seed ^ 0xA5A5)
    records = []
    for idx, record in enumerate(records_raw):
        name = f"tangled_loops_{idx:05d}.png"
        ns = rng_render.randint(0, 10**9)
        render_instance(img_dir / name, record, noise_seed=ns,
                        thickness=loop_thickness_px)
        record.pop("polylines")
        record["image"] = f"images/{name}"
        records.append(record)
    print(f" {len(records)}/{args.count} valid samples (workers={args.workers})")

    with (out_root / "annotations.jsonl").open("w") as fh:
        for r in records:
            fh.write(json.dumps(r) + "\n")

    data_json = {
        "task": "tangled_loops",
        "category": "distributed_scanning",
        "count": len(records),
        "items": [
            {"image": r["image"], "question": r["question"], "answer": r["answer"]}
            for r in records
        ],
    }
    (out_root / "data.json").write_text(json.dumps(data_json, indent=2))
    print(f"Saved {len(records)} items to {out_root}")


if __name__ == "__main__":
    main()
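The difficulty knob in `main()` drives every derived parameter. A minimal sketch of that mapping (the helper name `difficulty_params` is ours; the formulas are lifted directly from `main()` above):

```python
import math

def difficulty_params(d: int, base_wh: int = 1024, count: int = 5):
    """Reproduce the difficulty -> parameter mapping from main()."""
    d = max(0, int(d))
    s = math.sqrt(max(1.0, (5 + d) / 5))           # canvas scale factor
    width = int(round(base_wh * s))                 # height scales identically
    min_loops, max_loops = 10, 10 + 2 * d           # answer range
    # Evenly spaced forced answers across [min_loops, max_loops].
    if count > 1:
        targets = [int(round(min_loops + i * (max_loops - min_loops) / (count - 1)))
                   for i in range(count)]
    else:
        targets = [min_loops]
    return width, (min_loops, max_loops), targets

# The default difficulty 5 with count=5 yields the answers seen in data.json:
print(difficulty_params(5))   # (1448, (10, 20), [10, 12, 15, 18, 20])
```

Note that the 12 and 18 come from Python's round-half-to-even behaviour on 12.5 and 17.5, which is why the released answers are not perfectly evenly spaced.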
code/distributed_scanning/tangled_loops/data.json
ADDED
|
@@ -0,0 +1,32 @@
{
  "task": "tangled_loops",
  "category": "distributed_scanning",
  "count": 5,
  "items": [
    {
      "image": "images/tangled_loops_00000.png",
      "question": "How many distinct closed loops are tangled together in this image? Each loop is a continuous closed curve (rope, cord, cable, etc.) with no loose endpoints anywhere. Loops may cross other loops or themselves freely. Count the total number of distinct closed loops and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 10
    },
    {
      "image": "images/tangled_loops_00001.png",
      "question": "How many distinct closed loops are tangled together in this image? Each loop is a continuous closed curve (rope, cord, cable, etc.) with no loose endpoints anywhere. Loops may cross other loops or themselves freely. Count the total number of distinct closed loops and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 12
    },
    {
      "image": "images/tangled_loops_00002.png",
      "question": "How many distinct closed loops are tangled together in this image? Each loop is a continuous closed curve (rope, cord, cable, etc.) with no loose endpoints anywhere. Loops may cross other loops or themselves freely. Count the total number of distinct closed loops and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 15
    },
    {
      "image": "images/tangled_loops_00003.png",
      "question": "How many distinct closed loops are tangled together in this image? Each loop is a continuous closed curve (rope, cord, cable, etc.) with no loose endpoints anywhere. Loops may cross other loops or themselves freely. Count the total number of distinct closed loops and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 18
    },
    {
      "image": "images/tangled_loops_00004.png",
      "question": "How many distinct closed loops are tangled together in this image? Each loop is a continuous closed curve (rope, cord, cable, etc.) with no loose endpoints anywhere. Loops may cross other loops or themselves freely. Count the total number of distinct closed loops and report the count as a positive integer. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": 20
    }
  ]
}
code/gpt_image_prompts.json
ADDED
|
The diff for this file is too large to render.
code/scope.md
ADDED
|
@@ -0,0 +1,38 @@
# Active Vision Benchmark

Today's multimodal large language models (MLLMs) achieve strong performance on many vision-language tasks, but they typically process images as fixed embeddings. Human reasoning, by contrast, is often active: perception is continuously guided by intermediate reasoning. In psychology, active observers can readily solve tasks that are ill-posed for passive observers. In this paper, we investigate whether MLLMs can exhibit a similar form of active observation. We introduce a benchmark that requires iterative visual inspection, including distributed scanning, sequential traversal, and visual attribute transfer, where evidence must be accumulated across spatial locations and reasoning steps. State-of-the-art MLLMs show substantial performance drops on these tasks, and attention analysis reveals unstable and limited use of visual tokens during reasoning. These results suggest that current MLLMs lack robust active visual observation, motivating new methods and architectures for iterative, perception-driven reasoning.

### Distributed Scanning.

Tasks that require the model to inspect multiple spatially separated regions and aggregate locally identifiable evidence across the image. The main challenge is exhaustive visual coverage rather than global structural inference, as the answer is obtained by repeatedly finding and accumulating relevant local signals.

### Sequential Traversal.

Tasks that require the model to follow a path, line, wire, or other connected structure step by step while maintaining intermediate state. The answer depends on ordered inspection along the structure, such as tracing a route, identifying visited elements, or counting events encountered during traversal.

### Visual Attribute Transfer.

Tasks that require the model to extract a fine-grained visual property from one region and match or apply it to another region. The transferred property is primarily visual rather than linguistic, such as length, curvature, thickness, or spacing, and the task tests whether the model can preserve and compare such information across separated parts of the image.

### Requirement

A task qualifies for the benchmark only if it forces the model to *actively look at the image during reasoning*. Concretely, every task must defeat the following six shortcuts.

**Shortcut 1 — One-shot perception, then pure text reasoning.**
The model summarises the entire image in a single pass into a compact symbolic description (e.g. an adjacency list, a coordinate table, a grid of arrows), then solves the task by reasoning over that description without looking again. To block this, an instance must contain too much fine-grained visual state to be losslessly extracted in one pass at the resolutions models actually see. For example, a *Color Zone Sequence* image carries a continuous smooth curve whose region membership changes dozens of times along its length; verbalising every crossing in advance is itself the task. A *Connectivity Spotting* graph is dense enough that the model would have to verbalise the entire connectivity structure — at which point it is solving the task, not pre-extracting it. Difficulty should scale with image complexity (more zones, more arrows, more crossings) so that the one-shot description grows past what the model can reliably hold.

**Shortcut 2 — Write code to solve the task end-to-end.**
The model emits Python that runs OCR, edge detection, blob counting, or vector tracing on the raw image and returns the answer without further reasoning. To block this, the visual primitives must require human-style perception that off-the-shelf CV libraries do not solve cleanly: smoothly anti-aliased turtle curves rather than crisp lines, color-only region boundaries with no contour to detect, hand-placed arrows pointing in arbitrary directions, weighted-graph layouts whose edges are styled rather than thresholded. The answer should also depend on a continuous tracing or counting decision (which arrow comes next, which region is visited 4th) that scripted pipelines get wrong on the soft, irregular renderings used here.

**Shortcut 3 — Statistical priors / answer-distribution leakage.**
Without looking, the model guesses the modal answer for the task type (e.g. "shortest paths are usually 3 hops", "arrow chains usually terminate at A"). To block this, per-task answer distributions should be flat, with no correlation between trivially extractable image features (region count, canvas size, color palette) and the answer. Each task's annotations should be auditable for this.
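One concrete way to run that audit over a task's released annotations (a sketch; the 0.3 flatness threshold is our illustrative choice, not part of the benchmark spec):

```python
import json
from collections import Counter

def audit_answer_distribution(data_json_path: str, max_modal_share: float = 0.3):
    """Flag a task whose modal answer is guessable without looking.

    Loads a task's data.json (the format released alongside each task)
    and checks that no single answer accounts for more than
    max_modal_share of the instances.
    """
    with open(data_json_path) as fh:
        data = json.load(fh)
    counts = Counter(item["answer"] for item in data["items"])
    modal_share = max(counts.values()) / sum(counts.values())
    return modal_share <= max_modal_share, dict(counts)
```

For the tangled_loops split above, the five answers 10/12/15/18/20 each appear once, so the modal share is 0.2 and the audit passes.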

**Shortcut 4 — Gestalt heuristics instead of step-by-step tracing.**
The model glances at the image and uses a learned visual prior — "the green arrow's circle is on the left, the red terminus is on the right, so go right" — without actually executing the traversal. This is especially dangerous on Sequential Traversal: the model can short-circuit a 6-hop arrow chain by interpolating from start-position and end-position. To block this, insert decoys and detours so the geometric "obvious" path is wrong, the answer requires more than half the canvas to be inspected, and the traversal length is long enough that intermediate state is required (mental running count, current-region tracking).

**Shortcut 5 — Memorisation of the released benchmark.**
Once the dataset is public, models can be fine-tuned on it directly. To block this, the generation pipeline is the artifact. We release `creation.py` with a seed protocol so that *new* unseen instances can be regenerated by evaluators, and we keep a small held-out split with seeds never published. Difficulty scaling (more zones, longer chains) also means evaluators can request harder splits than the released set.
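The seed protocol is easy to exercise. A sketch (the base-seed value is hypothetical; the `977` stride mirrors the `seed_base = seed + ti * 977` derivation in the tangled_loops `creation.py`, so one fresh base seed changes every instance):

```python
# Derive the per-target worker seed bases for a fresh, unseen split,
# matching creation.py's seed_base = seed + ti * 977 logic.
BASE_SEED = 20240601   # hypothetical: any value outside the released splits

def derived_seed_bases(base_seed: int, num_targets: int) -> list:
    return [base_seed + ti * 977 for ti in range(num_targets)]

print(derived_seed_bases(BASE_SEED, 3))   # [20240601, 20241578, 20242555]

# An evaluator would then regenerate a harder, unseen split with e.g.:
#   python creation.py --output-root out_fresh --count 30 \
#       --seed 20240601 --difficulty 7 --workers 8
```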
| 36 |
+
|
| 37 |
+
**Shortcut 6 — Agent-tool zoom / crop / OCR loops.**
|
| 38 |
+
A tool-using agent can repeatedly crop, upscale, or OCR sub-regions until the answer falls out, without exercising the model's own active visual reasoning. To block this in tool-using settings, the canvas must already be at a resolution where a single zoom does not localise the answer, the answer should depend on integrating evidence across non-adjacent regions, and any single crop must omit information that another crop also needs.

code/sequential_traversal/arrow_chain/airplane_stamps/00.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/airplane_stamps/01.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/airplane_stamps/02.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/airplane_stamps/03.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/airplane_stamps/04.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/airplane_stamps/05.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/airplane_template_grid.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_stamps/00.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_stamps/01.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_stamps/02.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_stamps/03.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_stamps/04.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_stamps/05.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/bird_template_grid.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/creation.py ADDED
@@ -0,0 +1,700 @@
"""Arrow Chain Traversal (irregular scattered arrows).

Small arrows are scattered across an open canvas. Each arrow is enclosed
in its own circle, and points at the circle's center — so a ray extended
from the arrow through the centre exits the circle on the opposite side.
A handful of larger, labeled terminus circles sit among the arrows.

The traversal rule is pure first-hit ray casting:

- Start at the green arrow S.
- Draw a ray from the current arrow's centre along its pointing
  direction. The next step is the FIRST other circle the ray enters.
- Continue until the ray enters a labeled terminus circle; report its
  label.

The design deliberately avoids a grid. The "next element" is a global,
ray-dependent matching problem over every circle on the canvas, so the
transition table can only be reconstructed by doing the same geometric
work as the task itself — for every arrow. Decoys are placed everywhere
except in corridors that would interfere with the intended chain.
"""
from __future__ import annotations

import argparse
import json
import math
import os
import random
import string
from pathlib import Path
from typing import Dict, List, Tuple

import matplotlib
matplotlib.use("Agg")
import matplotlib.image as mpimg
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import numpy as np
from scipy.ndimage import rotate as _ndimage_rotate
from tqdm import tqdm

# Stamp pools: each is a list of (template-image, default-rotation-offset-deg)
# where default-rotation tells the renderer how to interpret the template's
# natural orientation. The original foot stamp uses an offset of +50° because
# the photo's toes are tilted ~50° CCW from straight-up. The fish and key
# grids were generated with stamps facing straight up, so offset = 0°.
_FOOT_TEMPLATE: np.ndarray | None = None
_FOOT_TEMPLATE_PATH = (
    Path(__file__).resolve().parents[2]
    / "visual_attribute_transfer/constellation_match_count/image.png"
)
_STAMP_POOLS: dict[str, tuple[list[np.ndarray], list[float]]] = {}

# Pool name read from $ARROW_CHAIN_STAMP (default "foot"). Options: foot, fish, key.
_STAMP_POOL_NAME = os.environ.get("ARROW_CHAIN_STAMP", "foot")


def _foot_template() -> np.ndarray:
    global _FOOT_TEMPLATE
    if _FOOT_TEMPLATE is None:
        _FOOT_TEMPLATE = mpimg.imread(str(_FOOT_TEMPLATE_PATH))
    return _FOOT_TEMPLATE


_BG_RGB = (0xf3 / 255.0, 0xef / 255.0, 0xe8 / 255.0)  # matches scene BG


def _chroma_key_to_alpha(rgb: np.ndarray, tol: float = 0.16) -> np.ndarray:
    """Convert HxWx3 to HxWx4 with the stamp's dominant CORNER colour
    keyed to alpha=0. Auto-detects whether the background is light (fish
    paper) or dark (key leather). Critical so the stamp doesn't carry a
    visible halo into the scene that gpt-image-2 would render as a
    container."""
    if rgb.ndim != 3 or rgb.shape[2] not in (3, 4):
        return rgb
    if rgb.shape[2] == 4:
        return rgb
    h, w = rgb.shape[:2]
    # Sample 4 corners to estimate background colour.
    corners = np.stack([rgb[0, 0], rgb[0, w-1], rgb[h-1, 0], rgb[h-1, w-1]])
    bg = corners.mean(axis=0)
    # Per-pixel distance to background colour (Euclidean in RGB).
    dist = np.linalg.norm(rgb - bg, axis=2)
    # Smooth alpha: fully transparent at dist <= tol*0.5, fully opaque at dist >= tol*1.5.
    lo = tol * 0.5
    hi = tol * 1.5
    alpha = np.clip((dist - lo) / max(hi - lo, 1e-6), 0.0, 1.0).astype(rgb.dtype)
    rgba = np.concatenate([rgb, alpha[..., None]], axis=2)
    return rgba


def _pad_to_square(arr: np.ndarray, fill: tuple[float, float, float] | None = None) -> np.ndarray:
    """Pad an HxWxC array to a square so scipy.ndimage.rotate operates
    isotropically. With RGBA arrays, padding is fully transparent."""
    h, w = arr.shape[:2]
    n = max(h, w)
    if h == w == n:
        return arr
    if arr.ndim == 3:
        c = arr.shape[2]
        pad = np.zeros((n, n, c), dtype=arr.dtype)
        if c == 4:
            pad[..., 3] = 0.0  # transparent
        elif fill is None:
            for k in range(3):
                pad[..., k] = _BG_RGB[k]
        else:
            for k in range(3):
                pad[..., k] = fill[k]
    else:
        pad = np.full((n, n), fill[0] if fill else 1.0, dtype=arr.dtype)
    y0 = (n - h) // 2
    x0 = (n - w) // 2
    pad[y0:y0+h, x0:x0+w] = arr
    return pad


def _stamp_pool(name: str) -> tuple[list[np.ndarray], list[float]]:
    """Return (list-of-templates, list-of-rotation-offsets-deg) for the pool."""
    if name in _STAMP_POOLS:
        return _STAMP_POOLS[name]
    base = Path(__file__).resolve().parent
    if name == "foot":
        templates = [_pad_to_square(_foot_template())]
        offsets = [50.0]  # foot photo tilted ~50° CCW vs straight-up
    elif name in ("fish", "key", "airplane", "bird", "leaf"):
        d = base / f"{name}_stamps"
        files = sorted(d.glob("*.png"))
        if not files:
            raise FileNotFoundError(f"no stamps in {d}")
        templates = [
            _pad_to_square(_chroma_key_to_alpha(mpimg.imread(str(f))))
            for f in files
        ]
        # Per-stamp offsets stored by the calibration page.
        offsets_path = d / "offsets.json"
        per_stamp = {}
        if offsets_path.exists():
            try:
                per_stamp = json.loads(offsets_path.read_text())
            except Exception:
                per_stamp = {}
        offsets = [float(per_stamp.get(f"{i:02d}", 0.0)) for i in range(len(templates))]
    else:
        raise ValueError(f"unknown stamp pool: {name}")
    _STAMP_POOLS[name] = (templates, offsets)
    return _STAMP_POOLS[name]


ARROW_COLOR = "#2d2d2d"
START_COLOR = "#1f9d55"
TERMINUS_COLORS = [
    "#c23030", "#1a6dba", "#c47a18", "#7a35a0",
    "#2d8e2d", "#b83280", "#0f766e", "#b45309",
]


def _wrap(a: float) -> float:
    return (a + math.pi) % (2 * math.pi) - math.pi


def _ray_circle_entry(
    origin: np.ndarray,
    direction_unit: np.ndarray,
    center: np.ndarray,
    radius: float,
) -> float | None:
    """Return the entry t along the ray into a circle, or None if it misses.

    t is the signed distance along the (unit) ray direction at which the
    ray first crosses the circle boundary. t < 0 or None → the circle is
    not hit forward from the origin.
    """
    to_c = center - origin
    proj = float(to_c[0] * direction_unit[0] + to_c[1] * direction_unit[1])
    to_c_sq = float(to_c[0] ** 2 + to_c[1] ** 2)
    perp_sq = max(0.0, to_c_sq - proj * proj)
    r_sq = radius * radius
    if perp_sq > r_sq:
        return None
    offset = math.sqrt(r_sq - perp_sq)
    t_entry = proj - offset
    if t_entry < 0.0:
        t_exit = proj + offset
        if t_exit < 0.0:
            return None
        return 0.0  # origin is already inside the circle
    return t_entry


def _ray_min_gap_to_circles(
    origin: np.ndarray,
    direction_angle: float,
    t_end: float,
    other_circles: List[Tuple[np.ndarray, float, object]],
) -> float:
    """Minimum gap between the ray segment [0, t_end] and each circle boundary.

    For each circle, computes the closest distance from the segment to the
    circle's center, subtracts the radius. Negative means the segment enters
    the circle. Returns the minimum over all circles (most threatening near-
    miss).
    """
    dv = np.array([math.cos(direction_angle), math.sin(direction_angle)])
    best = float("inf")
    for center, radius, _tag in other_circles:
        to_c = center - origin
        proj = float(to_c[0] * dv[0] + to_c[1] * dv[1])
        if proj < 0.0:
            t_clamp = 0.0
        elif proj > t_end:
            t_clamp = t_end
        else:
            t_clamp = proj
        closest = origin + t_clamp * dv
        d = float(math.hypot(center[0] - closest[0], center[1] - closest[1]))
        gap = d - radius
        if gap < best:
            best = gap
    return best


def _first_circle_hit(
    origin: np.ndarray,
    direction_angle: float,
    circles: List[Tuple[np.ndarray, float, object]],
    exclude_tag: object | None = None,
) -> Tuple[float, object] | None:
    """Return (t_entry, tag) of the first circle entered along the ray."""
    dv = np.array([math.cos(direction_angle), math.sin(direction_angle)])
    best = None
    for center, radius, tag in circles:
        if tag == exclude_tag:
            continue
        t = _ray_circle_entry(origin, dv, center, radius)
        if t is None:
            continue
        if best is None or t < best[0]:
            best = (t, tag)
    return best


def sample_instance(
    rng: random.Random,
    width: int,
    height: int,
    min_hops: int = 10,
    max_hops: int = 13,
    num_termini: int = 10,
    num_decoys_target: int = 28,
    arrow_radius: float = 22.0,
    terminus_radius: float = 30.0,
    step_length: float = 300.0,
    step_jitter: float = 200.0,
    max_attempts: int = 600,
) -> Dict | None:
    """Build a chain + decoys + termini; return a record dict or None."""
    edge_margin = 110
    min_center_gap = 2 * arrow_radius + 14  # min distance between arrow centres
    term_gap = arrow_radius + terminus_radius + 12
    term_term_gap = 2 * terminus_radius + 60
    # Required clearance (px) between any ray segment [origin → correct next
    # circle's boundary] and every OTHER circle on the canvas. This prevents
    # wrong circles from sitting ambiguously close to a ray that isn't meant
    # to hit them.
    clearance_px = 25.0

    for _ in range(max_attempts):
        # ── 1. Place termini around the canvas (spaced apart) ──
        termini_pts: List[np.ndarray] = []
        t_attempts = 0
        while len(termini_pts) < num_termini and t_attempts < 600:
            t_attempts += 1
            x = rng.uniform(edge_margin, width - edge_margin)
            y = rng.uniform(edge_margin, height - edge_margin)
            p = np.array([x, y])
            if all(float(np.linalg.norm(p - q)) > term_term_gap for q in termini_pts):
                termini_pts.append(p)
        if len(termini_pts) < num_termini:
            continue

        num_hops = rng.randint(min_hops, max_hops)

        # ── 2. Build chain of arrow (pos, direction) entries ──
        start_pos = np.array([
            rng.uniform(edge_margin + 40, width - edge_margin - 40),
            rng.uniform(edge_margin + 40, height - edge_margin - 40),
        ])
        start_dir = rng.uniform(0, 2 * math.pi)
        chain: List[Tuple[np.ndarray, float]] = [(start_pos, start_dir)]

        ok = True
        move_heading = start_dir
        for _hop in range(num_hops - 1):
            cur_pos, _ = chain[-1]

            cx_margin = min(cur_pos[0] - edge_margin, width - edge_margin - cur_pos[0])
            cy_margin = min(cur_pos[1] - edge_margin, height - edge_margin - cur_pos[1])
            bias_strength = 0.0
            bias_dir = 0.0
            if cx_margin < 220 or cy_margin < 220:
                to_center = np.array([width / 2 - cur_pos[0], height / 2 - cur_pos[1]])
                bias_dir = math.atan2(to_center[1], to_center[0])
                bias_strength = max(0.0, 1.0 - min(cx_margin, cy_margin) / 220.0)

            placed = False
            for _retry in range(30):
                turn = rng.uniform(-math.pi / 4, math.pi / 4)
                candidate_dir = move_heading + turn
                if bias_strength > 0:
                    cx = math.cos(candidate_dir) * (1 - 0.5 * bias_strength) + \
                        math.cos(bias_dir) * (0.5 * bias_strength)
                    cy = math.sin(candidate_dir) * (1 - 0.5 * bias_strength) + \
                        math.sin(bias_dir) * (0.5 * bias_strength)
                    candidate_dir = math.atan2(cy, cx)
                dvec = np.array([math.cos(candidate_dir), math.sin(candidate_dir)])
                pvec = np.array([-math.sin(candidate_dir), math.cos(candidate_dir)])
                step_len = step_length + rng.uniform(-step_jitter, step_jitter)
                perp = rng.uniform(-step_jitter, step_jitter)
                next_pos = cur_pos + step_len * dvec + perp * pvec
                if not (edge_margin < next_pos[0] < width - edge_margin and
                        edge_margin < next_pos[1] < height - edge_margin):
                    continue
                if any(float(np.linalg.norm(next_pos - t)) < term_gap for t in termini_pts):
                    continue
                if any(float(np.linalg.norm(next_pos - p)) < min_center_gap for p, _ in chain):
                    continue

                actual_dir = math.atan2(
                    next_pos[1] - cur_pos[1], next_pos[0] - cur_pos[0],
                )
                chain[-1] = (cur_pos, actual_dir)
                chain.append((next_pos, actual_dir))
                move_heading = actual_dir
                placed = True
                break
            if not placed:
                ok = False
                break

        if not ok or len(chain) < num_hops:
            continue

        # ── 3. Pick a terminus the last arrow can point at without blockage ──
        last_pos = chain[-1][0]
        other_arrow_circles = [
            (chain[j][0], arrow_radius, ("A", j)) for j in range(num_hops - 1)
        ]
        terminus_circles = [
            (termini_pts[k], terminus_radius, ("T", k)) for k in range(num_termini)
        ]
        feasible = []
        for k in range(num_termini):
            to_t = termini_pts[k] - last_pos
            dist = float(math.hypot(to_t[0], to_t[1]))
            if dist < arrow_radius + terminus_radius + 10:
                continue
            last_dir_k = math.atan2(to_t[1], to_t[0])
            hit = _first_circle_hit(
                last_pos, last_dir_k,
                other_arrow_circles + terminus_circles,
                exclude_tag=("A", num_hops - 1),
            )
            if hit is not None and hit[1] == ("T", k):
                feasible.append(k)
        if not feasible:
            continue
        chosen_term_idx = rng.choice(feasible)
        last_dir = math.atan2(
            termini_pts[chosen_term_idx][1] - last_pos[1],
            termini_pts[chosen_term_idx][0] - last_pos[0],
        )
        chain[-1] = (last_pos, last_dir)

        # ── 4. Verify chain integrity under the first-hit ray rule ──
        arrow_positions = [c[0] for c in chain]
        all_arrow_circles = [
            (arrow_positions[j], arrow_radius, ("A", j)) for j in range(num_hops)
        ]
        chain_valid = True
        for i in range(num_hops):
            hit = _first_circle_hit(
                arrow_positions[i], chain[i][1],
                all_arrow_circles + terminus_circles,
                exclude_tag=("A", i),
            )
            if hit is None:
                chain_valid = False
                break
            expected = ("T", chosen_term_idx) if i == num_hops - 1 else ("A", i + 1)
            if hit[1] != expected:
                chain_valid = False
                break

            # Clearance check: every OTHER circle must stay ≥ clearance_px
            # away from the ray segment [0, t_correct].
            t_correct = hit[0]
            other_circles = [
                c for c in (all_arrow_circles + terminus_circles)
                if c[2] not in (("A", i), expected)
            ]
            if other_circles:
                gap = _ray_min_gap_to_circles(
                    arrow_positions[i], chain[i][1], t_correct, other_circles,
                )
                if gap < clearance_px:
                    chain_valid = False
                    break
        if not chain_valid:
            continue

        # ── 5. Add decoys that do not break any chain transition ──
        decoys: List[Tuple[np.ndarray, float]] = []
        add_attempts = 0
        while len(decoys) < num_decoys_target and add_attempts < num_decoys_target * 30:
            add_attempts += 1
            dpos = np.array([
                rng.uniform(edge_margin, width - edge_margin),
                rng.uniform(edge_margin, height - edge_margin),
            ])
            if any(float(np.linalg.norm(dpos - p)) < min_center_gap for p in arrow_positions):
                continue
            if any(float(np.linalg.norm(dpos - t)) < term_gap for t in termini_pts):
                continue
            if any(float(np.linalg.norm(dpos - p)) < min_center_gap for p, _ in decoys):
                continue

            # A decoy circle is safe iff, for every chain ray, the ray
            # segment up to the correct next circle's entry stays at least
            # `clearance_px` away from the decoy boundary.
            broken = False
            for i in range(num_hops):
                origin = arrow_positions[i]
                dir_angle = chain[i][1]
                dv_unit = np.array([math.cos(dir_angle), math.sin(dir_angle)])
                if i < num_hops - 1:
                    correct_c = arrow_positions[i + 1]
                    correct_r = arrow_radius
                else:
                    correct_c = termini_pts[chosen_term_idx]
                    correct_r = terminus_radius
                t_correct = _ray_circle_entry(origin, dv_unit, correct_c, correct_r)
                if t_correct is None:
                    broken = True
                    break
                gap = _ray_min_gap_to_circles(
                    origin, dir_angle, t_correct,
                    [(dpos, arrow_radius, ("D", len(decoys)))],
                )
                if gap < clearance_px:
                    broken = True
                    break
            if broken:
                continue

            ddir = rng.uniform(0, 2 * math.pi)
            decoys.append((dpos, ddir))

        if len(decoys) < max(18, num_decoys_target - 8):
            continue

        # ── 6. Assemble record ──
        terminus_labels = list(string.ascii_uppercase[:num_termini])
        rng.shuffle(terminus_labels)
        answer = terminus_labels[chosen_term_idx]

        question = (
            f"The image shows many small footprints scattered across the canvas, "
            f"each footprint enclosed in its own circle, plus {num_termini} "
            f"larger labeled terminus circles "
            f"({', '.join(sorted(terminus_labels))}). The footprint inside the "
            f"GREEN circle is the starting point. From that footprint, follow "
            f"the direction its toes point: cast an infinitely thin ray (a "
            f"mathematical half-line with zero width) from the footprint's "
            f"circle centre along the heel-to-toe pointing direction, and the "
            f"next step is the FIRST other circle this zero-width ray enters. A "
            f"circle only counts if the ray actually crosses its boundary — "
            f"grazing nearby without entering does not count. Continue until "
            f"the ray enters a labeled terminus circle and report its label. "
            f"Answer with a single letter. "
            f"Provide your final answer enclosed in <answer>...</answer> tags."
        )

        return {
            "width": width,
            "height": height,
            "num_hops": num_hops,
            "num_decoys": len(decoys),
            "arrow_radius": arrow_radius,
            "terminus_radius": terminus_radius,
            "chain": [{"x": float(p[0]), "y": float(p[1]), "dir": float(d)}
                      for p, d in chain],
            "decoys": [{"x": float(p[0]), "y": float(p[1]), "dir": float(d)}
                       for p, d in decoys],
            "termini": [{"x": float(t[0]), "y": float(t[1]),
                         "label": terminus_labels[i]}
                        for i, t in enumerate(termini_pts)],
            "chosen_terminus_label": answer,
            "question": question,
            "answer": answer,
        }
    return None


# ── Rendering ──────────────────────────────────────────────────────

def _draw_arrow_in_circle(
    ax,
    cx: float,
    cy: float,
    direction: float,
    arrow_radius: float,
    circle_color: str,
    arrow_color: str,
    zorder: float = 2.0,
    circle_lw: float = 1.2,
    arrow_lw: float = 1.8,
    circle_fill: str = "none",
) -> None:
    """Stamp the foot template at (cx, cy), rotated so the toes point in
    ``direction`` (radians, screen convention: 0=right, π/2=down, -π/2=up).

    The template's natural orientation has toes "up" (data direction -π/2),
    so the rotation needed is direction + π/2 in display coords. scipy's
    ndimage.rotate uses degrees, positive = counter-clockwise in array
    coordinates (y-down). With imshow on inverted-y axes that visually
    matches counter-clockwise on screen, so we negate.
    """
    pool, offsets = _stamp_pool(_STAMP_POOL_NAME)
    # Pick a stamp from the pool. Use a process-stable hash of (cx, cy) so the
    # same cell always gets the same stamp across regenerations (within one
    # difficulty/seed combo).
    pick = int((cx * 91 + cy * 53)) % len(pool)
    template = pool[pick]
    rot_deg = -(math.degrees(direction) + 90.0) + offsets[pick]
    rotated = _ndimage_rotate(
        template, rot_deg, reshape=False, order=1, mode="constant", cval=1.0
    )
    # Match stamp visual radius to arrow_radius (the cell circle radius).
    stamp_radius = arrow_radius
    extent = (cx - stamp_radius, cx + stamp_radius,
              cy + stamp_radius, cy - stamp_radius)
    img_artist = ax.imshow(rotated, extent=extent, zorder=zorder + 0.1,
                           interpolation="bilinear")
    clip = mpatches.Circle((cx, cy), radius=stamp_radius, transform=ax.transData)
    img_artist.set_clip_path(clip)

    # Cell outline: only for the foot pool (fish/key cells get no outline so
    # the stamp is not visually cropped or boxed-in).
    if _STAMP_POOL_NAME == "foot":
        circle = mpatches.Circle(
            (cx, cy), radius=arrow_radius,
            facecolor="none", edgecolor=circle_color,
            linewidth=circle_lw, zorder=zorder + 0.2,
        )
        ax.add_patch(circle)
    elif circle_color != "#6b6b6b":
        # Non-foot pool but a special cell (e.g. green start ring): still
        # draw the outline so the start cell is identifiable.
        circle = mpatches.Circle(
            (cx, cy), radius=arrow_radius,
            facecolor="none", edgecolor=circle_color,
            linewidth=circle_lw, zorder=zorder + 0.2,
        )
        ax.add_patch(circle)


def render_instance(out_path: Path, record: Dict, noise_seed: int) -> None:
    width = int(record["width"])
    height = int(record["height"])
    arrow_radius = float(record["arrow_radius"])
    terminus_radius = float(record["terminus_radius"])

    fig = plt.figure(figsize=(width / 100, height / 100), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.set_xlim(0, width)
    ax.set_ylim(height, 0)
    ax.axis("off")
    ax.set_facecolor("#f3efe8")

    nrng = np.random.default_rng(noise_seed)
    noise = nrng.normal(0.0, 1.0, size=(height, width))
    noise = (noise - noise.min()) / max(noise.max() - noise.min(), 1e-6)
    ax.imshow(noise, cmap="Greys", alpha=0.06, extent=(0, width, height, 0),
              interpolation="bilinear")

    for d in record["decoys"]:
        _draw_arrow_in_circle(
            ax, d["x"], d["y"], d["dir"], arrow_radius,
            circle_color="#6b6b6b", arrow_color=ARROW_COLOR,
            zorder=2.0, circle_lw=1.1, arrow_lw=1.7,
        )

    for i, c in enumerate(record["chain"]):
        if i == 0:
            # Start cell: foot stamp inside a thicker GREEN circle (no "S" label).
            _draw_arrow_in_circle(
                ax, c["x"], c["y"], c["dir"], arrow_radius,
                circle_color=START_COLOR, arrow_color=START_COLOR,
                zorder=3.6, circle_lw=2.6, arrow_lw=2.4,
            )
        else:
            _draw_arrow_in_circle(
                ax, c["x"], c["y"], c["dir"], arrow_radius,
                circle_color="#6b6b6b", arrow_color=ARROW_COLOR,
                zorder=2.3, circle_lw=1.1, arrow_lw=1.8,
            )

    for i, t in enumerate(record["termini"]):
        color = TERMINUS_COLORS[i % len(TERMINUS_COLORS)]
        circle = mpatches.Circle(
            (t["x"], t["y"]), radius=terminus_radius,
            facecolor=color, edgecolor="white", linewidth=2.0, zorder=5.0,
        )
        ax.add_patch(circle)
        ax.text(t["x"], t["y"], t["label"],
                fontsize=20, fontweight="bold", color="white",
                ha="center", va="center", zorder=5.5)

    fig.savefig(out_path, dpi=100, bbox_inches="tight", pad_inches=0)
    plt.close(fig)


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--output-root", type=Path, required=True)
    parser.add_argument("--count", type=int, default=20)
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--width", type=int, default=512)
    parser.add_argument("--height", type=int, default=512)
    parser.add_argument("--difficulty", type=int, default=5,
                        help="Integer difficulty >=0; scales termini/hops/decoys.")
    args = parser.parse_args()

    def _canvas_scale(n_d, n_0):
        return math.sqrt(max(1.0, n_d / n_0))

    d = max(0, int(args.difficulty))
    N_d = 15 + 8 * d
    N_0 = 15
    s = _canvas_scale(N_d, N_0)
    args.width = int(round(args.width * s))
    args.height = int(round(args.height * s))

    out_root = args.output_root
    img_dir = out_root / "images"
    img_dir.mkdir(parents=True, exist_ok=True)

    rng = random.Random(args.seed)
    records = []

    _num_termini = min(12, 5 + d)
    _min_hops = 5 + 2 * d
_max_hops = 5 + 2 * d + 2
|
| 655 |
+
_num_decoys_target = 10 + 6 * d
|
| 656 |
+
|
| 657 |
+
pbar = tqdm(range(args.count), desc="Generating", unit="img")
|
| 658 |
+
for idx in pbar:
|
| 659 |
+
record = sample_instance(
|
| 660 |
+
rng, args.width, args.height,
|
| 661 |
+
min_hops=_min_hops, max_hops=_max_hops,
|
| 662 |
+
num_termini=_num_termini,
|
| 663 |
+
num_decoys_target=_num_decoys_target,
|
| 664 |
+
step_length=200.0,
|
| 665 |
+
step_jitter=100.0, # ray step range = [100, 300]
|
| 666 |
+
arrow_radius=26.0,
|
| 667 |
+
terminus_radius=26.0,
|
| 668 |
+
)
|
| 669 |
+
if record is None:
|
| 670 |
+
pbar.set_postfix(status="FAILED")
|
| 671 |
+
continue
|
| 672 |
+
|
| 673 |
+
name = f"arrow_chain_{idx:05d}.png"
|
| 674 |
+
ns = rng.randint(0, 10 ** 9)
|
| 675 |
+
render_instance(img_dir / name, record, noise_seed=ns)
|
| 676 |
+
|
| 677 |
+
record["image"] = f"images/{name}"
|
| 678 |
+
records.append(record)
|
| 679 |
+
pbar.set_postfix(ok=len(records), hops=record["num_hops"],
|
| 680 |
+
decoys=record["num_decoys"], ans=record["answer"])
|
| 681 |
+
|
| 682 |
+
with (out_root / "annotations.jsonl").open("w") as fh:
|
| 683 |
+
for r in records:
|
| 684 |
+
fh.write(json.dumps(r) + "\n")
|
| 685 |
+
|
| 686 |
+
data_json = {
|
| 687 |
+
"task": "arrow_chain",
|
| 688 |
+
"category": "sequential_traversal",
|
| 689 |
+
"count": len(records),
|
| 690 |
+
"items": [
|
| 691 |
+
{"image": r["image"], "question": r["question"], "answer": r["answer"]}
|
| 692 |
+
for r in records
|
| 693 |
+
],
|
| 694 |
+
}
|
| 695 |
+
(out_root / "data.json").write_text(json.dumps(data_json, indent=2))
|
| 696 |
+
print(f"Saved {len(records)} items to {out_root}")
|
| 697 |
+
|
| 698 |
+
|
| 699 |
+
if __name__ == "__main__":
|
| 700 |
+
main()
|
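The difficulty handling in `main` keeps element density roughly constant: the element count grows linearly with difficulty (`N_d = 15 + 8*d`), and each canvas side grows by the square root of the count ratio, so canvas area tracks the element count. A minimal standalone sketch of that arithmetic (the helper name `scaled_canvas` is illustrative, not from the script):

```python
import math

def scaled_canvas(base: int, difficulty: int) -> int:
    # N_d = 15 + 8*d elements at difficulty d, N_0 = 15 at the baseline.
    # Area should scale with the element count, so each side scales
    # by sqrt(N_d / N_0); clamp below 1.0 so the canvas never shrinks.
    n_d, n_0 = 15 + 8 * difficulty, 15
    return int(round(base * math.sqrt(max(1.0, n_d / n_0))))
```

At the default difficulty of 5 this turns a 512-pixel side into roughly 980 pixels, since sqrt(55/15) is about 1.91.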
code/sequential_traversal/arrow_chain/data.json ADDED
@@ -0,0 +1,32 @@
{
  "task": "arrow_chain",
  "category": "sequential_traversal",
  "count": 5,
  "items": [
    {
      "image": "images/arrow_chain_00000.png",
      "question": "The image shows many small pieces scattered across the canvas, each clearly oriented in a specific direction, plus several larger labeled terminus markers (each marker bears a distinct letter). The piece highlighted by a green ring is the starting point. From that piece, cast an infinitely thin ray (a mathematical half-line with zero width) from its centre along the direction it points; the next step is the FIRST other marker this ray enters. A marker only counts if the ray actually crosses its boundary \u2014 grazing nearby without entering does not count. Continue until the ray enters a labeled terminus marker and report its letter. Answer with a single letter. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": "H"
    },
    {
      "image": "images/arrow_chain_00001.png",
      "question": "The image shows many small pieces scattered across the canvas, each clearly oriented in a specific direction, plus several larger labeled terminus markers (each marker bears a distinct letter). The piece highlighted by a green ring is the starting point. From that piece, cast an infinitely thin ray (a mathematical half-line with zero width) from its centre along the direction it points; the next step is the FIRST other marker this ray enters. A marker only counts if the ray actually crosses its boundary \u2014 grazing nearby without entering does not count. Continue until the ray enters a labeled terminus marker and report its letter. Answer with a single letter. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": "B"
    },
    {
      "image": "images/arrow_chain_00002.png",
      "question": "The image shows many small pieces scattered across the canvas, each clearly oriented in a specific direction, plus several larger labeled terminus markers (each marker bears a distinct letter). The piece highlighted by a green ring is the starting point. From that piece, cast an infinitely thin ray (a mathematical half-line with zero width) from its centre along the direction it points; the next step is the FIRST other marker this ray enters. A marker only counts if the ray actually crosses its boundary \u2014 grazing nearby without entering does not count. Continue until the ray enters a labeled terminus marker and report its letter. Answer with a single letter. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": "B"
    },
    {
      "image": "images/arrow_chain_00003.png",
      "question": "The image shows many small pieces scattered across the canvas, each clearly oriented in a specific direction, plus several larger labeled terminus markers (each marker bears a distinct letter). The piece highlighted by a green ring is the starting point. From that piece, cast an infinitely thin ray (a mathematical half-line with zero width) from its centre along the direction it points; the next step is the FIRST other marker this ray enters. A marker only counts if the ray actually crosses its boundary \u2014 grazing nearby without entering does not count. Continue until the ray enters a labeled terminus marker and report its letter. Answer with a single letter. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": "J"
    },
    {
      "image": "images/arrow_chain_00004.png",
      "question": "The image shows many small pieces scattered across the canvas, each clearly oriented in a specific direction, plus several larger labeled terminus markers (each marker bears a distinct letter). The piece highlighted by a green ring is the starting point. From that piece, cast an infinitely thin ray (a mathematical half-line with zero width) from its centre along the direction it points; the next step is the FIRST other marker this ray enters. A marker only counts if the ray actually crosses its boundary \u2014 grazing nearby without entering does not count. Continue until the ray enters a labeled terminus marker and report its letter. Answer with a single letter. Provide your final answer enclosed in <answer>...</answer> tags.",
      "answer": "C"
    }
  ]
}
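The question text above pins down a first-hit ray-casting rule: from the current marker's centre, follow a zero-width ray along its direction and take the nearest other circle whose boundary the ray actually crosses; grazing does not count. A minimal standalone sketch of that geometry, independent of the generator code (`first_hit` and the `(cx, cy, r)` marker tuples are illustrative names, not from the script):

```python
import math
from typing import List, Optional, Tuple

def first_hit(origin: Tuple[float, float], angle: float,
              markers: List[Tuple[float, float, float]]) -> Optional[int]:
    """Index of the first circle (cx, cy, r) the ray enters, or None.

    The ray starts at `origin` and travels along `angle` (radians).
    A circle counts only if the ray truly crosses its boundary: the
    perpendicular distance from its centre to the ray must be strictly
    less than the radius, and the entry point must lie ahead of origin.
    """
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    best_t, best_i = math.inf, None
    for i, (cx, cy, r) in enumerate(markers):
        # Distance along the ray to the closest approach to the centre.
        t = (cx - ox) * dx + (cy - oy) * dy
        if t <= 0:
            continue  # circle lies behind the start point
        # Squared perpendicular distance from the centre to the ray line.
        px, py = ox + t * dx - cx, oy + t * dy - cy
        d2 = px * px + py * py
        if d2 >= r * r:
            continue  # grazing or missing entirely does not count
        # Distance to the first boundary crossing; keep the nearest hit.
        t_enter = t - math.sqrt(r * r - d2)
        if 0 < t_enter < best_t:
            best_t, best_i = t_enter, i
    return best_i
```

Repeating this lookup from each hit marker, in the direction that marker points, until a terminus circle is entered reproduces the traversal the question describes.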
code/sequential_traversal/arrow_chain/fish_stamps/00.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/fish_stamps/01.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/fish_stamps/02.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/fish_stamps/03.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/fish_stamps/04.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/fish_stamps/05.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/fish_stamps/offsets.json ADDED
@@ -0,0 +1,8 @@
{
  "02": -1,
  "01": -1,
  "05": -3,
  "00": 0,
  "03": 1,
  "04": -1
}
code/sequential_traversal/arrow_chain/fish_template_grid.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/00.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/01.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/02.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/03.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/04.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/05.png ADDED (Git LFS)
code/sequential_traversal/arrow_chain/key_stamps/offsets.json ADDED
@@ -0,0 +1,6 @@
{
  "02": 0,
  "01": 0,
  "00": 0,
  "05": 0
}