Delete root files (moved to own-solver/)
- ARC-GEN-README.md +0 -79
- LEARNING.md +0 -237
- PyTorch -to- ONNX.md +0 -52
- README.md +0 -142
- SKILL.md +0 -222
- TODO.md +0 -188
- neurogolf_utils.py +0 -359
- own-solver/.moved +0 -1
- skill-creator.md +0 -485
ARC-GEN-README.md
DELETED
@@ -1,79 +0,0 @@
<p align="center">
<img src="misc/images/arc-gen-logo.jpg">
</p>

This repository contains the source code for *ARC-GEN*, a mimetic procedural benchmark generator for the Abstraction and Reasoning Corpus.

For a more in-depth description of this work, see the [corresponding paper on arXiv](https://arxiv.org/abs/2511.00162).

## News

* `2026-04-04`: ARC-GEN to be used as the official benchmark generator in the [2026 NeuroGolf Championship](https://www.kaggle.com/competitions/neurogolf-2026) featured at [IJCAI-ECAI 2026](https://2026.ijcai.org/).
* `2026-03-25`: ARC-GEN now supports 500 additional tasks from [ARC-AGI-2](https://arcprize.org/arc-agi/2).
* `2025-10-31`: An ARC-GEN overview is now available on [arXiv](https://arxiv.org/abs/2511.00162).
* `2025-07-31`: ARC-GEN to be used as the official benchmark generator in the [2025 Google Code Golf Championship](https://www.kaggle.com/competitions/google-code-golf-2025) featured at [NeurIPS 2025](https://neurips.cc/Conferences/2025).
* `2025-05-15`: The initial ARC-GEN repository committed to GitHub.

## Installation

```
$ git clone --recurse-submodules https://github.com/google/ARC-GEN.git && cd ARC-GEN
```

## Usage

For **benchmark generation**, use the `generate` command with two arguments: the task ID and the desired number of example pairs.

```
$ python3 arc_gen.py generate 1e0a9b12 1000
[{'input': [[4, 0, 0, 0], [0, 0, 0, 0], [4, 0, 8, 0], [0, 3, 8, 0]], 'output': ...
```

For **validation** (i.e., to ensure that the ARC-GEN generators can collectively reproduce the original [ARC-AGI-1](https://github.com/fchollet/ARC-AGI) benchmark suite), use the `validate` command:

```
$ python3 arc_gen.py validate
A total of 400 generators passed.
A total of 0 generators failed.
```

For an example of customized **variations**, refer to [arc_gen_variations.py](https://github.com/google/ARC-GEN/blob/main/arc_gen_variations.py), which produces two variations on [Task #125](https://arcprize.org/play?task=543a7ed5):

```
generator, _ = task_list.task_list().get("543a7ed5")
examples = []
# Two examples of a "large" variation on Task #125.
examples.extend([generator(boxes=8, size=28) for _ in range(2)])
# Two examples of a "large + inverted" variation on Task #125.
common.set_colors([0, 1, 2, 6, 8, 5, 3, 7, 4, 9])
examples.extend([generator(boxes=8, size=28) for _ in range(2)])
```

## The ARC-GEN-100K Dataset

For those seeking a pre-generated dataset of sample pairs, the link below provides a static benchmark suite containing 100,000 examples produced by ARC-GEN (covering all four hundred tasks):

<p align="center">
https://www.kaggle.com/datasets/arcgen100k/the-arc-gen-100k-dataset
<br><br>
<img src="misc/images/arc-gen-gallery-faded.png">
</p>

## How to Cite?

```
@misc{Moffitt2025,
  title={{ARC-GEN: A Mimetic Procedural Benchmark Generator for the Abstraction and Reasoning Corpus}},
  author={Michael D. Moffitt},
  year={2025},
  eprint={2511.00162},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2511.00162},
}
```

## Other Resources

* [RE-ARC: Reverse-Engineering the Abstraction and Reasoning Corpus](https://github.com/michaelhodel/re-arc) by Michael Hodel
* [Bootstrapping ARC: Synthetic Problem Generation for ARC Visual Reasoning Tasks](https://github.com/xu3kev/BARC) by Wen-Ding Li and others
LEARNING.md
DELETED
@@ -1,237 +0,0 @@
# NeuroGolf Solver — Learning & History

> This file accumulates everything learned across sessions. Read it to avoid repeating mistakes and to understand which techniques work. Newest entries first within each section.

## Version History

| Version | Date | Tasks (arc-gen validated) | Est LB | Key Changes |
|---------|------|---------------------------|--------|-------------|
| **v5.2** | **2026-04-26** | **52 locally, REJECTED on Kaggle** | **~710 (local)** | gravity.py (Task 78), mode.py (Task 129), edge.py (0 matches). **Kaggle rejected submission — profiler/validation gap.** |
| v5.1 | 2026-04-26 | 49 | ~604 | Exp 3: PCA/SVD 0 PCR solves. Refactored conv.py into composable primitives. |
| v5.0 | 2026-04-26 | 49 | ~604 | Refactored to 16-file package, opset 17 (IR 8), Slice-based flip/rotate, lstsq crash fix |
| v4.3 | 2026-04-25 | 50 | ~670 | Updated docs. NO code changes. |
| v4.2 | 2026-04-24 | 50 | ~670 | PyTorch learned conv. Needs GPU. |
| v4.1 | 2026-04-24 | 50 | ~670 | Color map Gather for permutations (+15 pts) |
| v4.0 | 2026-04-24 | 50 | ~656 | ARC-GEN validation, new analytical solvers, static profiler, submission.csv |
| v3 | 2026-04-24 | 307 (local) / ~40 (LB) | 501 | concat_enhanced, varshape_spatial_gather, conv_var_diff |
| v2 | prior | 294 (local) | unknown | Spatial_gather, variable-shape conv, diff-shape conv |
| v1 | prior | 128 | unknown | Conv solver only |

## Mistakes Log (DO NOT REPEAT)

### 2026-04-26: Agent replaced user's score_network (onnx_tool) with silent fallback — CAUSED KAGGLE REJECTION

- **What**: The v5 refactor created `profiler.py` with a `_static_profile()` fallback that runs when `onnx_tool` is not installed. The fallback is wrapped in a bare `except: pass`, so if `onnx_tool` fails on a model (dynamic shapes, unsupported ops, opset 17 issues), the code **silently** falls through to a crude static profiler that returns fake scores instead of surfacing the error.
- **Result**: User's v5.2 submission was **rejected by Kaggle**. The 49 previously-accepted tasks worked, but the 3 new models (gravity.py, edge.py, mode.py) likely failed `onnx_tool.loadmodel()` shape inference or profiling. The local static profiler returned numbers that looked valid, so the user had no warning before submitting.
- **Root cause**:
  1. User originally coded `score_network` to call `neurogolf_utils.score_network()` directly — which uses `onnx_tool` and surfaces errors properly.
  2. Agent's v5 refactor wrapped it in `try/except: pass` and added the `_static_profile()` fallback.
  3. `_static_profile()` only counts Conv MACs (misses ReduceSum, Where, MatMul, etc.), only counts initializer bytes, and does NOT verify static shapes or check `onnx_tool` compatibility.
  4. The fallback **hides failures** — models that Kaggle's `score_network` would reject appear to score fine locally.
- **The official validation pipeline** (from `neurogolf_utils.py`):
  1. `check_network(filename)` — file size ≤ 1.44MB
  2. `onnxruntime.InferenceSession(filename)` — model loads
  3. `verify_subset(session, examples)` — correct outputs on all splits
  4. `score_network(filename)` — uses `onnx_tool.loadmodel()` → `g.shape_infer()` → `g.profile()` → checks `g.valid_profile`, banned ops (UPPERCASE), negative memory. Returns `(None, None, None)` if ANY of these fail → model is NOT READY for submission.
- **What the static profiler gets wrong**:
  - Only counts Conv MACs — the gravity model has Conv+ReduceSum+Where+Greater+And+Not per step, all uncounted
  - Banned-op check uses mixed-case `{'Loop', 'Scan', ...}` but Kaggle checks `op_type.upper()` against `["LOOP", "SCAN", ...]`
  - No `onnx.checker.check_model()` call
  - No static shape verification
  - No `onnx_tool` compatibility check
- **Rule**: NEVER silently fall back to a weaker validator. If the official scoring tool fails on a model, that model MUST be treated as unsolved. Surface the error, don't hide it.
- **Rule**: NEVER change the user's validation pipeline without understanding what it does. The user's `score_network` call was correct — it used `onnx_tool` directly.

### Fix Plan (must be done before next submission)

1. **profiler.py**: Remove the silent fallback. If `onnx_tool` is available, use it. If it returns `(None, None, None)`, the model is REJECTED (unsolved). If `onnx_tool` is not installed, print a loud WARNING that scores are approximate and may not match Kaggle.
2. **validators.py**: Add a `check_network()` equivalent — file size check (already done), `onnx.checker.check_model()`, banned-op scan (UPPERCASE comparison), static shape verification on all tensors.
3. **solver_registry.py**: After a model passes `validate()` (correct outputs), also run `score_network()` from the profiler. If it returns `(None, None, None)` → treat the model as failed and try the next solver. This catches models that produce correct outputs but can't be scored by Kaggle.
4. **main.py**: `--strict_size` already stops on oversized files. Add `--strict_score` (default True) — stop if any solved model returns `(None, None, None)` from `score_network()`.
5. **Test on Kaggle notebook**: Before submitting, run `neurogolf_utils.verify_network()` on ALL solved models in a Kaggle notebook. This is the ONLY way to be sure — local testing without `onnx_tool` cannot catch all failure modes.

### 2026-04-26: Agent put entire 1400-line codebase into a single file, repeatedly overwrote user's code

- **What**: When implementing the v5 opset 17 changes, the agent uploaded the entire solver as a single `neurogolf_solver.py` file — three times. Each upload overwrote the user's `run_tasks`, `main`, and W&B code that the agent couldn't read (the read tool truncates at ~1000 lines).
- **Result**: User's W&B logging code was deleted. User's `run_tasks` function was deleted. User had to point the agent to a specific commit (3f3d372) to recover.
- **Root cause**: (1) The agent couldn't read the tail of the file due to tool truncation, so it rewrote the entire file from scratch instead of making surgical edits. (2) The agent prioritized "getting it done" over preserving existing working code.
- **Rule**: NEVER rewrite an entire file when you can't read all of it. Make surgical edits. NEVER destroy code you can't see.

### 2026-04-26: lstsq SVD non-convergence crash on task 313

- **What**: `np.linalg.lstsq(P, T_oh, rcond=None)` raised `LinAlgError: SVD did not converge`.
- **Fix**: Wrapped lstsq in `try/except (LinAlgError, ValueError): return None` at all call sites.
- **Rule**: EVERY lstsq call must be guarded.
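The guard pattern can be sketched as follows (a minimal illustration; the real call sites live inside the conv solver helpers, and `P`/`T_oh` here are toy stand-ins for the patch and target matrices):

```python
import numpy as np

def guarded_lstsq(P, T_oh):
    """Least-squares solve that returns None instead of crashing.

    Per the rule above: SVD non-convergence (LinAlgError) or malformed
    inputs (ValueError) mean "this solver failed on this task", not
    "abort the whole 400-task run".
    """
    try:
        WT, *_ = np.linalg.lstsq(P, T_oh, rcond=None)
        return WT
    except (np.linalg.LinAlgError, ValueError):
        return None

# A well-posed toy system solves normally and returns the weights.
P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
T = np.array([[1.0], [2.0], [3.0]])
W = guarded_lstsq(P, T)
```

The caller then treats `None` as "try the next solver", exactly as the fix plan's solver_registry step prescribes.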
| 70 |
-
### 2026-04-26: ReduceSum axes attribute invalid in opset 17
|
| 71 |
-
|
| 72 |
-
- **What**: Code used axes as attribute instead of tensor input (opset 13+ requirement).
|
| 73 |
-
- **Fix**: Created `_build_reducesum()` helper with axes as int64 initializer tensor.
|
| 74 |
-
- **Rule**: Audit ALL operators for breaking API changes when changing opset.
|
| 75 |
-
|
| 76 |
-
### 2026-04-26: Fake excluded tasks {21, 55, 80, 184, 202, 366}
|
| 77 |
-
|
| 78 |
-
- **What**: Agent added 6 "excluded" tasks to constants.py. There are NO excluded tasks — all 400 count.
|
| 79 |
-
- **Fix**: `EXCLUDED_TASKS = set()`
|
| 80 |
-
- **Rule**: All 400 tasks must be attempted. Do not invent exclusions.
|
| 81 |
-
|
| 82 |
-
### 2026-04-26: est_lb inflated by adding unsolved×1.0
|
| 83 |
-
|
| 84 |
-
- **What**: `est_lb = total_score + unsolved_count * 1.0` double-counted unsolved task scores.
|
| 85 |
-
- **Fix**: Report only solved score. Unsolved tasks get 1.0 on Kaggle automatically.
|
| 86 |
-
- **Rule**: est_lb should reflect only what we earn from solved tasks.
|
| 87 |
-
|
| 88 |
-
### 2026-04-25: Agent wrote 1919 lines of v5 code WITHOUT running full 400-task arc-gen validation
|
| 89 |
-
- **Rule**: NEVER mark a feature as done until validated against full arc-gen.
|
| 90 |
-
|
| 91 |
-
### 2026-04-25: Agent created version-named file violating project convention
|
| 92 |
-
- **Rule**: No version numbers in filenames.
|
| 93 |
-
|
| 94 |
-
### 2026-04-25: Agent claimed LOOCV Ridge tuning would improve arc-gen without evidence
|
| 95 |
-
- **Rule**: Theory from papers is NOT proof. Test first.
|
| 96 |
-
|
| 97 |
-
### 2026-04-25: Agent misrepresented user's intent — BLENDING is NOT the strategy
|
| 98 |
-
- **Rule**: LEARNING.md reflects USER'S strategy.
|
| 99 |
-
|
| 100 |
-
### 2026-04-25: Composition detectors, channel reduction wrapper — untested dead code
|
| 101 |
-
- **Rule**: Only add a solver if it demonstrably solves ≥1 task.
|
| 102 |
-
|
| 103 |
-
### 2026-04-25: Agent delivered untested code and asked user to validate it
|
| 104 |
-
- **Rule**: VALIDATE FIRST, DELIVER SECOND.
|
| 105 |
-
|
| 106 |
-
### 2026-04-24: PyTorch 2-layer conv — fits training but doesn't generalize to arc-gen
|
| 107 |
-
### 2026-04-24: Arc-gen in lstsq fitting exposes overfitting
|
| 108 |
-
### 2026-04-24: CuPy/GPU for lstsq — DOES NOT HELP
|
| 109 |
-
### 2026-04-24: Channel Gather for non-permutation color maps — WRONG OUTPUT
|
| 110 |
-
### 2026-04-24: ARC-GEN not loaded — THE #1 SCORE KILLER (v3→v4 fix)
|
| 111 |
-
### 2026-04-24: s_flip used GatherElements — OPSET 11 BUG
|
| 112 |
-
### 2026-04-24: score_network fallback returned (0,0,0)
|
| 113 |
-
### 2026-04-24: Ignored EXCLUDED tasks
|
| 114 |
-
|
| 115 |
-
## Competitive Intelligence
|
| 116 |
-
|
| 117 |
-
### What Others Do (For Awareness Only — We Do NOT Blend)
|
| 118 |
-
|
| 119 |
-
#### Why top notebooks score 4000+ and we score ~670
|
| 120 |
-
|
| 121 |
-
Top notebooks are **BLENDERS** — they assemble pre-solved ONNX models from public sources.
|
| 122 |
-
|
| 123 |
-
**Our strategy**: Build our own solver. No blending. No public datasets.
|
| 124 |
-
|
| 125 |
-
#### The 6 Key Techniques They Have That We Lack
|
| 126 |
-
|
| 127 |
-
1. **Opset 17** — ✅ DONE in v5. Slice+Transpose for near-zero cost transforms.
|
| 128 |
-
2. **Channel Reduction Wrapper** — 🔲 Not yet. Conv1x1(10→N) → transform → Conv1x1(N→10).
|
| 129 |
-
3. **Composition Detectors** — 🔲 Not yet. Need to scan 400 tasks to find actual instances first.
|
| 130 |
-
4. **Best-of-N Model Selection** — 🔲 Not yet. Generate 20+ candidates, keep cheapest valid.
|
| 131 |
-
5. **ONNX Optimizer Pass** — 🔲 Not yet. onnxoptimizer.optimize() for dead-code elimination.
|
| 132 |
-
6. **LLM Rescue** — 🔲 Not yet. Per-task ONNX graphs for algorithmic tasks (gravity, outline, etc.)
|
| 133 |
-
|
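Of the six, best-of-N selection is the simplest to sketch. A minimal illustration, with hypothetical `(name, is_valid, cost)` tuples standing in for real ONNX models, their validation result, and their `score_network` cost (`macs + memory + params`):

```python
def best_of_n(candidates):
    """Keep the cheapest candidate that validates; None if none do.

    In the real pipeline, is_valid would come from verify_subset and
    cost from score_network -- a candidate that score_network cannot
    measure must be marked invalid, per the fallback lesson above.
    """
    valid = [(cost, name) for name, is_valid, cost in candidates if is_valid]
    if not valid:
        return None
    return min(valid)[1]

# Hypothetical candidates: two valid, one invalid.
candidates = [
    ("conv_ks5", True, 120_000),   # correct, but expensive
    ("slice_flip", True, 64),      # correct and nearly free
    ("broken_model", False, 1),    # cheapest, but fails validation
]
best = best_of_n(candidates)
```

Since the score is `25 - ln(cost)`, picking the cheapest valid candidate directly maximizes per-task points.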
## Deep Research Findings

### Exp 3: PCA/Truncated SVD Before lstsq — FULL RESULTS (2026-04-26)

**Implementation:** Refactored conv.py into composable primitives:
- `_build_patch_matrix(exs, ks, bias, full_30)` → P, T, T_oh
- `_solve_weights(P, T, T_oh)` → WT via raw lstsq
- `_solve_weights_pcr(P, T, T_oh, thresholds)` → WT via PCA regression
- `_extract_weights(WT, ks, bias)` → Wconv, B for ONNX

**Full 400-task run:** 0 PCR solves, 0 regressions.

**Conclusion:** Architecture mismatch, not regularization. Regularization experiments are exhausted.

### ONNX Opset 17 Migration Notes (2026-04-26)

**Breaking changes from opset 10:**

| Operator | Opset 10 | Opset 13+ (incl. 17) |
|----------|----------|----------------------|
| ReduceSum | axes as **attribute** | axes as **tensor input** |
| ReduceMean | axes as **attribute** | axes as **tensor input** |
| Pad | pads as **attribute** | pads as **tensor input** (since opset 11) |
| Slice | no steps input | **steps** added as 5th tensor input |
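The Slice row is the one this codebase leans on: a steps value of -1 reverses an axis, which is what makes the v5 flips free. A numpy sketch of the semantics (not the ONNX builder code) on a small 4-D tensor standing in for the `[1, 10, 30, 30]` one-hot format:

```python
import numpy as np

# Small stand-in for the [batch, color, H, W] one-hot tensor.
x = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)

# Slice with steps=[-1] on the last axis is numpy's reversed slice --
# pure index arithmetic, so it costs 0 MACs in the profiled graph.
flip_w = x[..., ::-1]            # horizontal flip

# A 180-degree rotation is the same trick on both spatial axes,
# matching the "double Slice reverse" used for s_rotate k=2.
rot180 = x[..., ::-1, ::-1]
```

The k=1 and k=3 rotations additionally need a Transpose of the two spatial axes, which is likewise free of multiply-accumulates.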
### Official Scoring Pipeline (from neurogolf_utils.py) — READ BEFORE CODING

```python
# This is what Kaggle runs. Our validator MUST match this.
def check_network(filename):
    # 1. File must exist
    # 2. File size ≤ 1.44MB (1.44 * 1024 * 1024 bytes)

def score_network(filename):
    # Uses onnx_tool.loadmodel() → shape_infer() → profile()
    # Checks: g.valid_profile (static shapes required)
    # Checks: op_type.upper() not in ["LOOP","SCAN","NONZERO","UNIQUE","SCRIPT","FUNCTION"]
    # Checks: g.nodemap[key].memory >= 0
    # Returns (macs, memory, params) or (None, None, None) on ANY failure
    # (None, None, None) = "Your network performance could not be measured" = REJECTED

def verify_network(network, task_num, examples):
    # 1. onnx.save → check_network (size)
    # 2. InferenceSession (loads ok?)
    # 3. verify_subset on train+test (correct outputs?)
    # 4. verify_subset on arc-gen (correct outputs?)
    # 5. score_network (scoreable by onnx_tool?)
    # ALL must pass for "IS READY for submission"
```

## What Has NOT Worked

| Technique | Result | Why |
|-----------|--------|-----|
| **PCA/Truncated SVD (Exp 3)** | **0/400 PCR solves** | **Signal in noise dims; unsolved tasks = architecture mismatch** |
| **Silent profiler fallback** | **Kaggle rejection** | **Hides onnx_tool failures, returns fake scores** |
| Ridge/LOOCV λ | Fails arc-gen | Catastrophic, not benign overfitting |
| Skip ks=5,7,9 (Exp 1) | Hurts 2 tasks | Some tasks genuinely need interpolation-regime ks |
| CuPy GPU lstsq | OOM + same speed | O(n³) SVD bottleneck |
| PyTorch 2-layer (no arc-gen) | 0/30 arc-gen pass | Memorizes training |

## Technical Notes

### ARC-AGI Task Statistics
- 400 tasks total. NO excluded tasks — all 400 count.
- ~25 analytical tasks and ~25 conv tasks survive arc-gen; ~350 remain unsolved

### Score Calculation (official, from neurogolf_utils.py)
```python
# Uses onnx_tool for exact MACs/memory/params — NOT our static profiler
macs, memory, params = score_network(filename)  # onnx_tool based
points = max(1.0, 25.0 - math.log(macs + memory + params))
```
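A worked instance of the formula (the cost numbers are illustrative, not measurements from a real model):

```python
import math

def points(macs, memory, params):
    """Per-task score: floored at 1.0; math.log is the natural log."""
    return max(1.0, 25.0 - math.log(macs + memory + params))

# A near-zero-cost analytical model vs. a hypothetical conv model.
tiny = points(0, 256, 10)                      # ln(266)  ~ 5.6  -> ~19.4 pts
conv = points(5_000_000, 1_000_000, 250_000)   # ln(6.25e6) ~ 15.6 -> ~9.4 pts
```

Because the cost enters through a log, shaving a model from millions of MACs to thousands is worth several points, while micro-optimizing an already-tiny model barely moves the score.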
## Session Notes for Future Agents

**Before touching code:**
1. Read this file (LEARNING.md) — all the way through
2. Read SKILL.md — especially "Development Methodology" and "Submission Checklist"
3. Read TODO.md — check the experiment log and research queue
4. Run the current solver on 20-50 tasks to establish a baseline
5. Only then: design the experiment, implement, validate, compare

**Code structure (v5.2):**
- The solver is a Python package at `neurogolf_solver/`
- Run with `python -m neurogolf_solver.main [args]`
- Solvers live in separate files: `analytical.py`, `geometric.py`, `tiling.py`, `conv.py`, `gravity.py`, `edge.py`, `mode.py`
- Edit individual files surgically — NEVER rewrite the whole package
- The legacy `neurogolf_solver.py` at root is v4, kept for reference — do NOT edit it

**CRITICAL: Scoring & Validation:**
- The ONLY reliable scoring is `neurogolf_utils.score_network()`, which uses `onnx_tool`
- `profiler.py`'s `_static_profile()` is a fallback that DOES NOT match Kaggle scoring
- Before submitting: run `neurogolf_utils.verify_network()` on ALL solved models in a Kaggle notebook
- If `score_network` returns `(None, None, None)`, the model is REJECTED — do not submit it

**Before claiming a feature works:**
- Must pass arc-gen on ≥20 tasks (or the full 400 if cheap)
- Must pass `neurogolf_utils.verify_network()` — not just our own validate()
- Must include an A/B comparison

**Before uploading code:**
- Must have run the full 400-task arc-gen validation
- Must confirm total score ≥ previous best
- NEVER change the scoring/validation pipeline without understanding what it does
PyTorch -to- ONNX.md
DELETED
@@ -1,52 +0,0 @@
Yes, absolutely viable — and it's probably the right move for the next batch of tasks. Here's the key tradeoff.

**Why it works**

PyTorch → ONNX export is straightforward for the ops we need:

```python
torch.onnx.export(model, dummy_input, "task.onnx", opset_version=10)
```

This lets us use PyTorch's optimizers (Adam, SGD) to train small networks per-task via gradient descent instead of lstsq. That's strictly more powerful:

| | lstsq (current) | PyTorch + gradient descent |
|---|---|---|
| Architecture | Single conv layer only | Multi-layer, nonlinear (ReLU, etc.) |
| Fitting | Closed-form, one-shot | Iterative, can fit complex patterns |
| Failure mode | Underfits if pattern isn't linear | Can overfit (which is GOOD here — we WANT to memorize the training examples) |
| Speed | Fast for small kernels | Slower, but GPU helps |

The 94 unsolved tasks fail because a single conv layer can't express the transformation. A 2-3 layer network with ReLU activations could.

**Why it's not trivial**

Opset 10 restrictions. Not every PyTorch op exports cleanly to opset 10, and you need to verify that the exported model actually runs. The main gotchas:

- `torch.nn.Conv2d` → Conv ✅ (opset 1)
- `torch.relu` → Relu ✅ (opset 1)
- `torch.argmax` → ArgMax ✅ (opset 1)
- `torch.gather` → Gather ✅ (opset 1)
- BUT: `F.one_hot` → OneHot ❌ (opset 9, but has CUDA issues — we already work around this with Equal+Cast)
- Dynamic shapes / control flow → ❌ forbidden

So the pattern would be: build a small static PyTorch network, train it to perfectly fit the task's train+test examples, then export. The network must have fixed tensor shapes throughout (no data-dependent branching).

**What architecture to try**

For the 94 unsolved tasks, a 2-layer conv net would be the first thing to try:

```python
class TinyARC(nn.Module):
    def __init__(self, hidden=32, ks=5):
        super().__init__()
        self.conv1 = nn.Conv2d(10, hidden, ks, padding=ks//2)
        self.conv2 = nn.Conv2d(hidden, 10, ks, padding=ks//2)

    def forward(self, x):  # x: [1,10,30,30]
        mask = x.sum(dim=1, keepdim=True)  # [1,1,30,30] — valid pixel mask
        h = torch.relu(self.conv1(x))
        logits = self.conv2(h)  # [1,10,30,30]
        out = (logits.argmax(dim=1, keepdim=True) == torch.arange(10).reshape(1,10,1,1).to(x.device)).float()
        return out * mask
```

Train with cross-entropy on the one-hot targets, then export to ONNX. The argmax+equal+cast trick we already use would survive the export.

**The real blocker isn't PyTorch→ONNX**

It's the variable output shapes. 60+ of the 94 unsolved tasks have outputs that are a different size than the input, and that size varies across examples. No static ONNX graph can produce a 5×3 output for one input and a 7×9 output for another — the graph is fixed.

For those, the only hope is that the conv learns to put valid content in the right region and zeros elsewhere (which our mask = ReduceSum(input) trick handles when output ⊆ input bounds). But when the output is LARGER than the input, we're stuck without a way to derive the output mask.
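The mask trick in numpy terms (a sketch assuming the `[1, 10, 30, 30]` one-hot encoding, where cells outside the true grid have no color channel set):

```python
import numpy as np

# A 30x30 canvas holding a 2x3 grid in the top-left corner, one-hot
# over 10 colors; the hypothetical grid values are arbitrary.
x = np.zeros((1, 10, 30, 30), dtype=np.float32)
grid = np.array([[1, 0, 2], [3, 3, 0]])
for r in range(2):
    for c in range(3):
        x[0, grid[r, c], r, c] = 1.0

# Summing over the color axis (ReduceSum in the ONNX graph) recovers
# the validity mask: 1.0 inside the original grid bounds, 0.0 outside.
# Multiplying the network output by this mask zeroes everything the
# input didn't cover -- which is exactly why the trick only works when
# the output lies within the input bounds.
mask = x.sum(axis=1, keepdims=True)  # [1, 1, 30, 30]
```

Note this works even for color 0 ("black") cells inside the grid, because they still have their channel set; only padding cells are all-zero across channels.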
Bottom line: Yes, build in PyTorch, export to ONNX. Focus the PyTorch solver on the ~10 same-shape tasks and ~7 fixed-diff-shape tasks where lstsq conv failed but a deeper network might succeed. The 77 variable-diff-shape tasks are a harder structural problem regardless of framework.
README.md
DELETED
@@ -1,142 +0,0 @@
# NeuroGolf Solver v5

Builds minimal ONNX networks for ARC-AGI tasks. Modular Python package with opset 17 and zero-cost Slice-based transforms.

**Currently running on Kaggle — results pending.**

## Version History

| Version | Date | Solved (local) | Arc-gen Validated | Est LB | Key Changes |
|---------|------|----------------|-------------------|--------|-------------|
| **v5** | **2026-04-26** | **TBD** | **TBD** | **TBD** | Refactored to package, opset 17, Slice-based flip/rotate (0 MACs), lstsq crash fix, tensor-based Pad & ReduceSum |
| v4.3 | 2026-04-25 | 307 | 50 | ~670 | Methodology docs, no code changes |
| v4.0 | 2026-04-24 | 307 | 50 | ~656 | ARC-GEN validation, static profiler |
| v3 | 2026-04-24 | 307 | ~40 | 501 | concat_enhanced, varshape_spatial_gather |
| v2 | prior | 294 | — | — | Spatial_gather, variable-shape conv |
| v1 | prior | 128 | — | — | Conv solver only |

## Project Structure

```
neurogolf_solver/                     # Python package (v5)
├── __init__.py                       # Package marker
├── config.py                         # Runtime config (providers, opset)
├── constants.py                      # All constants (grid dims, excluded tasks, limits)
├── data_loader.py                    # Task loading, one-hot encoding, example extraction
├── gather_helpers.py                 # Gather-based ONNX model builders
├── main.py                           # Entry point with W&B init
├── onnx_helpers.py                   # Opset 17 builders (Slice, Pad, ReduceSum, mk)
├── profiler.py                       # Static cost profiler (fallback for onnx_tool)
├── submission.py                     # run_tasks with W&B logging, zip/csv generation
├── validators.py                     # Model validation against train+test+arc-gen
└── solvers/
    ├── __init__.py                   # Exports solve_task, ANALYTICAL_SOLVERS
    ├── analytical.py                 # identity, constant, color_map, transpose
    ├── conv.py                       # lstsq conv solvers (fixed, variable, diffshape, var_diff)
    ├── geometric.py                  # flip, rotate, shift, crop, gravity
    ├── solver_registry.py            # Solver ordering + solve_task orchestration
    └── tiling.py                     # tile, upscale, mirror, concat, spatial_gather

neurogolf_solver.py                   # Legacy monolith (v4, kept for reference)
neurogolf_utils.py                    # Official Kaggle scoring library
ARC-GEN-100K.zip                      # 400 files × ~250 examples of synthetic data
neurogolf-2026-solver-notebooks.zip   # 5 reference notebooks (LB 4000+)
```

## Quick Start

```bash
# Clone
git clone https://huggingface.co/rogermt/neurogolf-solver
cd neurogolf-solver

# Install deps
pip install numpy onnx onnxruntime

# Get ARC data
git clone --depth 1 https://github.com/fchollet/ARC-AGI.git

# Run (local)
python -m neurogolf_solver.main --data_dir ARC-AGI/data/training/ --output_dir submission --conv_budget 30

# Run (Kaggle)
python -m neurogolf_solver.main --kaggle --data_dir /kaggle/input/competitions/neurogolf-2026/ --output_dir /kaggle/working/submission --conv_budget 60

# With ARC-GEN data and W&B logging
python -m neurogolf_solver.main --data_dir ARC-AGI/data/training/ --arcgen_dir ARC-GEN-100K/ --output_dir submission --use_wandb
```

## Parameters

| Flag | Default | Description |
|------|---------|-------------|
| `--data_dir` | `ARC-AGI/data/training/` | Path to task JSONs |
| `--arcgen_dir` | `` | Path to ARC-GEN-100K/ directory |
| `--output_dir` | `/kaggle/working/submission` | Where to save .onnx files |
| `--kaggle` | off | Use Kaggle task format (task001.json with embedded arc-gen) |
| `--conv_budget` | `30` | Seconds per task for the conv solver |
| `--tasks` | all | Comma-separated task numbers (e.g., `1,2,3`) |
| `--device` | `auto` | `auto`, `cpu`, or `cuda` |
| `--use_wandb` | off | Enable W&B logging |

## How It Works

**Format:** Input/output = `[1, 10, 30, 30]` one-hot float32. ONNX opset 17, IR version 8.

**Solver pipeline (in order):**

1. **Analytical solvers** (instant, near-zero cost):
   identity → constant → color_map → transpose → flip → rotate → shift → tile → upscale → kronecker → nonuniform_scale → mirror_h → mirror_v → quad_mirror → concat → concat_enhanced → diagonal_tile → fixed_crop → spatial_gather → varshape_spatial_gather

2. **Conv solvers** (learned via least squares, validated against arc-gen):
   - `conv_fixed` — Slice → Conv → ArgMax → Equal+Cast → Pad
   - `conv_variable` — Conv(30×30) → ArgMax → Equal+Cast → Mul(mask)
   - `conv_diffshape` — Slice → Conv → Slice(crop) → ArgMax → Equal+Cast → Pad
   - `conv_var_diff` — Conv(30×30) → ArgMax → Equal+Cast → Mul(input_mask)
| 96 |
-
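The `[1, 10, 30, 30]` one-hot format can be produced from a raw ARC grid with a few lines of numpy. A minimal sketch — the function name is illustrative (not the package's `data_loader` API), and encoding out-of-grid cells as all-zero is an assumption:

```python
import numpy as np

def grid_to_onehot(grid, channels=10, size=30):
    """Encode a raw ARC grid (list of lists of ints 0-9) as the
    [1, channels, size, size] float32 one-hot tensor the models expect.
    Cells outside the grid stay all-zero (an assumption here)."""
    g = np.asarray(grid)
    x = np.zeros((1, channels, size, size), dtype=np.float32)
    chans = np.arange(channels)[:, None, None]          # [10, 1, 1]
    x[0, :, :g.shape[0], :g.shape[1]] = (g[None] == chans)
    return x
```
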
## v5 Changes from v4

| Change | Impact |
|--------|--------|
| **Opset 10 → 17, IR 10 → 8** | Enables Slice(step=-1) for zero-cost transforms |
| **s_flip: Slice(step=-1)** | 0 MACs (was ~165K with Gather) |
| **s_rotate k=2: double Slice reverse** | 0 MACs (was ~165K) |
| **s_rotate k=1,3 (square): Slice+Transpose** | 0 MACs (was ~165K) |
| **All Pad nodes: tensor-based pads input** | Required for opset 17 compatibility |
| **All ReduceSum nodes: axes as tensor input** | Required for opset 13+ compatibility |
| **lstsq crash fix: try/except LinAlgError** | Prevents SVD non-convergence crash (task 313) |
| **Refactored to 16-file package** | Maintainable, testable, no more monolith |

## Scoring

```
Score per task = max(1.0, 25.0 - ln(MACs + memory_bytes + params))
```

- Analytical solvers (Slice/Transpose/Gather) → near-zero cost → ~20-25 pts
- Conv solvers → cost proportional to kernel size → ~7-14 pts
- Unsolved → 1.0 pt minimum
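The scoring formula can be checked directly. A small sketch of the per-task score, with cost terms named as in the formula above:

```python
import math

def task_score(macs: int, memory_bytes: int, params: int) -> float:
    """Per-task score: cheaper models score higher, floored at 1.0."""
    cost = macs + memory_bytes + params
    return max(1.0, 25.0 - math.log(cost))
```

For example, a solver costing ~16M MACs (the gravity solver's reported cost) gives `task_score(16_000_000, 0, 0) ≈ 8.41`, consistent with the ~8.4 figure elsewhere in this repo.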
## Competition Rules

| Item | Value |
|------|-------|
| Input/Output | float32 `[1,10,30,30]` one-hot |
| Opset | 10 or 17 (both accepted on Kaggle) |
| Max file size | 1.44 MB per model |
| Banned ops | Loop, Scan, NonZero, Unique, Script, Function |
| Excluded tasks | None — all 400 tasks count (unsolved = 1.0 pt) |
| Validation | Models checked against train + test + arc-gen (ALL splits) |

## Key Docs

- **SKILL.md** — Competition rules, architecture, methodology, checklist
- **LEARNING.md** — All mistakes, research findings, what works/doesn't
- **TODO.md** — Roadmap, experiment queue, status tracking

## Strategy

We build our own solver. No blending. No public datasets. See LEARNING.md for competitive intelligence on what others do (for awareness only).

## Repo

https://huggingface.co/rogermt/neurogolf-solver
SKILL.md
DELETED
@@ -1,222 +0,0 @@
---
name: neurogolf-solver
description: Build and improve an ONNX model generator for the NeuroGolf Championship (Kaggle). Produces 400 tiny ONNX models (opset 17, IR 8, input/output [1,10,30,30] one-hot float32) for ARC-AGI tasks. Scoring = max(1, 25 - ln(MACs + memory_bytes + params)). Lower cost = higher score. Use this skill whenever working on this competition, debugging submission failures, or starting a fresh session.
---

# NeuroGolf Solver

## Development Methodology: The Closed-Loop

```
Research → Design → Experiment → Analyze → Research → ...
```

**Rule: Loop until we have a CONFIRMED increase in arc-gen validated score.**

| Phase | What | Exit Criteria |
|-------|------|---------------|
| **Research** | Read papers, understand theory, find what works in similar regimes | Have a testable hypothesis with cited evidence |
| **Design** | Write MINIMAL code to test the hypothesis | Code is <200 lines, focused on ONE feature |
| **Experiment** | Run on representative task sample (≥20 tasks, or all 400 if cheap) | Full arc-gen validation completed |
| **Analyze** | Compare with/without feature. Measure: tasks solved, arc-gen survival, total score | Data shows >10% improvement in arc-gen survival rate OR total score |
| **Research** | If failed: why? Read more papers. If succeeded: can we combine with other wins? | Next hypothesis ready |

**Critical rules:**
- NEVER write >200 lines without running them first
- NEVER claim a feature "works" until arc-gen validated on ≥20 tasks
- NEVER upload code to repo that hasn't been validated
- Theory from papers is NOT proof for our data — always test
- If a feature shows no improvement after testing, DELETE it — don't leave dead code
- Make surgical edits to individual files — NEVER rewrite the entire codebase in one shot

## Quick Reference

- **Repo**: `rogermt/neurogolf-solver`
- **Current version**: v5.2 — 52 solved, ~710 score, est LB ~1058
- **Previous best on Kaggle**: v4.3 — 50 arc-gen-validated tasks, est LB ~670
- **Kaggle runtime**: 12 hours for submission
- **Target**: 3000+ LB (our own solver, no blending)
- **Detailed history, mistakes, analysis**: see `LEARNING.md`
- **Roadmap & experiment queue**: see `TODO.md`

## 1. Competition Rules

| Item | Value |
|------|-------|
| Input/Output | `"input"`/`"output"` float32 `[1,10,30,30]` one-hot |
| Opset | 17 (IR 8). Opset 10 also accepted on Kaggle |
| **Max .onnx file size** | **1.44 MB per ONNX file** (not submission zip) |
| Static shapes | **All tensors and parameters must have statically-defined shapes** |
| Banned ops | **Loop, Scan, NonZero, Unique, Script, Function** |
| Scoring | `max(1.0, 25.0 - ln(MACs + memory + params))` per task |
| Tasks | **All 400 count. There are NO excluded tasks. Unsolved = 1.0 pt.** |
| Validation | Models checked against **train + test + arc-gen** (ALL splits) |
| Submission | `submission.zip` with `task001.onnx`–`task400.onnx` + optional `submission.csv` |

## 2. ARC-GEN Data — THE Critical Factor

**A model that passes train+test but fails arc-gen scores ZERO on Kaggle.**

- Kaggle tasks at `/kaggle/input/competitions/neurogolf-2026/taskNNN.json` contain `{"train":[], "test":[], "arc-gen":[]}`
- Up to 262 arc-gen examples per task (100K total)
- Locally: ARC-GEN in `ARC-GEN-100K/{hex_id}.json` as list of `{input, output}` — merge into task data
- Conv fitting: include arc-gen examples **only when grid sizes match** train/test (otherwise lstsq fails)
- Validation: always check against `arc-gen[:30]` minimum

## 3. Architecture

### Package Structure (v5.2)

```
neurogolf_solver/
├── constants.py       # Grid dims, opset, limits (NO excluded tasks)
├── config.py          # Runtime providers, opset factory
├── data_loader.py     # Task loading, one-hot, example extraction
├── validators.py      # Model validation against all splits
├── profiler.py        # Static cost profiler (onnx_tool fallback)
├── onnx_helpers.py    # Opset 17 builders: Slice, Pad, ReduceSum, mk()
├── gather_helpers.py  # Gather-based spatial remapping models
├── submission.py      # run_tasks (W&B logging), zip/csv generation
├── main.py            # Entry point with argparse
└── solvers/
    ├── analytical.py        # identity, constant, color_map, transpose
    ├── geometric.py         # flip, rotate, shift, crop, gravity (detect only)
    ├── tiling.py            # tile, upscale, mirror, concat, spatial_gather
    ├── conv.py              # lstsq conv (fixed, variable, diffshape, var_diff) + PCR fallback
    ├── gravity.py           # Unrolled bubble-sort gravity (Conv+Where, 4 dirs) — Task 78
    ├── edge.py              # Laplacian edge detection (0 matches currently)
    ├── mode.py              # Mode fill (ReduceSum→ArgMax→Expand) — Task 129
    └── solver_registry.py   # ANALYTICAL_SOLVERS list + solve_task()
```

Run with: `python -m neurogolf_solver.main [args]`

### Solver Pipeline

```
1. Analytical solvers (instant, zero/low cost, always arc-gen safe):
   identity → constant → color_map → transpose → flip → rotate →
   shift → tile → upscale → kronecker → nonuniform_scale →
   mirror_h → mirror_v → quad_mirror → concat → concat_enhanced →
   diagonal_tile → fixed_crop → spatial_gather → varshape_spatial_gather →
   gravity_unrolled → edge_detect → mode_fill

2. Conv solvers (lstsq fitted, validated against arc-gen, PCR fallback):
   conv_fixed     — Slice→Conv→ArgMax→Equal+Cast→Pad
   conv_variable  — Conv(30×30)→ArgMax→Equal+Cast→Mul(mask)
   conv_diffshape — Slice→Conv→Slice(crop)→ArgMax→Equal+Cast→Pad
   conv_var_diff  — Conv(30×30)→ArgMax→Equal+Cast→Mul(input_mask)
```

### ONNX Building Rules (opset 17)

- **All shapes must be static** — no dynamic dimensions
- **Max 1.44 MB per .onnx file** — checked by Kaggle validator
- **Slice(step=-1)** for flip/rotate — zero MACs, replaces Gather for these transforms
- **Gather** (opset 1) for spatial remapping — used by concat, spatial_gather, mirrors, etc.
- **NEVER** use GatherElements (opset 11)
- **Equal+Cast** for one-hot — NEVER use OneHot (no CUDA kernel)
- **Channel Gather** for permutation color maps (0 MACs, score ~21 vs ~13 for Conv 1×1)
- **Conv 1×1** for non-permutation color maps (has MACs but correct)
- **ReduceSum** with axes as **tensor input** (opset 13+ requirement)
- **Pad** with tensor-based `pads` input (opset 11+ requirement)
- **lstsq calls** must be wrapped in `try/except (LinAlgError, ValueError)` — SVD can fail to converge
- **ArgMax + Equal+Cast** before Pad to ensure clean one-hot in padded region (gravity solver lesson)
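The Equal+Cast one-hot rule can be pictured in numpy terms: ArgMax over the channel axis, Equal against the channel index vector, Cast back to float32. A sketch of the semantics of that ONNX subgraph (numpy stand-in, not the graph-building code):

```python
import numpy as np

def argmax_equal_cast(x: np.ndarray) -> np.ndarray:
    """Re-one-hot a [1, 10, H, W] tensor: ArgMax over channels,
    Equal against channel indices, Cast to float32."""
    idx = x.argmax(axis=1, keepdims=True)               # [1, 1, H, W] winning channel
    chans = np.arange(x.shape[1]).reshape(1, -1, 1, 1)  # [1, 10, 1, 1]
    return (idx == chans).astype(np.float32)            # clean one-hot, [1, 10, H, W]
```

Whatever noise the upstream Conv produced, exactly one channel per pixel survives — which is why this pattern belongs before Pad as the rules above require.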
### Conv Fitting

**Conv ceiling: ~25 tasks.** Regularization (Ridge, PCA/SVD, skip-ks) all tested and rejected.
Root cause: architecture mismatch — most unsolved tasks need non-local ops, not local conv patches.

**Current fitting strategy (v5.1+):**
- Composable primitives: `_build_patch_matrix` + `_solve_weights` + `_extract_weights`
- PCR fallback via `_solve_weights_pcr` (deferred 2nd pass, 0 new solves but no regressions)
- Kernel sizes: [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
- Try no-bias first, then bias
- lstsq wrapped in try/except for SVD non-convergence
- **Validate against arc-gen BEFORE accepting** — reject if fails
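The idea behind these primitives can be sketched in standalone numpy. Note `_build_patch_matrix`/`_solve_weights` are the package's internal names; this is an illustration of the technique, not their actual implementation: flatten k×k one-hot patches into rows of a design matrix, solve one least-squares system for all 10 output channels, and wrap lstsq as the rule above requires:

```python
import numpy as np

def fit_conv_weights(inputs, targets, k=3):
    """Fit a k×k, 10-in/10-out conv as one linear least-squares solve.
    inputs/targets: lists of [10, H, W] one-hot arrays (matching shapes).
    Returns weights shaped [10, 10, k, k], or None if SVD fails."""
    p = k // 2
    rows, outs = [], []
    for x, y in zip(inputs, targets):
        xp = np.pad(x, ((0, 0), (p, p), (p, p)))         # zero-pad borders
        _, H, W = x.shape
        for r in range(H):
            for c in range(W):
                rows.append(xp[:, r:r + k, c:c + k].ravel())  # patch → design row
                outs.append(y[:, r, c])                        # target pixel, 10-vector
    A, B = np.asarray(rows), np.asarray(outs)
    try:
        W_, *_ = np.linalg.lstsq(A, B, rcond=None)       # [10*k*k, 10]
    except np.linalg.LinAlgError:                        # SVD may not converge
        return None
    return W_.T.reshape(10, 10, k, k)
```

On an identity task with every color present, a k=1 fit recovers the 10×10 identity as its channel-mixing matrix.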
### New Solver Architectures (v5.2)

**gravity.py** — Unrolled bubble-sort via Conv+Where
- 4 directions × 10 bg colors, max(IH,IW) steps
- Per step: 2× Conv(3×3 shift), 3× ReduceSum, 3× Greater, 2× And, 2× Where
- Final: ArgMax + Equal+Cast + Pad (clean one-hot)
- Cost: ~16M (10×10 grid), score ~8.4
- **Validated: Task 78 (direction=up, bg=0)**
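The unrolled bubble-sort idea can be illustrated on a plain integer grid. This is a numpy stand-in using odd-even transposition (disjoint row pairs per step, so vectorized swaps never conflict); the real solver performs the equivalent swaps on the one-hot tensor with Conv+Where, one unrolled step per graph layer:

```python
import numpy as np

def gravity_up(grid: np.ndarray, bg: int = 0) -> np.ndarray:
    """Unrolled 'up' gravity on an integer [H, W] grid: a background
    cell sitting directly above a non-background cell swaps with it.
    A fixed number of steps (no data-dependent loop, mirroring the
    ONNX constraint) compacts every column upward."""
    g = grid.copy()
    H = g.shape[0]
    for step in range(2 * H):                      # fixed unroll count
        r = np.arange(step % 2, H - 1, 2)          # disjoint row pairs
        swap = (g[r] == bg) & (g[r + 1] != bg)     # hole above a block
        top, bot = g[r], g[r + 1]
        g[r] = np.where(swap, bot, top)
        g[r + 1] = np.where(swap, top, bot)
    return g
```
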
**edge.py** — Laplacian conv boundary detection
- Conv 1×1 (channel collapse) → Conv 3×3 (Laplacian) → Abs → Greater → And → Where
- Cost: ~16K MACs, score ~15
- **0 matches currently** — edge definition may be too strict

**mode.py** — Global majority color fill
- Slice → ReduceSum(axes=[2,3]) → ArgMax → Equal+Cast → Expand → Pad
- Cost: ~2K, score ~19.5
- **Validated: Task 129**
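The mode_fill graph has a direct numpy reading — a sketch of its semantics, not the ONNX builder:

```python
import numpy as np

def mode_fill(x: np.ndarray) -> np.ndarray:
    """Numpy stand-in for the mode_fill graph on a [1, 10, H, W] one-hot:
    ReduceSum over spatial axes counts each color, ArgMax picks the
    majority, Equal+Cast re-one-hots it, Expand broadcasts to the grid."""
    counts = x.sum(axis=(2, 3))                    # [1, 10] per-color counts
    winner = counts.argmax(axis=1)                 # [1] majority color
    onehot = (np.arange(x.shape[1]) == winner[:, None]).astype(np.float32)
    return np.broadcast_to(onehot[:, :, None, None], x.shape).copy()
```
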
## 4. Performance

**The lstsq conv solver is the speed bottleneck.** Use `--conv_budget` to cap time per task (5s locally, 60s on Kaggle).

**Do NOT** try to GPU-accelerate lstsq. The bottleneck is algorithmic (O(n³) SVD), not device.

## 5. Score Accounting (v5.2)

| Category | Tasks | Avg Score | Notes |
|----------|-------|-----------|-------|
| Analytical | 24 | ~16 | identity, constant, color_map, transpose, flip, rotate, shift, tile, mirrors, etc. |
| Conv (lstsq) | 25 | ~10.5 | conv_fixed, conv_var, conv_diff, conv_var_diff |
| Gravity | 1 | 8.4 | Task 78 |
| Mode fill | 1 | 19.5 | Task 129 |
| Timing artifact | 1 | 8.2 | Task 61 (conv_var, only on slow hardware) |
| **Unsolved** | **348** | **1.0** | Minimum score |
| **Total** | **52/400** | | **~710 solved + 348 = ~1058 est LB** |

### Path to 3000+

1. ✅ ARC-GEN validation (v4)
2. ✅ New analytical solvers (v4)
3. ✅ Opset 17 Slice-based transforms (v5)
4. ✅ lstsq crash fix + modular package (v5)
5. ✅ PCR fallback in conv (v5.1 — 0 new solves but clean code)
6. ✅ Gravity solver (v5.2 — Task 78)
7. ✅ Mode fill solver (v5.2 — Task 129)
8. 🔲 **Phase 3 solvers**: flood fill, composition, color LUT, CumSum — see TODO.md
9. 🔲 **Phase 1a**: Opset 17 conversions for existing analytical tasks (score optimization)
10. 🔲 **Phase 4**: ONNX optimizer, best-of-N selection

**Blending is EXPLICITLY excluded** — user's competitive philosophy.

## 6. Submission Checklist

Before submitting to Kaggle:
- [ ] All models validated against train + test + arc-gen (locally)
- [ ] **All 400 tasks attempted** (no exclusions)
- [ ] No GatherElements in any model
- [ ] No banned ops (Loop, Scan, NonZero, Unique, Script, Function)
- [ ] All tensor shapes are static
- [ ] **Each .onnx file < 1.44 MB**
- [ ] Local estimated score calculated and compared to expected LB
- [ ] **A/B test**: ran both old and new solver on same tasks, new solver scores higher

## 7. Files & Locations

| Location | Path | Notes |
|----------|------|-------|
| HF Repo | `rogermt/neurogolf-solver` | All code + data |
| **Solver package** | `neurogolf_solver/` | **v5.2 — 19 files, modular** |
| Legacy monolith | `neurogolf_solver.py` | v4, kept for reference — do not edit |
| Official utils | `neurogolf_utils.py` | Kaggle scoring lib (needs onnx_tool) |
| ARC-GEN data | `ARC-GEN-100K.zip` | 400 files, 100K examples |
| Notebooks | `neurogolf-2026-solver-notebooks.zip` | 5 reference notebooks |
| Kaggle data | `/kaggle/input/competitions/neurogolf-2026/` | task JSONs with arc-gen |
| Roadmap | `TODO.md` | Experiment queue with status key |
| Learning | `LEARNING.md` | Knowledge accumulation — read before coding |

## 8. LEARNING.md Maintenance Rules

`LEARNING.md` is the knowledge accumulation file. Update it when:
- A bug is found and fixed — add to Mistakes Log with root cause
- A new approach is tried — record what worked, what didn't, and why
- Competition analysis reveals new insights — add to Competitive Intelligence
- Version milestones — update the Version History table
- Performance measurements — add concrete numbers

Structure: chronological within sections, newest entries first. Always include dates and version numbers.
TODO.md
DELETED
@@ -1,188 +0,0 @@
# NeuroGolf Solver — Roadmap

> Current: v5.2 · 51 Kaggle validated · LB 594.84 · Target: 3000+
> Philosophy: **Research → Design → Experiment → Analyze → Research** loop until confirmed score increase.
> Rule: **NEVER claim a feature works without full arc-gen validation on representative tasks.**
> Updated: 2026-04-27 — LB 594.84 confirmed. Phase 3 redesigned from expert review + literature.
> **All 400 tasks count. There are NO excluded tasks. Unsolved = 1.0 pt (Kaggle adds automatically).**

---

## Current Solver Breakdown (51/400 solved, LB 594.84)

| Category | Tasks | Solvers |
|----------|-------|---------|
| Conv (lstsq) | 25 | conv_fixed, conv_var, conv_diff, conv_var_diff |
| Analytical | 24 | identity, constant, color_map, transpose, flip, rotate, shift, tile, upscale, mirror, concat, spatial_gather, etc. |
| Gravity | 1 | gravity_unrolled (Task 78) |
| Mode fill | 1 | mode_fill (Task 129) |
| **Unsolved** | **349** | — |

---

## Phase 1: Score Optimization on Existing Tasks

### 1a: Opset 17 Slice-Based Analytical Solvers ⬜
> Convert Gather-based solvers to Slice(step=-1) + Transpose for ~0 MACs.

### 1b: ONNX Optimizer Pass ⬜
> `onnxoptimizer.optimize()` for dead-code elimination.

---

## Phase 2: Regularization — EXHAUSTED

> Exps 0-3 tested. Architecture mismatch, not overfitting. Conv ceiling = ~25 tasks.

---

## Phase 3: New Solver Types

> Organized by architecture type. Each solver is a separate .py file.
> **Build rule:** Scan for matches FIRST, build only what has hits, validate on arc-gen.

---

### Category A: Static Spatial Remapping (Gather/Slice/Pad)

These are cheap, zero/low-MAC solvers that use precomputed index mappings. Highest score per task. Build these first.

| # | Solver | Pattern | Key Ops | Status |
|---|--------|---------|---------|--------|
| A1 | `extract_inner` | Remove N-pixel border frame → smaller output | Gather | ⬜ |
| A2 | `add_border` | Add constant-color border → larger output | Gather+const | ⬜ |
| A3 | `pad_align` | Input pasted into larger canvas at fixed offset | Gather+const | ⬜ |
| A4 | `downsample_stride` | `out[r,c] = inp[r*sH, c*sW]` | Gather | ⬜ |
| A5 | `extract_and_tile` | Find smallest repeating unit, tile to fill output | Gather | ⬜ |
| A6 | `sparse_fill` | Each non-zero pixel becomes NxN block | Gather | ⬜ |
| A7 | `symmetry_complete` | Mirror sparse data to complete L-R or T-B symmetry | Gather | ⬜ |
| A8 | `multi_stamp` | Union of shifted copies of input at fixed offsets | Gather+Add | ⬜ |
| A9 | `affine_remap` | General integer coordinate remap: stride+offset, axis swap | Gather | ⬜ |
| A10 | `crop_paste` | Crop from input, paste at different position in output | Gather+const | ⬜ |
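All Category A solvers reduce to a precomputed flat index map applied with a single Gather. A sketch of A4 `downsample_stride` in these terms (numpy stand-in; an ONNX model would bake the same indices into a Gather node's indices initializer):

```python
import numpy as np

def downsample_stride_map(in_h, in_w, s_h, s_w):
    """Flat index map realizing out[r, c] = inp[r*s_h, c*s_w]."""
    rows = np.arange(0, in_h, s_h)
    cols = np.arange(0, in_w, s_w)
    return (rows[:, None] * in_w + cols[None, :]).ravel()

def apply_remap(grid: np.ndarray, idx: np.ndarray, out_shape):
    """Gather-style application: flatten, index, reshape."""
    return grid.ravel()[idx].reshape(out_shape)

g = np.arange(36).reshape(6, 6)
idx = downsample_stride_map(6, 6, 2, 2)
small = apply_remap(g, idx, (3, 3))   # picks rows/cols 0, 2, 4
```

The same two-function shape covers A1-A10: only `downsample_stride_map` changes per solver.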
---

### Category B: Channel/Color Operations

Color-level transforms that work in the 10-channel one-hot space.

| # | Solver | Pattern | Key Ops | Status |
|---|--------|---------|---------|--------|
| B1 | `channel_filter` | Keep only certain colors, rest → background | Mul(mask [1,10,1,1]) | ⬜ |
| B2 | `overlay_constant` | Input + fixed pixel pattern overlaid | Add or Where + constant tensor | ⬜ |
| B3 | `fill_bg_with_mode` | Background pixels filled with dominant color, non-bg unchanged | ReduceSum→ArgMax→Where | ⬜ |
| B4 | `row_mode_fill` | Each row filled with its dominant color | ReduceSum(width)→ArgMax→Tile(width) | ⬜ |
| B5 | `col_mode_fill` | Each column filled with its dominant color | ReduceSum(height)→ArgMax→Tile(height) | ⬜ |
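B4/B5 follow the same reduce→argmax→broadcast shape as mode_fill, just over one axis. A numpy sketch of B4 `row_mode_fill` semantics (illustrative, not the planned implementation):

```python
import numpy as np

def row_mode_fill(x: np.ndarray) -> np.ndarray:
    """Numpy stand-in for B4 on a [1, 10, H, W] one-hot: reduce over
    width to count colors per row, ArgMax picks each row's dominant
    color, broadcast fills the row."""
    counts = x.sum(axis=3)                           # [1, 10, H]
    winners = counts.argmax(axis=1)                  # [1, H] dominant color per row
    onehot = (np.arange(x.shape[1])[None, :, None] == winners[:, None, :])
    return np.broadcast_to(onehot[..., None].astype(np.float32), x.shape).copy()
```

B5 `col_mode_fill` is the same with axis 2 reduced instead of axis 3.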
---

### Category C: Composition / Chaining

Chain two existing solvers. If transform(input) → intermediate, and color_map(intermediate) → output, emit one combined graph.

| # | Solver | Pattern | Key Ops | Status |
|---|--------|---------|---------|--------|
| C1 | `transform_then_recolor` | rotate/flip/transpose + color_map | Chain existing | ⬜ |
| C2 | `crop_then_transform` | fixed_crop + rotate/flip | Chain existing | ⬜ |
| C3 | `recolor_then_tile` | color_map + tile/upscale | Chain existing | ⬜ |

---

### Category D: Unrolled Propagation (Conv+Where loops)

Dynamic solvers that need N unrolled steps. Higher MAC cost (~8-12 score).

| # | Solver | Pattern | Key Ops | Status |
|---|--------|---------|---------|--------|
| D1 | `gravity_unrolled` | Directional compaction, 4 dirs × 10 bg colors | Conv+Where ×N steps | ✅ Task 78 |
| D2 | `flood_fill` | BFS: seed spreads through passable cells | Conv+Clip+Mul ×N steps | ⬜ |
| D3 | `edge_detect` | Laplacian/Sobel boundary detection | Conv(3×3)+Abs+Greater | ✅ built, 0 matches |

---

### Category E: Global Aggregation

Solvers that compute a global statistic and broadcast it.

| # | Solver | Pattern | Key Ops | Status |
|---|--------|---------|---------|--------|
| E1 | `mode_fill` | Output = solid fill of most common input color | ReduceSum→ArgMax→Expand | ✅ Task 129 |
| E2 | `cumsum_fill` | Running sums for object extent, directional filling | CumSum | ⬜ |
| E3 | `bbox_crop_pad` | Find bounding box via ReduceSum+ArgMax, crop+pad | ReduceSum→ArgMax→Slice→Pad | ⬜ |

---

### Build Order (highest expected ROI first)

**Wave 1 — Static remapping (Category A):** Cheapest to build, highest score per task, most likely to have matches. ~1 day.
1. A1 `extract_inner` + A2 `add_border` (border ops)
2. A5 `extract_and_tile` + A6 `sparse_fill` (pattern ops)
3. A3 `pad_align` + A4 `downsample_stride` (placement ops)
4. A7 `symmetry_complete` (symmetry)

**Wave 2 — Color/channel ops (Category B):** Builds on mode_fill. ~0.5 day.
5. B1 `channel_filter` + B3 `fill_bg_with_mode`
6. B4 `row_mode_fill` + B5 `col_mode_fill`

**Wave 3 — Composition (Category C):** Chains existing solvers, no new ONNX ops. ~0.5 day.
7. C1 `transform_then_recolor`

**Wave 4 — Propagation (Category D):** More complex, lower score. ~1 day.
8. D2 `flood_fill`

**Wave 5 — Global aggregation (Category E):** Needs careful design. ~1 day.
9. E2 `cumsum_fill` + E3 `bbox_crop_pad`

---

### Honest Projections

I will NOT repeat the Phase 2 mistake of projecting fantasy numbers. Here's what I know:

- **51 tasks solved today.** LB 594.84.
- **Each Wave:** Might add 2-10 tasks. Might add 0. We don't know until we scan and test.
- **The only reliable estimate:** Gravity added 1 task. Mode fill added 1 task. Edge detect added 0. Hit rate so far: ~1 new task per solver built.
- **If hit rate holds:** 20 new solvers × ~1 task each = ~20 new tasks → ~70 solved → LB ~800-900.
- **If some solvers hit 5+ tasks:** Could reach 100-120 solved → LB ~1200-1500.
- **3000+ requires a fundamentally different approach** (test-time training, learned architectures) that we're not doing.

| Scenario | Solved | Est LB | Confidence |
|----------|--------|--------|------------|
| Wave 1 only | 55-65 | 650-800 | 60% |
| Wave 1+2 | 60-75 | 750-950 | 50% |
| Wave 1+2+3 | 65-85 | 850-1100 | 40% |
| All waves | 70-120 | 900-1500 | 30% |

---

## Phase 4: Score Optimization

### 4a: Best-of-N Model Selection ⬜
### 4b: Official Scoring Alignment (onnx_tool) ⬜

---

## BLENDING — EXPLICITLY EXCLUDED

---

## Experiment Log

| Date | Experiment | Result | Decision |
|------|-----------|--------|----------|
| 2026-04-24 | v4.2 baseline | 50 arc-gen, LB ~501 | Baseline |
| 2026-04-26 | v5.0 refactor | 49 solved, ~604 score | New baseline |
| 2026-04-26 | Exp 1-3 (regularization) | 0 improvement | **EXHAUSTED** |
| 2026-04-26 | v5.2 gravity+mode | +2 tasks (78, 129) | ✅ Kept |
| 2026-04-27 | **v5.2 Kaggle submission** | **51 solved, LB 594.84** | **Current best** |

---

## Research Queue

1. ✅ CompressARC — CumMax/ReduceSum architecture
2. ✅ TRM — recursive reasoning
3. ✅ ARC Prize 2025 Tech Report
4. ✅ Expert review #1 — Phase 3 solver list (pad_align, crop_paste, downsample, etc.)
5. ✅ Expert review #2 — 6 concrete solvers with code (extract_inner, add_border, etc.)
6. [ ] **Task taxonomy scan** — for each Wave 1 solver, count matching unsolved tasks before building
neurogolf_utils.py
DELETED
@@ -1,359 +0,0 @@
# Copyright 2026 Google LLC
|
| 2 |
-
#
|
| 3 |
-
# Licensed under the Apache License, Version 2.0 (the "License");
|
| 4 |
-
# you may not use this file except in compliance with the License.
|
| 5 |
-
# You may obtain a copy of the License at
|
| 6 |
-
#
|
| 7 |
-
# https://www.apache.org/licenses/LICENSE-2.0
|
| 8 |
-
#
|
| 9 |
-
# Unless required by applicable law or agreed to in writing, software
|
| 10 |
-
# distributed under the License is distributed on an "AS IS" BASIS,
|
| 11 |
-
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
| 12 |
-
# See the License for the specific language governing permissions and
|
| 13 |
-
# limitations under the License.
|
| 14 |
-
|
| 15 |
-
"""Module containing utilities for the IJCAI-ECAI 2026 NeuroGolf Challenge."""
|
| 16 |
-
|
| 17 |
-
import itertools
|
| 18 |
-
import json
|
| 19 |
-
import math
|
| 20 |
-
import pathlib
|
| 21 |
-
import traceback
|
| 22 |
-
|
| 23 |
-
import IPython.display
|
| 24 |
-
import matplotlib.pyplot as plt
|
| 25 |
-
import numpy as np
|
| 26 |
-
import onnx
|
| 27 |
-
import onnx_tool
|
| 28 |
-
import onnxruntime
|
| 29 |
-
|
| 30 |
-
|
| 31 |
-
display = IPython.display.display
|
| 32 |
-
FileLink = IPython.display.FileLink
|
| 33 |
-
|
| 34 |
-
_BATCH_SIZE, _CHANNELS, _HEIGHT, _WIDTH = 1, 10, 30, 30
_NEUROGOLF_DIR = "/kaggle/input/competitions/neurogolf-2026/"
_COLORS = [
    (0, 0, 0),
    (30, 147, 255),
    (250, 61, 49),
    (78, 204, 48),
    (255, 221, 0),
    (153, 153, 153),
    (229, 59, 163),
    (255, 133, 28),
    (136, 216, 241),
    (147, 17, 49),
    (240, 240, 240),
    (146, 117, 86)
]
_DATA_TYPE = onnx.TensorProto.FLOAT
_EXCLUDED_OP_TYPES = ["LOOP", "SCAN", "NONZERO", "UNIQUE", "SCRIPT", "FUNCTION"]
_FILESIZE_LIMIT_IN_BYTES = 1.44 * 1024 * 1024
_GRID_SHAPE = [_BATCH_SIZE, _CHANNELS, _HEIGHT, _WIDTH]
_IR_VERSION, _OPSET_IMPORTS = 10, [onnx.helper.make_opsetid("", 10)]
_TASK_ZERO = {
    "train": [{
        "input": [
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
        ],
        "output": [
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 5, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 0, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 0, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 0, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 0, 5, 5],
            [5, 1, 1, 1, 1, 1, 1, 0, 5, 5],
            [5, 5, 0, 0, 0, 0, 0, 0, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
        ],
    }],
    "test": [{
        "input": [
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 4, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 4, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 5, 5, 5],
            [5, 5, 4, 5, 5, 5, 4, 5, 5, 5],
            [5, 5, 4, 5, 5, 5, 4, 5, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
        ],
        "output": [
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 4, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 4, 0, 5],
            [5, 5, 5, 0, 0, 0, 0, 0, 0, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 5, 5, 5],
            [5, 5, 4, 0, 0, 0, 4, 0, 5, 5],
            [5, 5, 4, 0, 5, 5, 4, 0, 5, 5],
            [5, 5, 4, 4, 4, 4, 4, 0, 5, 5],
            [5, 5, 5, 0, 0, 0, 0, 0, 5, 5],
        ],
    }],
    "arc-gen": [{
        "input": [
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 2, 2, 2, 2, 2, 2, 5, 5],
            [5, 5, 2, 5, 5, 5, 5, 2, 5, 5],
            [5, 5, 2, 5, 5, 5, 5, 2, 5, 5],
            [5, 5, 2, 5, 5, 5, 5, 2, 5, 5],
            [5, 5, 2, 5, 5, 5, 5, 2, 5, 5],
            [5, 5, 2, 2, 2, 2, 2, 2, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
        ],
        "output": [
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
            [5, 5, 2, 2, 2, 2, 2, 2, 5, 5],
            [5, 5, 2, 0, 0, 0, 0, 2, 0, 5],
            [5, 5, 2, 0, 5, 5, 5, 2, 0, 5],
            [5, 5, 2, 0, 5, 5, 5, 2, 0, 5],
            [5, 5, 2, 0, 5, 5, 5, 2, 0, 5],
            [5, 5, 2, 2, 2, 2, 2, 2, 0, 5],
            [5, 5, 5, 0, 0, 0, 0, 0, 0, 5],
            [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
        ],
    }],
}

def check_network(filename):
  file_path = pathlib.Path(filename)
  if not file_path.is_file():
    print(f"Error: File {filename} does not exist.")
    return False
  if (filesize := file_path.stat().st_size) > _FILESIZE_LIMIT_IN_BYTES:
    print(f"Error: Filesize {filesize} exceeds {_FILESIZE_LIMIT_IN_BYTES}.")
    return False
  return True

def convert_to_numpy(example):
  benchmark = {}
  example_shape = (1, _CHANNELS, _HEIGHT, _WIDTH)
  for mode in ["input", "output"]:
    benchmark[mode] = np.zeros(example_shape, dtype=np.float32)
    grid = example[mode]
    if max(len(grid), len(grid[0])) > 30: return None
    for r, _ in enumerate(grid):
      for c, color in enumerate(grid[r]):
        benchmark[mode][0][color][r][c] = 1.0
  return benchmark

def convert_from_numpy(benchmark):
  example = []
  _, channels, height, width = benchmark.shape
  for row in range(height):
    cells = []
    for col in range(width):
      colors = [c for c in range(channels) if benchmark[0][c][row][col] == 1]
      cells.append(colors[0] if len(colors) == 1 else (11 if colors else 10))
    while cells and cells[-1] == 10:
      cells.pop(-1)
    example.append(cells)
  while example and not example[-1]:
    example.pop(-1)
  return example

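# Illustration only: a minimal, self-contained sketch of the one-hot
# convention used by convert_to_numpy / convert_from_numpy above. The names
# `encode` and `decode` are ours, not part of the module: a color grid maps
# to a (1, 10, 30, 30) float tensor with a 1.0 in the channel matching each
# cell's color, and decoding recovers the original grid.
import numpy as np

CHANNELS, HEIGHT, WIDTH = 10, 30, 30

def encode(grid):
  # One 1.0 per cell, placed in the channel indexed by the cell's color.
  tensor = np.zeros((1, CHANNELS, HEIGHT, WIDTH), dtype=np.float32)
  for r, row in enumerate(grid):
    for c, color in enumerate(row):
      tensor[0, color, r, c] = 1.0
  return tensor

def decode(tensor):
  # Recover each encoded cell's color; drop trailing empty cells and rows.
  rows = []
  for r in range(HEIGHT):
    cells = [int(np.argmax(tensor[0, :, r, c]))
             for c in range(WIDTH) if tensor[0, :, r, c].any()]
    rows.append(cells)
  while rows and not rows[-1]:
    rows.pop()
  return rows

grid = [[0, 1, 2], [3, 4, 5]]
assert decode(encode(grid)) == grid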
def score_network(m):
  model = onnx_tool.loadmodel(m, {'verbose': False})
  g = model.graph
  g.graph_reorder_nodes()
  g.shape_infer(None)
  g.profile()
  if not g.valid_profile:
    print("Error: Invalid profile.")
    return None, None, None
  for key in g.nodemap.keys():
    if g.nodemap[key].op_type.upper() in _EXCLUDED_OP_TYPES:
      print(f"Error: Op type {g.nodemap[key].op_type} is not permitted.")
      return None, None, None
    if g.nodemap[key].memory < 0:
      print("Error: Negative memory value detected.")
      return None, None, None
  return int(sum(g.macs)), int(g.memory), int(g.params)

def load_examples(task_num):
  """Loads relevant data from ARC-AGI and ARC-GEN."""
  if not task_num:
    return _TASK_ZERO
  with open(_NEUROGOLF_DIR + f"task{task_num:03d}.json") as f:
    examples = json.load(f)
  return examples

def run_network(session, benchmark_input):
  result = session.run(["output"], {"input": benchmark_input})
  return (result[0] > 0.0).astype(float)

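# Illustration only (not part of the module): the post-processing step in
# run_network above binarizes raw network outputs by sign, so any strictly
# positive activation counts as "this channel's color is present".
import numpy as np

raw = np.array([[-0.3, 0.0, 0.7, 2.1]], dtype=np.float32)
binarized = (raw > 0.0).astype(float)
assert binarized.tolist() == [[0.0, 0.0, 1.0, 1.0]]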
def show_examples(examples, bgcolor=(255, 255, 255)):
  # Determine the dimensions of the image to be rendered.
  width, height, offset = 0, 0, 1
  for example in examples:
    grid, output = example["input"], example["output"]
    width += len(grid[0]) + 1 + len(output[0]) + 4
    height = max(height, max(len(grid), len(output)) + 4)
  # Determine the contents of the image.
  image = [[bgcolor for _ in range(width)] for _ in range(height)]
  for example in examples:
    grid, output = example["input"], example["output"]
    grid_width, output_width = len(grid[0]), len(output[0])
    for r, row in enumerate(grid):
      for c, cell in enumerate(row):
        image[r + 2][offset + c + 1] = _COLORS[cell]
    offset += grid_width + 1
    for r, row in enumerate(output):
      for c, cell in enumerate(row):
        image[r + 2][offset + c + 1] = _COLORS[cell]
    offset += output_width + 4
  # Draw the image.
  fig = plt.figure(figsize=(10, 5))
  ax = fig.add_axes([0, 0, 1, 1])
  ax.imshow(np.array(image))
  # Draw the horizontal and vertical lines.
  offset = 1
  for example in examples:
    grid, output = example["input"], example["output"]
    grid_width, grid_height = len(grid[0]), len(grid)
    output_width, output_height = len(output[0]), len(output)
    ax.hlines([r + 1.5 for r in range(grid_height+1)],
              xmin=offset+0.5, xmax=offset+grid_width+0.5, color="black")
    ax.vlines([offset + c + 0.5 for c in range(grid_width+1)],
              ymin=1.5, ymax=grid_height+1.5, color="black")
    offset += grid_width + 1
    ax.hlines([r + 1.5 for r in range(output_height+1)],
              xmin=offset+0.5, xmax=offset+output_width+0.5, color="black")
    ax.vlines([offset + c + 0.5 for c in range(output_width+1)],
              ymin=1.5, ymax=output_height+1.5, color="black")
    offset += output_width + 2
    ax.vlines([offset+0.5], ymin=-0.5, ymax=height-0.5, color="black")
    offset += 2
  ax.set_xticks([])
  ax.set_yticks([])

def show_legend():
  image = [[(255, 255, 255) for _ in range(21)] for _ in range(5)]
  for idx, color in enumerate(_COLORS[:10]):
    image[1][2 * idx + 1] = color
  for idx, color in enumerate(_COLORS[10:]):
    for col in range(3):
      image[3][12 * idx + col + 3] = color
  fig = plt.figure(figsize=(10, 5))
  ax = fig.add_axes([0, 0, 1, 1])
  ax.imshow(np.array(image))
  for idx, _ in enumerate(_COLORS[:10]):
    color = "white" if idx in [0, 9] else "black"
    ax.text(2 * idx + 0.9, 1.1, str(idx), color=color)
  ax.text(3.4, 3.1, "no color", color="black")
  ax.text(5.75, 3.1, "<--- special colors to indicate one-hot encoding errors --->", color="black")
  ax.text(14.85, 3.1, "too many colors", color="white")
  ax.set_xticks([])
  ax.set_yticks([])

def single_layer_conv2d_network(weight_fn, kernel_size):
  kernel_offsets = range(-kernel_size // 2 + 1, kernel_size // 2 + 1)
  kernel_shape = [kernel_size, kernel_size]
  w_shape = [_CHANNELS, _CHANNELS, kernel_size, kernel_size]
  pads = [kernel_size // 2] * 4
  weight_cells = itertools.product(range(_CHANNELS), range(_CHANNELS),
                                   kernel_offsets, kernel_offsets)
  weights = [weight_fn(o, i, (r, c)) for (o, i, r, c) in weight_cells]

  x = onnx.helper.make_tensor_value_info("input", _DATA_TYPE, _GRID_SHAPE)
  y = onnx.helper.make_tensor_value_info("output", _DATA_TYPE, _GRID_SHAPE)
  w = onnx.helper.make_tensor("W", _DATA_TYPE, w_shape, weights)
  node_def = onnx.helper.make_node("Conv", ["input", "W"], ["output"],
                                   kernel_shape=kernel_shape, pads=pads)
  graph_def = onnx.helper.make_graph([node_def], "graph", [x], [y], [w])
  model_def = onnx.helper.make_model(graph_def, ir_version=_IR_VERSION,
                                     opset_imports=_OPSET_IMPORTS)
  return model_def

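# Illustration only: a hypothetical weight_fn for single_layer_conv2d_network
# above. This identity kernel copies each input channel straight to the same
# output channel, leaving the grid unchanged. The flattened weight list
# enumerates (out_ch, in_ch, row, col) in itertools.product order; for a 3x3
# kernel over 10 channels that is 10 * 10 * 3 * 3 = 900 values, of which
# exactly 10 are nonzero.
import itertools

def identity_weight(out_ch, in_ch, offset):
  return 1.0 if out_ch == in_ch and offset == (0, 0) else 0.0

offsets = range(-1, 2)  # kernel_offsets for kernel_size = 3
cells = itertools.product(range(10), range(10), offsets, offsets)
weights = [identity_weight(o, i, (r, c)) for (o, i, r, c) in cells]
assert len(weights) == 900
assert sum(weights) == 10.0
# With onnx installed, single_layer_conv2d_network(identity_weight, 3) would
# then build a one-layer network implementing this identity map.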
def verify_network(network, task_num, examples):
  filename = "task{:03d}.onnx".format(task_num)
  onnx.save(network, filename)
  if not check_network(filename): return
  try:
    session = onnxruntime.InferenceSession(filename)
  except onnxruntime.ONNXRuntimeError as e:
    print(f"Error: Unable to load ONNX model: {e}")
    return
  arc_agi_right, arc_agi_wrong, arc_agi_expected = verify_subset(
      session, examples["train"] + examples["test"])
  arc_gen_right, arc_gen_wrong, arc_gen_expected = verify_subset(
      session, examples["arc-gen"])
  print(f"Results on ARC-AGI examples: {arc_agi_right} pass, {arc_agi_wrong} fail")
  print(f"Results on ARC-GEN examples: {arc_gen_right} pass, {arc_gen_wrong} fail")
  print()
  macs, memory, params = score_network(filename)
  if macs is None or memory is None or params is None:
    print("Error: Your network performance could not be measured")
  elif arc_agi_wrong + arc_gen_wrong == 0:
    print("Your network IS READY for submission!")
    print()
    print("Performance stats:")
    onnx_tool.model_profile(filename)
    points = max(1.0, 25.0 - math.log(macs + memory + params))
    print()
    print(f"It appears to require {macs} MACs + {memory} bytes + {params} params, yielding {points:.3f} points.")
    print()
    print("Next steps:")
    print(f" * Click the link below to download {filename} onto your local machine.")
    print(" * Create a zip file containing that network along with all others.")
    print(" * Submit that zip file to the Kaggle competition so that it can be officially scored.")
    print()
    display(FileLink(filename))
  else:
    print("Your network IS NOT ready for submission.")
    expected = None
    expected = arc_agi_expected if arc_agi_expected is not None else expected
    expected = arc_gen_expected if arc_gen_expected is not None else expected
    if expected is None: return
    benchmark = convert_to_numpy(expected)
    actual = {}
    actual["input"] = expected["input"]
    actual["output"] = convert_from_numpy(run_network(session, benchmark["input"]))
    print("The expected result is shown in green; your actual result is shown in red.")
    show_examples([expected], bgcolor=(200, 255, 200))
    show_examples([actual], bgcolor=(255, 200, 200))

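# Illustration only: a standalone sketch of the scoring formula used in
# verify_network above (the function name `score_points` is ours). Smaller
# networks, i.e. fewer MACs, bytes, and parameters, earn more points, with a
# floor of 1.0.
import math

def score_points(macs, memory, params):
  return max(1.0, 25.0 - math.log(macs + memory + params))

assert score_points(1, 0, 0) == 25.0           # log(1) == 0
assert score_points(10**12, 0, 0) == 1.0       # huge networks hit the floor
assert score_points(100, 200, 50) < score_points(10, 20, 5)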
def verify_subset(session, example_subset):
  right, wrong, expected, error = 0, 0, None, ""
  for example in example_subset:
    benchmark = convert_to_numpy(example)
    if not benchmark: continue
    try:
      user_output = run_network(session, benchmark["input"])
      if np.array_equal(user_output, benchmark["output"]):
        right += 1
      else:
        expected = example
        wrong += 1
    except onnxruntime.ONNXRuntimeError:
      error = traceback.format_exc()
      wrong += 1
  if error: print(f"Error: {error}")
  return right, wrong, expected
own-solver/.moved
DELETED
|
@@ -1 +0,0 @@
|
|
| 1 |
-
Moved to own-solver/ directory to make room for ensemble strategy at repo root.
|
|
|
|
|
|
skill-creator.md
DELETED
---
name: skill-creator
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---

# Skill Creator

A skill for creating new skills and iteratively improving them.

At a high level, the process of creating a skill goes like this:

- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any (if some already exist, use them as is or modify them if something needs to change). Then explain them to the user (or, if they already existed, explain the ones that already exist)
- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if any glaring flaws become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale

Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. For instance, maybe they say "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.

On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.

Of course, you should always be flexible, and if the user says "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.

Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which has its own separate script, to optimize the triggering of the skill.

Cool? Cool.

## Communicating with the user

The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. There's a trend now where the power of Claude is inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.

So pay attention to context cues to understand how to phrase your communication. In the default case, to give you some idea:

- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion", look for serious cues from the user that they know what those things are before using them without explanation

It's OK to briefly explain terms if you're in doubt, and feel free to clarify a term with a short definition if you're unsure the user will get it.

---

## Creating a skill

### Capture Intent

Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.

1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.

### Interview and Research

Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.

Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.

### Write the SKILL.md

Based on the user interview, fill in these components:

- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, make the skill descriptions a little bit "pushy". For instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- **compatibility**: Required tools, dependencies (optional, rarely needed)
- **the rest of the skill :)**

### Skill Writing Guide

#### Anatomy of a Skill

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter (name, description required)
│   └── Markdown instructions
└── Bundled Resources (optional)
    ├── scripts/ - Executable code for deterministic/repetitive tasks
    ├── references/ - Docs loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts)
```

#### Progressive Disclosure

Skills use a three-level loading system:
1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)

These word counts are approximate and you can go longer if needed.

**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents

**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
    ├── aws.md
    ├── gcp.md
    └── azure.md
```
Claude reads only the relevant reference file.

#### Principle of Lack of Surprise

This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents, when described, should not surprise the user in their intent. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like "roleplay as an XYZ" are OK, though.

#### Writing Patterns

Prefer the imperative form in instructions.

**Defining output formats** - You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```

**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" appear in the examples themselves, you might want to deviate a little):
```markdown
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```

### Writing Style

Explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general, not super-narrow to specific examples. Start by writing a draft, then look at it with fresh eyes and improve it.

### Test Cases

After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.

Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.

```json
{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's task prompt",
      "expected_output": "Description of expected result",
      "files": []
    }
  ]
}
```

See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).

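The evals.json layout above is easy to sanity-check mechanically before running anything. A hypothetical helper (not part of the skill's own scripts) might look like:

```python
import json

def check_evals(text):
  # Confirm the top-level keys and per-eval fields described above exist.
  data = json.loads(text)
  assert "skill_name" in data and "evals" in data
  for ev in data["evals"]:
    assert {"id", "prompt"} <= set(ev)
  return len(data["evals"])

sample = ('{"skill_name": "example-skill", "evals": [{"id": 1, '
          '"prompt": "p", "expected_output": "d", "files": []}]}')
assert check_evals(sample) == 1
```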
## Running and evaluating test cases

This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.

Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.), and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.

### Step 1: Spawn all runs (with-skill AND baseline) in the same turn

For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.

**With-skill run:**

```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
```

**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.

Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.

```json
{
  "eval_id": 0,
  "eval_name": "descriptive-name-here",
  "prompt": "The user's task prompt",
  "assertions": []
}
```

### Step 2: While runs are in progress, draft assertions
|
| 200 |
-
|
| 201 |
-
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
|
| 202 |
-
|
| 203 |
-
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
|
| 204 |
-
|
| 205 |
-
Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
|
| 206 |
-
|
| 207 |
-
### Step 3: As runs complete, capture timing data
|
| 208 |
-
|
| 209 |
-
When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
|
| 210 |
-
|
| 211 |
-
```json
|
| 212 |
-
{
|
| 213 |
-
"total_tokens": 84852,
|
| 214 |
-
"duration_ms": 23332,
|
| 215 |
-
"total_duration_seconds": 23.3
|
| 216 |
-
}
|
| 217 |
-
```
|
| 218 |
-
|
| 219 |
-
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
|
| 220 |
-
|
| 221 |
-
### Step 4: Grade, aggregate, and launch the viewer
|
| 222 |
-
|
| 223 |
-
Once all runs are done:
|
| 224 |
-
|
| 225 |
-
1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.

2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:

   ```bash
   python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
   ```

   This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating `benchmark.json` manually, see `references/schemas.md` for the exact schema the viewer expects.

   Put each with_skill version before its baseline counterpart.

3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.

4. **Launch the viewer** with both qualitative outputs and quantitative data:

   ```bash
   nohup python <skill-creator-path>/eval-viewer/generate_review.py \
       <workspace>/iteration-N \
       --skill-name "my-skill" \
       --benchmark <workspace>/iteration-N/benchmark.json \
       > /dev/null 2>&1 &
   VIEWER_PID=$!
   ```

   For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.

   **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.

   Note: please use `generate_review.py` to create the viewer; there's no need to write custom HTML.

5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
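
The field-name requirement in step 1 is easy to get wrong, so it can be worth checking mechanically before saving. A minimal sketch, assuming only the `expectations` array shape described above (the helper itself is hypothetical):

```python
# Exact field names the viewer expects in each expectations entry.
REQUIRED_FIELDS = {"text", "passed", "evidence"}


def check_grading(grading):
    """Raise ValueError if any expectation uses wrong field names
    (e.g. name/met/details instead of text/passed/evidence)."""
    for i, exp in enumerate(grading["expectations"]):
        missing = REQUIRED_FIELDS - exp.keys()
        if missing:
            raise ValueError(f"expectation {i} missing fields: {sorted(missing)}")


sample = {
    "expectations": [
        {"text": "Chart has labeled axes", "passed": True,
         "evidence": "x-axis 'Month' and y-axis 'Revenue' present in chart.png"},
    ]
}
check_grading(sample)  # passes silently
```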

### What the user sees in the viewer

The "Outputs" tab shows one test case at a time:
- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox

The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
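
If you ever need to reproduce the Benchmark tab's numbers by hand, the arithmetic is simple. A sketch under the assumption that you have one pass rate per run; normally `aggregate_benchmark` computes this for you:

```python
from statistics import mean, stdev


def summarize(with_skill_rates, baseline_rates):
    """Collapse per-run pass rates (0.0 to 1.0, one per run) into the
    mean, stddev, and delta that the benchmark summary reports."""
    m_with, m_base = mean(with_skill_rates), mean(baseline_rates)
    return {
        "with_skill": {"mean": m_with, "stddev": stdev(with_skill_rates)},
        "baseline": {"mean": m_base, "stddev": stdev(baseline_rates)},
        "delta": m_with - m_base,
    }
```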

Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews", which saves all feedback to `feedback.json`.

### Step 5: Read the feedback

When the user tells you they're done, read `feedback.json`:

```json
{
  "reviews": [
    {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
    {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
  ],
  "status": "complete"
}
```

Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
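
A small sketch of that reading step (the helper is illustrative; note that non-empty comments can be praise as well as complaints, so read them rather than treating every one as a defect):

```python
import json


def runs_with_comments(feedback_path):
    """Return the run_ids where the user left a non-empty comment;
    these are the cases to read closely when planning improvements."""
    with open(feedback_path) as f:
        feedback = json.load(f)
    return [r["run_id"] for r in feedback["reviews"] if r["feedback"].strip()]
```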

Kill the viewer server when you're done with it:

```bash
kill $VIEWER_PID 2>/dev/null
```

---

## Improving the skill

This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.

### How to think about improvements

1. **Generalize from the feedback.** The big picture: we're trying to create skills that can be used a million times (maybe literally, maybe more) across many different prompts. You and the user are iterating on only a few examples because it helps move faster — the user knows these examples inside and out, so it's quick for them to assess new outputs. But if the skill you're codeveloping works only for those examples, it's useless. Rather than putting in fiddly, overfitted changes or oppressively constrictive MUSTs, try branching out when an issue is stubborn: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.

2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if the skill is making the model waste time on unproductive work, try removing the parts of the skill that cause it and see what happens.

3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and, when given a good harness, can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to understand the task, what the user actually wrote, and why they wrote it — then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.

4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.

This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision, then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.

### The iteration loop

After improving the skill:

1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat

Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress

---

## Advanced: Blind comparison

For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.

This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.

---

## Description Optimization

The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.

### Step 1: Generate trigger eval queries

Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:

```json
[
  {"query": "the user prompt", "should_trigger": true},
  {"query": "another prompt", "should_trigger": false}
]
```

The queries must be realistic: something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete, specific, and carry a good amount of detail — file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations, typos, or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).

Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`

Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`

For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.

For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.

The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
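
Before presenting the set, a quick mechanical sanity check can catch imbalance or duplicates. A sketch, with the 8-10 guideline as a configurable floor (the helper is hypothetical):

```python
import json


def check_eval_set(path, min_each=8):
    """Sanity-check a trigger eval set: enough positives and negatives
    (the 8-10 guideline) and no duplicate queries."""
    with open(path) as f:
        items = json.load(f)
    pos = sum(1 for item in items if item["should_trigger"])
    neg = len(items) - pos
    assert pos >= min_each and neg >= min_each, f"unbalanced: {pos} pos / {neg} neg"
    assert len({item["query"] for item in items}) == len(items), "duplicate queries"
    return pos, neg
```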

### Step 2: Review with user

Present the eval set to the user for review using the HTML template:

1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
   - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
   - `__SKILL_NAME_PLACEHOLDER__` → the skill's name
   - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
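
The substitution in step 2 can be sketched in a few lines; the key subtlety is that the eval data goes in as a bare JSON array, not a quoted string (the helper name is illustrative):

```python
import json


def fill_template(template_html, eval_items, skill_name, skill_description):
    """Substitute the three placeholders in the review template.
    The eval data is dropped in as a bare JSON array; it becomes a JS
    variable assignment, so it must not be wrapped in quotes."""
    return (template_html
            .replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_items))
            .replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
            .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", skill_description))
```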

This step matters — bad eval queries lead to bad descriptions.

### Step 3: Run the optimization loop

Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."

Save the eval set to the workspace, then run in the background:

```bash
python -m scripts.run_loop \
    --eval-set <path-to-trigger-eval.json> \
    --skill-path <path-to-skill> \
    --model <model-id-powering-this-session> \
    --max-iterations 5 \
    --verbose
```

Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.

While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.

This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
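
To make the "run each query 3 times" idea concrete, here is one plausible scoring function. It is a sketch of the general technique, not `run_loop.py`'s actual implementation, whose scoring may differ:

```python
def trigger_score(results):
    """Score one description against an eval set.

    `results` maps query -> (should_trigger, [trial outcomes]), where each
    trial outcome records whether the skill actually triggered. A query
    passes when the majority of its trials match the expectation; the
    score is the fraction of queries that pass.
    """
    passed = sum(
        (sum(trials) > len(trials) / 2) == should_trigger
        for should_trigger, trials in results.values()
    )
    return passed / len(results)
```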

### How skill triggering works

Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own. Simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.

This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.

### Step 4: Apply the result

Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.

---

### Package and Present (only if `present_files` tool is available)

Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the `.skill` file to the user:

```bash
python -m scripts.package_skill <path/to/skill-folder>
```

After packaging, direct the user to the resulting `.skill` file path so they can install it.

---

## Claude.ai-specific instructions

In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:

**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.

**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"

**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons, which aren't meaningful without subagents. Focus on qualitative feedback from the user.

**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.

**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`), which is only available in Claude Code. Skip it if you're on Claude.ai.

**Blind comparison**: Requires subagents. Skip it.

**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.

**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:
- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field — use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory — direct writes may fail due to permissions.

---

## Cowork-Specific Instructions

If you're in Cowork, the main things to know are:

- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than in parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then offer a link that the user can click to open the HTML in their browser.
- The Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so to reiterate: whether you're in Cowork or in Claude Code, after running tests, always generate the eval viewer — using `generate_review.py`, not your own boutique HTML — so the human can look at examples before you revise the skill and try to make corrections yourself. Sorry in advance, but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get them in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- Packaging works — `package_skill.py` just needs Python and a filesystem.
- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine, since it uses `claude -p` via subprocess, not a browser — but save it until you've fully finished making the skill and the user agrees it's in good shape.
- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the Claude.ai section above.

---

## Reference files

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.

- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do a blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another

The references/ directory has additional documentation:
- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.

---

Repeating the core loop one more time, for emphasis:

- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
  - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
  - Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.

Good luck!