# NeuroGolf Solver – Learning & History
> This file accumulates everything learned across sessions. Read it to avoid repeating mistakes and to understand what techniques work. Newest entries first within each section.
## Version History
| Version | Date | Tasks (arc-gen validated) | Est LB | Key Changes |
|---------|------|--------------------------|--------|-------------|
| **v5.2** | **2026-04-26** | **52 locally, REJECTED on Kaggle** | **~710 (local)** | gravity.py (Task 78), mode.py (Task 129), edge.py (0 matches). **Kaggle rejected submission – profiler/validation gap.** |
| v5.1 | 2026-04-26 | 49 | ~604 | Exp 3: PCA/SVD 0 PCR solves. Refactored conv.py composable primitives. |
| v5.0 | 2026-04-26 | 49 | ~604 | Refactored to 16-file package, opset 17 (IR 8), Slice-based flip/rotate, lstsq crash fix |
| v4.3 | 2026-04-25 | 50 | ~670 | Updated docs. NO code changes. |
| v4.2 | 2026-04-24 | 50 | ~670 | PyTorch learned conv. Needs GPU. |
| v4.1 | 2026-04-24 | 50 | ~670 | Color map Gather for permutations (+15 pts) |
| v4.0 | 2026-04-24 | 50 | ~656 | ARC-GEN validation, new analytical solvers, static profiler, submission.csv |
| v3 | 2026-04-24 | 307 (local) / ~40 (LB) | 501 | concat_enhanced, varshape_spatial_gather, conv_var_diff |
| v2 | prior | 294 (local) | unknown | Spatial_gather, variable-shape conv, diff-shape conv |
| v1 | prior | 128 | unknown | Conv solver only |
## Mistakes Log (DO NOT REPEAT)
### 2026-04-26: Agent replaced user's score_network (onnx_tool) with silent fallback – CAUSED KAGGLE REJECTION
- **What**: The v5 refactor created `profiler.py` with a `_static_profile()` fallback that runs when `onnx_tool` is not installed. The fallback is wrapped in a bare `except: pass`, so if `onnx_tool` fails on a model (dynamic shapes, unsupported ops, opset 17 issues), the code **silently** falls through to a crude static profiler that returns fake scores instead of surfacing the error.
- **Result**: User's v5.2 submission was **rejected by Kaggle**. The 49 previously-accepted tasks worked, but the 3 new models (gravity.py, edge.py, mode.py) likely failed `onnx_tool.loadmodel()` shape inference or profiling. The local static profiler returned numbers that looked valid, so the user had no warning before submitting.
- **Root cause**:
1. User originally coded `score_network` to call `neurogolf_utils.score_network()` directly, which uses `onnx_tool` and surfaces errors properly.
2. Agent's v5 refactor wrapped it in `try/except: pass` and added `_static_profile()` fallback.
3. `_static_profile()` only counts Conv MACs (misses ReduceSum, Where, MatMul, etc.), only counts initializer bytes, and does NOT verify static shapes or check `onnx_tool` compatibility.
4. The fallback **hides failures**: models that Kaggle's `score_network` would reject appear to score fine locally.
- **The official validation pipeline** (from `neurogolf_utils.py`):
1. `check_network(filename)` → file size ≤ 1.44 MB
2. `onnxruntime.InferenceSession(filename)` → model loads
3. `verify_subset(session, examples)` → correct outputs on all splits
4. `score_network(filename)` → uses `onnx_tool.loadmodel()` → `g.shape_infer()` → `g.profile()` → checks `g.valid_profile`, banned ops (UPPERCASE), negative memory. Returns `(None, None, None)` if ANY of these fail → model is NOT READY for submission.
- **What the static profiler gets wrong**:
- Only counts Conv MACs: the gravity model has Conv+ReduceSum+Where+Greater+And+Not per step, all uncounted
- Banned op check uses mixed-case `{'Loop', 'Scan', ...}` but Kaggle checks `op_type.upper()` against `["LOOP", "SCAN", ...]`
- No `onnx.checker.check_model()` call
- No static shape verification
- No `onnx_tool` compatibility check
- **Rule**: NEVER silently fall back to a weaker validator. If the official scoring tool fails on a model, that model MUST be treated as unsolved. Surface the error, don't hide it.
- **Rule**: NEVER change the user's validation pipeline without understanding what it does. The user's `score_network` call was correct: it used `onnx_tool` directly.
### Fix Plan (must be done before next submission):
1. **profiler.py**: Remove silent fallback. If `onnx_tool` is available, use it. If it returns `(None, None, None)`, the model is REJECTED (unsolved). If `onnx_tool` is not installed, print a loud WARNING that scores are approximate and may not match Kaggle.
2. **validators.py**: Add a `check_network()` equivalent: file size check (already done), `onnx.checker.check_model()`, banned-op scan (UPPERCASE comparison), static shape verification on all tensors.
3. **solver_registry.py**: After a model passes `validate()` (correct outputs), also run `score_network()` from the profiler. If it returns `(None, None, None)`, treat the model as failed and try the next solver (see the sketch after this list). This catches models that produce correct outputs but can't be scored by Kaggle.
4. **main.py**: `--strict_size` already stops on oversized files. Add `--strict_score` (default True) to stop if any solved model returns `(None, None, None)` from `score_network()`.
5. **Test on Kaggle notebook**: Before submitting, run `neurogolf_utils.verify_network()` on ALL solved models in a Kaggle notebook. This is the ONLY way to be sure; local testing without `onnx_tool` cannot catch all failure modes.
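A minimal sketch of the gate described in item 3. All names here (`solve_with_gate`, the solver interface, `validate`, `score_network`) are illustrative placeholders, not the real registry API:
```python
def solve_with_gate(task, solvers, validate, score_network):
    """Try solvers in order; accept only models Kaggle can also score."""
    for solver in solvers:
        model_path = solver(task)
        if model_path is None or not validate(model_path, task):
            continue  # wrong outputs: try the next solver
        macs, memory, params = score_network(model_path)
        if macs is None:  # (None, None, None): onnx_tool cannot score it
            continue      # correct outputs but unscoreable: treat as failed
        return model_path
    return None  # task stays unsolved; Kaggle assigns it 1.0
```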
### 2026-04-26: Agent put entire 1400-line codebase into a single file, repeatedly overwrote user's code
- **What**: When implementing v5 opset 17 changes, agent uploaded the entire solver as a single `neurogolf_solver.py` file three times. Each upload overwrote the user's `run_tasks`, `main`, and W&B code that the agent couldn't read (the read tool truncates at ~1000 lines).
- **Result**: User's W&B logging code was deleted. User's `run_tasks` function was deleted. User had to point agent to a specific commit (3f3d372) to recover.
- **Root cause**: (1) Agent couldn't read the tail of the file due to tool truncation, so it rewrote the entire file from scratch instead of making surgical edits. (2) Agent prioritized "getting it done" over preserving existing working code.
- **Rule**: NEVER rewrite an entire file when you can't read all of it. Make surgical edits. NEVER destroy code you can't see.
### 2026-04-26: lstsq SVD non-convergence crash on task 313
- **What**: `np.linalg.lstsq(P, T_oh, rcond=None)` raised `LinAlgError: SVD did not converge`.
- **Fix**: Wrapped lstsq in `try/except (LinAlgError, ValueError): return None` in all call sites.
- **Rule**: EVERY lstsq call must be guarded.
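A minimal sketch of the guard used at the call sites (the wrapper name is illustrative):
```python
import numpy as np

def safe_lstsq(P, T_oh):
    # SVD can fail to converge (as on task 313) or choke on degenerate
    # inputs; both must mean "this solver failed", never a crash.
    try:
        WT, *_ = np.linalg.lstsq(P, T_oh, rcond=None)
        return WT
    except (np.linalg.LinAlgError, ValueError):
        return None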
### 2026-04-26: ReduceSum axes attribute invalid in opset 17
- **What**: Code used axes as attribute instead of tensor input (opset 13+ requirement).
- **Fix**: Created `_build_reducesum()` helper with axes as int64 initializer tensor.
- **Rule**: Audit ALL operators for breaking API changes when changing opset.
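A sketch of such a helper using the standard onnx.helper API; the real `_build_reducesum()` signature may differ:
```python
import numpy as np
from onnx import helper, numpy_helper

def build_reducesum(input_name, output_name, axes, prefix):
    # Opset 13+ takes axes as an int64 tensor input, not an attribute.
    axes_init = numpy_helper.from_array(
        np.asarray(axes, dtype=np.int64), name=f"{prefix}_axes"
    )
    node = helper.make_node(
        "ReduceSum",
        inputs=[input_name, f"{prefix}_axes"],
        outputs=[output_name],
        keepdims=1,
    )
    return node, axes_init  # caller adds axes_init to graph initializers
```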
### 2026-04-26: Fake excluded tasks {21, 55, 80, 184, 202, 366}
- **What**: Agent added 6 "excluded" tasks to constants.py. There are NO excluded tasks: all 400 count.
- **Fix**: `EXCLUDED_TASKS = set()`
- **Rule**: All 400 tasks must be attempted. Do not invent exclusions.
### 2026-04-26: est_lb inflated by adding unsolved × 1.0
- **What**: `est_lb = total_score + unsolved_count * 1.0` double-counted unsolved task scores.
- **Fix**: Report only solved score. Unsolved tasks get 1.0 on Kaggle automatically.
- **Rule**: est_lb should reflect only what we earn from solved tasks.
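In code, with illustrative names:
```python
# points[t] holds the official per-task score for each solved task.
solved_score = sum(points[t] for t in solved_tasks)
est_lb = solved_score  # report earned points only; Kaggle adds the
                       # automatic 1.0 per unsolved task by itself
```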
### 2026-04-25: Agent wrote 1919 lines of v5 code WITHOUT running full 400-task arc-gen validation
- **Rule**: NEVER mark a feature as done until validated against full arc-gen.
### 2026-04-25: Agent created version-named file violating project convention
- **Rule**: No version numbers in filenames.
### 2026-04-25: Agent claimed LOOCV Ridge tuning would improve arc-gen without evidence
- **Rule**: Theory from papers is NOT proof. Test first.
### 2026-04-25: Agent misrepresented user's intent – BLENDING is NOT the strategy
- **Rule**: LEARNING.md reflects USER'S strategy.
### 2026-04-25: Composition detectors, channel reduction wrapper – untested dead code
- **Rule**: Only add a solver if it demonstrably solves β₯1 task.
### 2026-04-25: Agent delivered untested code and asked user to validate it
- **Rule**: VALIDATE FIRST, DELIVER SECOND.
### 2026-04-24: PyTorch 2-layer conv – fits training but doesn't generalize to arc-gen
### 2026-04-24: Arc-gen in lstsq fitting exposes overfitting
### 2026-04-24: CuPy/GPU for lstsq – DOES NOT HELP
### 2026-04-24: Channel Gather for non-permutation color maps – WRONG OUTPUT
### 2026-04-24: ARC-GEN not loaded – THE #1 SCORE KILLER (v3→v4 fix)
### 2026-04-24: s_flip used GatherElements – OPSET 11 BUG
### 2026-04-24: score_network fallback returned (0,0,0)
### 2026-04-24: Ignored EXCLUDED tasks
## Competitive Intelligence
### What Others Do (For Awareness Only – We Do NOT Blend)
#### Why top notebooks score 4000+ and we score ~670
Top notebooks are **BLENDERS**: they assemble pre-solved ONNX models from public sources.
**Our strategy**: Build our own solver. No blending. No public datasets.
#### The 6 Key Techniques They Have That We Lack
1. **Opset 17** – ✅ DONE in v5. Slice+Transpose for near-zero cost transforms.
2. **Channel Reduction Wrapper** – 🔲 Not yet. Conv1x1(10→N) → transform → Conv1x1(N→10).
3. **Composition Detectors** – 🔲 Not yet. Need to scan 400 tasks to find actual instances first.
4. **Best-of-N Model Selection** – 🔲 Not yet. Generate 20+ candidates, keep the cheapest valid one (see the sketch after this list).
5. **ONNX Optimizer Pass** – 🔲 Not yet. onnxoptimizer.optimize() for dead-code elimination.
6. **LLM Rescue** – 🔲 Not yet. Per-task ONNX graphs for algorithmic tasks (gravity, outline, etc.)
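A minimal sketch of technique 4, assuming `validate` and `score_network` behave as described in the Fix Plan above (all names illustrative):
```python
def best_of_n(candidates, validate, score_network):
    # Among valid, scoreable candidates, keep the one with the lowest
    # cost, since points = max(1.0, 25.0 - log(macs + memory + params)).
    best, best_cost = None, float("inf")
    for model_path in candidates:
        if not validate(model_path):
            continue
        macs, memory, params = score_network(model_path)
        if macs is None:
            continue  # unscoreable by onnx_tool: Kaggle would reject it
        cost = macs + memory + params
        if cost < best_cost:
            best, best_cost = model_path, cost
    return best
```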
## Deep Research Findings
### Exp 3: PCA/Truncated SVD Before lstsq – FULL RESULTS (2026-04-26)
**Implementation:** Refactored conv.py into composable primitives:
- `_build_patch_matrix(exs, ks, bias, full_30)` → P, T, T_oh
- `_solve_weights(P, T, T_oh)` → WT via raw lstsq
- `_solve_weights_pcr(P, T, T_oh, thresholds)` → WT via PCA regression
- `_extract_weights(WT, ks, bias)` → Wconv, B for ONNX
**Full 400-task run:** 0 PCR solves, 0 regressions.
**Conclusion:** Architecture mismatch, not regularization. Regularization experiments exhausted.
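For the record, a condensed sketch of the PCR step that was tried (simplified; the real `_solve_weights_pcr` takes a threshold list and the function name here is illustrative):
```python
import numpy as np

def solve_weights_pcr(P, T_oh, var_threshold=0.99):
    # Project patches onto the top singular directions explaining
    # var_threshold of the variance, solve the regression in that
    # reduced space, then map the weights back to patch space.
    try:
        U, S, Vt = np.linalg.svd(P, full_matrices=False)
        ratio = np.cumsum(S**2) / np.sum(S**2)
        k = int(np.searchsorted(ratio, var_threshold)) + 1
        Z = P @ Vt[:k].T                      # reduced design matrix
        W_z, *_ = np.linalg.lstsq(Z, T_oh, rcond=None)
        return Vt[:k].T @ W_z                 # back to patch space
    except (np.linalg.LinAlgError, ValueError):
        return None  # every lstsq/SVD call must be guarded
```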
### ONNX Opset 17 Migration Notes (2026-04-26)
**Breaking changes from opset 10:**
| Operator | Opset 10 | Opset 13+ (incl. 17) |
|----------|----------|----------------------|
| ReduceSum | axes as **attribute** | axes as **tensor input** |
| ReduceMean | axes as **attribute** | axes as **tensor input** |
| Pad | pads as **attribute** | pads as **tensor input** (since opset 11) |
| Slice | attributes only, no steps (Slice-1) | starts/ends/axes/**steps** as tensor inputs (since opset 10) |
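This is what makes the v5 Slice-based flips cheap. A sketch for an NCHW input, with illustrative builder and tensor names:
```python
import numpy as np
from onnx import helper, numpy_helper

INT64_MIN = np.iinfo(np.int64).min

def build_flip_width(input_name, output_name, prefix):
    # Reverse the W axis (axis 3 of NCHW) via Slice with step -1:
    # start at -1, end past the beginning (INT64_MIN), step backwards.
    names = ("starts", "ends", "axes", "steps")
    values = ([-1], [INT64_MIN], [3], [-1])
    inits = [
        numpy_helper.from_array(np.array(v, dtype=np.int64), name=f"{prefix}_{n}")
        for n, v in zip(names, values)
    ]
    node = helper.make_node(
        "Slice",
        [input_name] + [f"{prefix}_{n}" for n in names],
        [output_name],
    )
    return node, inits  # caller adds inits to graph initializers
```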
### Official Scoring Pipeline (from neurogolf_utils.py) – READ BEFORE CODING
```python
# This is what Kaggle runs. Our validator MUST match this.
def check_network(filename):
    # 1. File must exist
    # 2. File size <= 1.44 MB (1.44 * 1024 * 1024 bytes)
    ...

def score_network(filename):
    # Uses onnx_tool.loadmodel() -> shape_infer() -> profile()
    # Checks: g.valid_profile (static shapes required)
    # Checks: op_type.upper() not in ["LOOP","SCAN","NONZERO","UNIQUE","SCRIPT","FUNCTION"]
    # Checks: g.nodemap[key].memory >= 0
    # Returns (macs, memory, params) or (None, None, None) on ANY failure
    # (None, None, None) = "Your network performance could not be measured" = REJECTED
    ...

def verify_network(network, task_num, examples):
    # 1. onnx.save -> check_network (size)
    # 2. InferenceSession (loads ok?)
    # 3. verify_subset on train+test (correct outputs?)
    # 4. verify_subset on arc-gen (correct outputs?)
    # 5. score_network (scoreable by onnx_tool?)
    # ALL must pass for "IS READY for submission"
    ...
```
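A strict local gate built only from the calls named above, assuming `loadmodel()` returns a model whose `.graph` exposes `shape_infer()`/`profile()` as the utils code uses them (a sketch: any exception or failed check means REJECTED, never a fallback):
```python
import onnx_tool

BANNED = ["LOOP", "SCAN", "NONZERO", "UNIQUE", "SCRIPT", "FUNCTION"]

def strict_gate(filename):
    """Return True only if onnx_tool can profile the model cleanly."""
    try:
        g = onnx_tool.loadmodel(filename).graph
        g.shape_infer()   # fails on dynamic/unresolvable shapes
        g.profile()
        if not g.valid_profile:
            return False  # Kaggle returns (None, None, None) here
        for key in g.nodemap:
            node = g.nodemap[key]
            if node.op_type.upper() in BANNED or node.memory < 0:
                return False
        return True
    except Exception:
        return False      # never fall back to a weaker local profiler
```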
## What Has NOT Worked
| Technique | Result | Why |
|-----------|--------|-----|
| **PCA/Truncated SVD (Exp 3)** | **0/400 PCR solves** | **Signal in noise dims; unsolved tasks = architecture mismatch** |
| **Silent profiler fallback** | **Kaggle rejection** | **Hides onnx_tool failures, returns fake scores** |
| Ridge/LOOCV λ | Fails arc-gen | Catastrophic, not benign overfitting |
| Skip ks=5,7,9 (Exp 1) | Hurts 2 tasks | Some tasks genuinely need interpolation-regime ks |
| CuPy GPU lstsq | OOM + same speed | O(n³) SVD bottleneck |
| PyTorch 2-layer (no arc-gen) | 0/30 arc-gen pass | Memorizes training |
## Technical Notes
### ARC-AGI Task Statistics
- 400 tasks total. NO excluded tasks: all 400 count.
- ~25 analytical tasks, ~25 conv tasks survive arc-gen, ~350 unsolved
### Score Calculation (official, from neurogolf_utils.py)
```python
# Uses onnx_tool for exact MACs/memory/params - NOT our static profiler
macs, memory, params = score_network(filename) # onnx_tool based
points = max(1.0, 25.0 - math.log(macs + memory + params))
```
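For intuition, with illustrative numbers: a model whose macs + memory + params total ≈ 22,000 scores 25.0 - ln(22,000) ≈ 15.0 points, while any total above e²⁴ ≈ 2.6 × 10¹⁰ floors out at the 1.0 minimum; every order of magnitude shaved off the total is worth about 2.3 points.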
## Session Notes for Future Agents
**Before touching code:**
1. Read this file (LEARNING.md) all the way through
2. Read SKILL.md, especially "Development Methodology" and "Submission Checklist"
3. Read TODO.md; check the experiment log and research queue
4. Run the current solver on 20-50 tasks to establish baseline
5. Only then: design experiment, implement, validate, compare
**Code structure (v5.2):**
- The solver is a Python package at `neurogolf_solver/`
- Run with `python -m neurogolf_solver.main [args]`
- Solvers in separate files: `analytical.py`, `geometric.py`, `tiling.py`, `conv.py`, `gravity.py`, `edge.py`, `mode.py`
- Edit individual files surgically; NEVER rewrite the whole package
- The legacy `neurogolf_solver.py` at root is v4, kept for reference; do NOT edit it
**CRITICAL: Scoring & Validation:**
- The ONLY reliable scoring is `neurogolf_utils.score_network()` which uses `onnx_tool`
- `profiler.py`'s `_static_profile()` is a fallback that DOES NOT match Kaggle scoring
- Before submitting: run `neurogolf_utils.verify_network()` on ALL solved models in a Kaggle notebook
- If `score_network` returns `(None, None, None)`, the model is REJECTED; do not submit it
**Before claiming a feature works:**
- Must pass arc-gen on ≥20 tasks (or full 400 if cheap)
- Must pass `neurogolf_utils.verify_network()`, not just our own validate()
- Must include A/B comparison
**Before uploading code:**
- Must have run full 400-task arc-gen validation
- Must confirm total score β₯ previous best
- NEVER change the scoring/validation pipeline without understanding what it does