Commit 92d1187 (verified) by rogermt · Parent: 260153f

Add LEARNING.md - full history, mistakes log, competitive intel, technical deep-dives

Files changed (1): LEARNING.md (+159, -0)
# NeuroGolf Solver — Learning & History

> This file accumulates everything learned across sessions. Read it to avoid repeating mistakes and to understand what techniques work. Newest entries first within each section.

## Version History

| Version | Date | Tasks (arc-gen validated) | Est LB | Key Changes |
|---------|------|--------------------------|--------|-------------|
| v4.1 | 2026-04-24 | 50 | ~670 | Color map Gather for permutations (+15 pts) |
| v4.0 | 2026-04-24 | 50 | ~656 | ARC-GEN validation, new analytical solvers, s_flip fix, static profiler, submission.csv |
| v3 | 2026-04-24 | 307 (local) / ~40 (LB) | 501 | Added concat_enhanced, varshape_spatial_gather, conv_var_diff |
| v2 | prior | 294 (local) | unknown | spatial_gather, variable-shape conv, diff-shape conv |
| v1 | prior | 128 | unknown | Conv solver only |

## Mistakes Log (DO NOT REPEAT)

### 2026-04-24: CuPy/GPU for lstsq — DOES NOT HELP
- **What**: Swapped numpy→cupy to GPU-accelerate lstsq conv fitting
- **Result**: GPU memory hit 90%, crashed on task 4 (OOM), fell back to CPU, same speed
- **Root cause**: lstsq is O(n³) — same algorithmic cost on any device. For ks=29 on 16 examples of 21×21: patch matrix is 7056×8410 = 59M elements, ~450MB float64. GPU memory fills and crashes.
- **Rule**: NEVER try to GPU-accelerate lstsq. The bottleneck is algorithmic, not device. Use `--conv_budget` to cap time.
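
The memory math above generalizes. A minimal sketch (the helper name `lstsq_matrix_bytes` is illustrative, not from the codebase) to estimate the patch-matrix footprint before committing to a kernel size:

```python
# Sketch: estimate the lstsq patch-matrix footprint up front so oversized
# kernel sizes can be skipped or capped. Matches the numbers above:
# rows = N * H * W, features = 10 * ks^2 (+1 if bias).

def lstsq_matrix_bytes(n_examples, h, w, ks, channels=10, bias=False, dtype_bytes=8):
    rows = n_examples * h * w
    features = channels * ks * ks + (1 if bias else 0)
    return rows, features, rows * features * dtype_bytes

# ks=29 on 16 examples of 21x21 -> 7056 x 8410 floats, ~450 MB in float64
rows, feats, nbytes = lstsq_matrix_bytes(16, 21, 21, 29)
```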

### 2026-04-24: Channel Gather for non-permutation color maps — WRONG OUTPUT
- **What**: Used `Gather(axis=1)` for all color maps
- **Result**: Tasks 276, 309 produced double-active channels (ch2=1 AND ch6=1 simultaneously)
- **Root cause**: Gather duplicates source channels. For map `{6→2}`, `gi[2]=6` copies ch6 to ch2, but ch6 also stays via `gi[6]=6`. Not valid one-hot.
- **Rule**: Channel Gather ONLY works for **permutation** color maps (bijective, closed set). Non-permutations need Conv 1×1.
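
The duplication is easy to reproduce in numpy. A minimal sketch mimicking `Gather(axis=1)` on a one-hot channel tensor (the values are illustrative):

```python
import numpy as np

# One pixel of color 6, one-hot over 10 channels: shape [1, 10, H, W].
onehot = np.zeros((1, 10, 1, 1))
onehot[0, 6, 0, 0] = 1.0

# Map {6 -> 2} as gather indices: gi[out_ch] = src_ch.
gi = np.arange(10)
gi[2] = 6                    # ch2 copies ch6 ...
out = onehot[:, gi, :, :]    # ... but gi[6] == 6 keeps ch6 active too

# out now has ch2 == 1 AND ch6 == 1: two active channels, not a valid
# one-hot. A permutation (also setting gi[6] = 2) keeps exactly one active.
```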

### 2026-04-24: ARC-GEN not loaded — THE #1 SCORE KILLER (v3→v4 fix)
- **What**: v3 `validate()` had an `if 'arc-gen' in td` check, but arc-gen was never loaded into `td`
- **Result**: 3267 local score → 501 LB. 85% of conv models fail on Kaggle's arc-gen validation
- **Root cause**: `load_tasks_dir()` only loaded train+test from ARC-AGI files. Arc-gen data is in separate `ARC-GEN-100K/` files.
- **Rule**: ALWAYS load arc-gen data. ALWAYS validate against it locally before submission.

### 2026-04-24: s_flip used GatherElements — OPSET 11 BUG (v3→v4 fix)
- **What**: `s_flip` solver used `GatherElements` with 4D indices
- **Result**: Works on old ORT, fails on ORT 1.25+ which enforces opset correctly
- **Rule**: NEVER use GatherElements with opset 10. Use `_build_gather_model()` (Gather on flattened spatial dim).
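The flattened-spatial trick can be sketched in numpy: any fixed spatial rearrangement (a horizontal flip here) becomes a 1-D index Gather on the flattened H×W axis, so plain Gather suffices and GatherElements is never needed. This mirrors what `_build_gather_model()` does; the numpy indexing below stands in for the ONNX Gather node.

```python
import numpy as np

H, W = 3, 4
# Precompute flat indices for the rearrangement (here: flip columns).
idx = np.arange(H * W).reshape(H, W)[:, ::-1].ravel()

x = np.arange(H * W, dtype=float).reshape(1, 1, H * W)  # [N, C, H*W] layout
flipped = x[:, :, idx].reshape(1, 1, H, W)              # == flip on the W axis

assert np.array_equal(flipped, np.flip(x.reshape(1, 1, H, W), axis=3))
```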

### 2026-04-24: score_network fallback returned (0,0,0) — WRONG COSTS
- **What**: When onnx_tool was not installed, `score_network` returned zeros
- **Result**: All costs appeared as 0, inflating the estimated score
- **Rule**: Use a static profiler that counts params+nbytes+macs by walking the ONNX graph. Matches Kaggle's calculation.

### 2026-04-24: Ignored EXCLUDED tasks
- **What**: Tried to solve tasks {21, 55, 80, 184, 202, 366}
- **Rule**: Skip these. Officially excluded; they score 0 regardless.

### Prior: GatherElements in v2 gather helpers
- **What**: `_build_gather_model()` used GatherElements (opset 11)
- **Fix**: Changed to Gather (opset 1) with 1D indices on the flattened [1,10,900] spatial dim.

## Competitive Intelligence

### High-Scoring Notebook Architecture (2026-04-24 analysis)

The top notebooks (4200+ points) are **BLENDERS**, not solvers:
1. `neurogolf-2026-tiny-onnx-solver` (est 4197): Blends 12+ other notebooks' submission.zip files. Its own solver adds 0 new tasks.
2. `4200-v5-neurogolf-fix` (est 5725): Same blend + 5 hand-crafted "LLM rescue" ONNX models for specific tasks.
3. `the-2026-neurogolf-championship`: Own solver (288 tasks) + blend from other sources.

**Key techniques competitors have that we still lack:**
- PyTorch learned conv: multi-seed Adam (seeds 0, 7, 42), 3000 steps, ternary weight snapping
- Two-layer conv: Conv→ReLU→Conv for nonlinear patterns
- Channel reduction: reduce 10→N channels (fewer colors = cheaper)
- Composition detectors: rotation+color, flip+color, transpose+color
- Extract-outline detector
- Blending from multiple notebook outputs

**Opset insight**: All top notebooks use opset 17 freely. It works on Kaggle.

### Cost Benchmarks

| Model Type | Typical Cost | Score |
|-----------|-------------|-------|
| Identity | 0 | 25.0 |
| Transpose (perm only) | 0 | 25.0 |
| Color map (Gather, permutation) | 50 | 21.1 |
| Color map (Conv 1×1) | 90,500 | 13.6 |
| Analytical (gather-based) | 12,663 | 15.5 |
| Shift (gather) | 57,663 | 14.0 |
| Conv ks=1 | 814,590–814,662 | 11.4 |
| Conv ks=5 | 4,589,390 | 9.7 |
| Conv ks=11 | 11,105,390 | 8.8 |
| Conv ks=29 | 66,129,390 | 7.0 |

### ARC-GEN Survival Rates

From the v4.0 full run (400 tasks):
- **Analytical solvers**: 100% arc-gen survival (25/25 passed)
- **conv_fixed (ks=1)**: ~80% survival (8/~10 passed)
- **conv_var**: ~14% survival (17/~125 passed) — most fail with larger kernels
- **conv_diff**: ~3% survival (1/~39 passed)
- **spatial_gather**: ~25% survival (4/16 passed) — surprising failures

Arc-gen fitting (same-size examples in lstsq) recovered ~10 additional conv tasks in v4.

## Technical Deep-Dives

### Why Conv Models Fail ARC-GEN

Conv models fitted via lstsq on 6 train+test examples learn weights that perfectly separate those examples. But arc-gen has 250+ examples with:
- Different pixel arrangements (same grid size but different content)
- Edge cases the 6 training examples don't cover

The conv weights amount to a linear classifier — if the decision boundary isn't robust, new examples fall on the wrong side.

**What helps**: Including arc-gen examples in lstsq fitting (when grid sizes match). v4 adds up to 10 arc-gen examples, giving 16 total. This improved conv_var from 7 → 17 arc-gen-validated tasks.

**What doesn't help**: Including variable-size arc-gen examples in lstsq. The feature dimension changes with grid size for fixed-shape conv, and for variable-shape conv the 30×30 embedding creates too many zero-padded patches that dominate the lstsq solution.

### lstsq Performance Characteristics

For kernel size `ks` on `N` examples of size `H×W`:
```
Features = 10 × ks² (+ 1 if bias)
Rows = N × H × W
lstsq cost = O(rows × features²) [for rows > features]
           = O(rows² × features) [for features > rows]
```

Practical timing (CPU, numpy):
- ks=1, 6 examples of 10×10: ~0.001s
- ks=5, 16 examples of 15×15: ~0.1s
- ks=15, 16 examples of 20×20: ~5s
- ks=29, 16 examples of 21×21: ~30s
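The fitting procedure behind these numbers can be sketched as follows (function and variable names are illustrative, not the repo's actual API): build the patch matrix described above and solve one least-squares system covering all 10 output channels.

```python
import numpy as np

def fit_conv_lstsq(examples, ks):
    """Fit a ks x ks conv on one-hot grids by least squares.

    examples: list of (inp, out) int arrays of equal shape H x W,
    colors 0-9. Returns a (10*ks*ks) x 10 weight matrix.
    """
    pad = ks // 2
    X_rows, Y_rows = [], []
    for inp, out in examples:
        oh = np.eye(10)[inp]  # H x W x 10 one-hot encoding
        oh = np.pad(oh, ((pad, pad), (pad, pad), (0, 0)))
        H, W = inp.shape
        for i in range(H):
            for j in range(W):
                X_rows.append(oh[i:i + ks, j:j + ks].ravel())  # 10*ks^2 feats
                Y_rows.append(np.eye(10)[out[i, j]])
    X, Y = np.array(X_rows), np.array(Y_rows)
    weights, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return weights
```

On an identity task with ks=1 this recovers the identity weight matrix; same-size arc-gen examples can simply be appended to `examples` to regularize the fit, as described above.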

### The 113 Same-Size Fixed Tasks

Analysis found 113 unsolved same-shape tasks where arc-gen uses IDENTICAL grid sizes to train/test. These are prime targets for arc-gen-enhanced lstsq fitting. v4 recovers ~10 of these; the rest need larger kernels or multi-layer networks.

### Variable-Shape Tasks (77 unsolved)

These tasks have input-dependent output shapes. No static ONNX graph can produce different-sized outputs. The only approach: conv learns to place content in the right 30×30 region, masked by `ReduceSum(input)`. But this fails when the output extends beyond the input bounds or when the spatial mapping depends on content.
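The masking idea can be sketched in numpy (a stand-in for an ONNX `ReduceSum` + `Mul` pattern; the channel-axis reduction and values here are illustrative assumptions): every occupied cell of the one-hot input has exactly one active channel, so summing over channels yields a 0/1 occupancy mask for the input's region of the 30×30 canvas.

```python
import numpy as np

canvas = np.zeros((10, 30, 30))    # one-hot [C, 30, 30] embedding of the input
canvas[3, :5, :7] = 1.0            # a 5x7 input grid, all color 3 for brevity

mask = canvas.sum(axis=0, keepdims=True)   # 1 inside the input region, else 0
conv_out = np.ones((10, 30, 30))           # stand-in for the conv's raw output
masked = conv_out * mask                   # output clamped to the input bounds

# Anything the task places outside the 5x7 input region is zeroed out,
# which is exactly the failure mode when the output exceeds the input bounds.
```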

## Data Notes

### ARC-GEN File Format
```python
# ARC-GEN-100K/{hex_id}.json — LIST of examples (not dict):
[{"input": [[...]], "output": [[...]]}, ...]

# On Kaggle, already embedded in task JSON:
{"train": [...], "test": [...], "arc-gen": [...]}
```

### Task Numbering
Tasks are numbered 1-400 based on alphabetical sort of hex filenames in `ARC-AGI/data/training/`. The hex ID → task number mapping is stable.

### ARC-GEN Generator
https://github.com/google/ARC-GEN — Can generate MORE examples per task for better fitting. Not yet explored.

## Reference Notebooks (in repo as neurogolf-2026-solver-notebooks.zip)

| Notebook | Est LB | Tasks Solved | Technique | Key Source Count |
|----------|--------|-------------|-----------|-----------------|
| neurogolf-2026-tiny-onnx-solver | ~4200 | 338 | Mega-blend 12+ zips | 203 from mega-agi-ensemble |
| 4200-v5-neurogolf-fix | ~5725 | 341 | Same blend + 5 manual | 338 from zip_2 |
| the-2026-neurogolf-championship | ~3200 est | 288 | Own solver + blend | gravity, outline, composition |
| neurogolf-logic-driven-ensembling | — | 401 | Pure ensembling | — |