rogermt committed · verified
Commit 2d9804f · 1 Parent(s): d189f4f

Add deep competitive analysis from notebook dissection (2026-04-25)

Files changed (1)
  1. LEARNING.md +233 -14
LEARNING.md CHANGED
@@ -65,6 +65,175 @@
 
 ## Competitive Intelligence
 
+ ### Deep Notebook Dissection (2026-04-25) — THE DEFINITIVE ANALYSIS
+
+ #### Why top notebooks score 4000+ and we score ~670
+
+ The top notebooks are **BLENDERS**, not solvers. The entire leaderboard meta-game is about
+ assembling the best portfolio of pre-solved ONNX models from public sources, not about
+ building a better solver from scratch.
+
+ #### Quantified Breakdown
+
+ | Notebook | Own Solver Tasks | Blended from Others | Total Solved | Est Score |
+ |---|---|---|---|---|
+ | `neurogolf-2026-tiny-onnx-solver` | **0** from own solver | 338 from 12 ZIP + 5 dataset dirs | 338 | ~4200 |
+ | `4200-v5-neurogolf-fix` | **5** manual LLM rescue | 341 from 5 ZIP sources | 346 | ~5700 |
+ | `the-2026-neurogolf-championship` | ~20 from own solver | 288 from **24 Kaggle dataset** sources | 288 | ~3600 |
+ | `neurogolf-4200-solver` (full solver) | ~20 analytical | 288 from 24 dataset sources | 288 | ~3600 |
+ | **Our solver v4** | **~50** from solver | **0** blended | 50 | ~670 |
+
+ #### How the Blend Pipeline Works (from `neurogolf-2026-tiny-onnx-solver`)
+
+ ```
+ Phase 1: ZIP blend
+   - Auto-discovers ALL submission.zip files from attached Kaggle notebook outputs
+   - 12 sources: mega-agi-ensemble(203), the-2026-neurogolf-championship(105),
+     neurogolf-2026-starter(77), baseline-for-ensemble-1k(8), infinitesimals(4),
+     arc-nano-engine(2), plus 6 more with 0 valid models
+   - Each model: strict_validate(raw, task_id) using neurogolf_utils
+       → verify_subset(session, train+test) + verify_subset(session, arc-gen)
+       → score_network(path) for official cost
+   - Keep the cheapest valid model per task
+
+ Phase 2: Dataset ONNX dirs
+   - Scans loose .onnx files from attached dataset directories
+   - Same strict validation
+
+ Phase 3: Own solver (minimal)
+   - Runs only on tasks still unsolved after the blend (62 remaining)
+   - Detectors: identity, color_map, rotation, flip, transpose, tile, scale,
+     nonuniform_scale, mirror_h/v, quad_mirror, shift, fixed_crop,
+     rot+color, flip+color, transpose+color, gravity, extract_outline
+   - Learned conv: try_learned_conv(ks=1,3,5) with PyTorch + ternary snap
+   - Two-layer conv: Conv→ReLU→Conv (ks1=3,5; ks2=1)
+   - Result: +0 new tasks (all 62 remaining were too hard)
+ ```
+
+ Result after all phases: 338/400 tasks, est. 4197.5 points.
+
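+ A minimal sketch of the Phase 1 selection loop, reusing the `strict_validate(raw, task_id)`
+ and `score_network(path)` helpers named above (their internals live in neurogolf_utils;
+ the paths and ZIP layout here are illustrative):
+
+ ```python
+ import glob
+ import zipfile
+ from pathlib import Path
+
+ best = {}  # task_id -> (cost, model_bytes): cheapest validated model per task
+ for zpath in glob.glob('/kaggle/input/*/submission.zip'):
+     with zipfile.ZipFile(zpath) as z:
+         for name in z.namelist():
+             if not name.endswith('.onnx'):
+                 continue
+             task_id = Path(name).stem
+             raw = z.read(name)
+             if not strict_validate(raw, task_id):  # train + test + arc-gen check
+                 continue
+             tmp = Path('/tmp') / f'{task_id}.onnx'
+             tmp.write_bytes(raw)
+             cost = score_network(str(tmp))         # official cost metric
+             if task_id not in best or cost < best[task_id][0]:
+                 best[task_id] = (cost, raw)
+ ```
+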
+ #### How `the-2026-neurogolf-championship` Gets 288 Tasks (from `neurogolf-4200-solver`)
+
+ This one has the richest **dataset source** collection — 24 Kaggle datasets:
+ ```
+ Cross_Source: 227 ONNX    Task_Transformation: 266   Golf_Aura: 254
+ ONNX_Solutions_v31: 252   Publi_Data: 206            Agent: 206
+ Logic: 204                Logic_for_ARC: 204         Yash_Submission: 172
+ Yash_Submission_v1: 168   Claude_Golf: 160           Ashok_Submission: 160
+ NeuroGolf1k_A: 158        NeuroGolf1k_B: 132
+ TestGolf_S014-S203: 9 × 207 each (task-specific strong models)
+ Total: ~4632 pre-solved ONNX models across sources
+ ```
+
+ After official validation: 288 unique tasks solved.
+ Source breakdown: Cross_Source=169, Task_Transformation=55, ONNX_Solutions_v31=49, Golf_Aura=11.
+
+ #### How `4200-v5-neurogolf-fix` Gets 341+ Tasks
+
+ Blends from 5 ZIP sources:
+ ```
+ SOURCE_ZIPS:
+   '1': neurogolf-2026-starter (335 models)
+   '2': neurogolf-2026-tiny-onnx-solver (338 models)  ← the blend notebook itself!
+   '5': infinitesimals (341 models)
+   '7': logic-decoder (338 models)
+   '8': neurogolf-2026-blended-341-tasks-lb-4215 (341 models)
+ ```
+
+ Plus **5 hand-crafted "LLM Rescue" ONNX models** for tasks 076, 096, 118, 133, 264.
+ Each is a "huge static graph": a per-task ONNX network built by an LLM that embeds
+ the entire set of known examples and builds a matching/dispatch circuit.
+
+ #### The 6 Key Techniques They Have That We Lack
+
+ **1. Opset 17 (NOT 10)**
+ All top notebooks use `oh.make_opsetid('', 17)`. Opset 17 works fine on Kaggle.
+ This enables:
+ - `Slice` with negative steps (for flip/rotate — zero MACs, zero initializers)
+ - `Pad` with dynamic pads
+ - `ScatterND` for hash-based matchers
+ - `Where` for conditional logic
+
+ Their rot90 = `Crop → Transpose → Slice(reverse)` = **~0 cost**.
+ Our rot90 = Gather with a 900-element int64 index = **~12,663 cost**.
+ **Switching to opset 17 alone would roughly halve the cost of every analytical solver.**
+
+ **2. Cheap Slice-based ONNX Builders (zero-cost transforms)**
+ Instead of Gather-index models, they use:
+ ```python
+ def make_rot90cw(h, w):
+     nodes = _crop('input', 'c', h, w)
+     nodes += [make_node('Transpose', ['c'], ['t'], perm=[0, 1, 3, 2])]
+     nodes += _slice_reverse([3], [h], 't', 'output')  # Slice with step=-1
+     return _model(nodes, 'rot90cw')
+ ```
+ No initializers, no Gather indices, no masks. Cost ≈ 0.
+
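+ For reference, the same idea as a self-contained builder using only `onnx.helper`
+ (the `_crop` step is elided and a fixed grid is assumed; this is a sketch, not the
+ notebook's exact code):
+
+ ```python
+ import onnx
+ from onnx import TensorProto, helper as oh
+
+ def make_rot90cw_model(h=30, w=30):
+     # Transpose swaps H and W, then Slice with step=-1 reverses the new last
+     # axis: out[i, j] = in[h-1-j, i], i.e. a clockwise 90-degree rotation.
+     # No Gather, no 900-element index -- only four 1-element int64 initializers.
+     nodes = [
+         oh.make_node('Transpose', ['input'], ['t'], perm=[0, 1, 3, 2]),
+         oh.make_node('Slice', ['t', 'starts', 'ends', 'axes', 'steps'], ['output']),
+     ]
+     inits = [
+         oh.make_tensor('starts', TensorProto.INT64, [1], [-1]),
+         oh.make_tensor('ends', TensorProto.INT64, [1], [-h - 1]),
+         oh.make_tensor('axes', TensorProto.INT64, [1], [3]),
+         oh.make_tensor('steps', TensorProto.INT64, [1], [-1]),
+     ]
+     graph = oh.make_graph(
+         nodes, 'rot90cw',
+         [oh.make_tensor_value_info('input', TensorProto.FLOAT, [1, 10, h, w])],
+         [oh.make_tensor_value_info('output', TensorProto.FLOAT, [1, 10, w, h])],
+         inits,
+     )
+     return oh.make_model(graph, opset_imports=[oh.make_opsetid('', 17)])
+ ```
+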
+ **3. PyTorch Learned Conv with Ternary Snap**
+ ```python
+ def try_learned_conv(train, all_pairs, ks=1, steps=3000, lr=0.03, seeds=(0, 7, 42)):
+     for seed in seeds:
+         conv = nn.Conv2d(10, 10, ks, padding=ks // 2, bias=False)
+         # ... Adam, 3000 steps, MSE loss ...
+         # Try both float weights AND ternary-snapped {-1, 0, 1}
+         for w_cand in [w_float, _ternary_snap(w_float)]:
+             model = make_conv_onnx(w_cand)
+             if verify_model(model, all_pairs):  # validates against train+test+arc-gen
+                 candidates.append(model)
+ ```
+ Key insight: ternary weights are much cheaper (fewer unique values = smaller model).
+
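+ The `_ternary_snap` body isn't shown in the notebook excerpt; a plausible sketch of
+ it, plus the fitting loop it wraps (threshold and normalization are assumptions):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def ternary_snap(w: torch.Tensor, thresh: float = 0.5) -> torch.Tensor:
+     # Normalize by the largest magnitude, zero the small weights, keep signs.
+     q = w / w.abs().max().clamp(min=1e-8)
+     return torch.where(q.abs() < thresh, torch.zeros_like(q), q.sign())
+
+ def fit_conv(pairs, ks=1, steps=3000, lr=0.03, seed=0):
+     # pairs: list of (x, y) one-hot grids shaped [1, 10, H, W].
+     torch.manual_seed(seed)
+     conv = nn.Conv2d(10, 10, ks, padding=ks // 2, bias=False)
+     opt = torch.optim.Adam(conv.parameters(), lr=lr)
+     for _ in range(steps):
+         opt.zero_grad()
+         loss = sum(nn.functional.mse_loss(conv(x), y) for x, y in pairs)
+         loss.backward()
+         opt.step()
+     return conv.weight.detach()
+ ```
+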
+ **4. Two-Layer Conv (Conv→ReLU→Conv)**
+ For nonlinear patterns that a single-layer conv can't learn:
+ ```python
+ net = Sequential(
+     Conv2d(10, hidden, ks1, padding=ks1 // 2, bias=False),
+     ReLU(),
+     Conv2d(hidden, 10, ks2, padding=ks2 // 2, bias=False),
+ )
+ ```
+ Tries ks1=3,5 with ks2=1 and hidden=10. Both float and ternary-snapped versions are tested.
+
+ **5. Channel Reduction**
+ When only 4-5 colors are used: `Conv1x1(10→N) → transform → Conv1x1(N→10)`.
+ Fewer channels = smaller conv kernels = lower MACs = higher score per task.
+
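+ A sketch of how the two projection weights can be built when, say, only four colors
+ appear in a task (the matrices are exact one-hot selections, so the round trip is
+ lossless for the used colors; the color list is illustrative):
+
+ ```python
+ import numpy as np
+
+ used = [0, 2, 3, 8]  # the only colors appearing in this task (example)
+ n = len(used)
+ down = np.zeros((n, 10, 1, 1), np.float32)  # Conv1x1 weight: 10 -> n channels
+ up = np.zeros((10, n, 1, 1), np.float32)    # Conv1x1 weight: n -> 10 channels
+ for i, c in enumerate(used):
+     down[i, c, 0, 0] = 1.0  # select color channel c into compact slot i
+     up[c, i, 0, 0] = 1.0    # scatter slot i back to color channel c
+ # The transform in the middle now runs on n channels instead of 10, shrinking
+ # every kernel between the two projections (and its MAC count) by n/10 per side.
+ ```
+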
+ **6. LLM Rescue / Hash-Based Matchers**
+ For tasks that no automated solver can handle, they build hand-crafted ONNX graphs:
+ - **Task 118 (hash matcher)**: `MatMul(flatten(input), hash_weights) → Equal(hash, target_per_example) → ScatterND(delta)`. Hashes each input to a unique 2D vector, matches against all known examples, and applies the stored diff.
+ - **Task 096 (run-length + gap pattern detector)**: Builds a huge computation graph with depthwise convolutions to detect run lengths and gap patterns, then dispatches to the correct output.
+ - **Task 076 (combinatorial matcher)**: Gathers non-zero positions and computes a falling-factorial polynomial to identify which known example matches, then applies the stored output template.
+ - **Task 264 (3×3 shape detector)**: Uses 9 convolution kernels (3×3 shape masks) to detect which L/T/line shape is present, then dispatches to the correct pattern.
+
+ These are the hardest tasks — the ones that need actual algorithmic reasoning encoded in ONNX.
+
+ #### Can We Reach 4000+ WITHOUT Blending?
+
+ **Short answer: yes, but it's the hard path.**
+
+ The 338 blended models were each independently solved by *someone's* solver. If we could
+ make our own solver generate arc-gen-validated models for ~300 tasks, we'd match the blenders.
+
+ **What's blocking us (breakdown of the ~250 tasks we solve locally but fail arc-gen;
+ see the lstsq sketch after the table):**
+
+ | Category | Count | Why It Fails | Fix |
+ |---|---|---|---|
+ | lstsq overfitting (ks≥5) | ~170 | Underdetermined lstsq memorizes train, fails arc-gen | Train on arc-gen data (needs GPU for PyTorch), or find a smaller ks that generalizes |
+ | lstsq overfitting (ks=1-3) | ~30 | Even small kernels can overfit with few examples | More arc-gen examples in fitting |
+ | spatial_gather false positives | ~12 | Coincidental pixel alignments in train don't hold for arc-gen | Validate spatial_gather against arc-gen before accepting |
+ | Variable diff-shape | ~40 | No static ONNX for input-dependent output shapes | Fundamentally unsolvable with static ONNX (needs hash matchers) |
+
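+ A sketch of the kind of per-task lstsq fit the table refers to (shapes in our real
+ fitter may differ): each output pixel is regressed on its ks×ks×10 one-hot
+ neighborhood, so as ks grows the unknowns soak up the few thousand train pixels
+ and the solution memorizes rather than generalizes.
+
+ ```python
+ import numpy as np
+
+ def fit_conv_lstsq(pairs, ks):
+     # pairs: list of (x, y) one-hot grids [10, H, W].
+     feats, targets = [], []
+     p = ks // 2
+     for x, y in pairs:
+         xp = np.pad(x, ((0, 0), (p, p), (p, p)))
+         _, H, W = x.shape
+         for i in range(H):
+             for j in range(W):
+                 feats.append(xp[:, i:i + ks, j:j + ks].ravel())  # ks*ks*10 features
+                 targets.append(y[:, i, j])
+     A, B = np.array(feats), np.array(targets)
+     # With ks >= 5 (250 unknowns per output channel) and only the train grids as
+     # rows, lstsq returns a minimum-norm interpolant of the train data that
+     # rarely survives arc-gen. Appending arc-gen pairs adds constraining rows.
+     W_fit, *_ = np.linalg.lstsq(A, B, rcond=None)
+     return W_fit  # [ks*ks*10, 10]
+ ```
+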
+ **Realistic path to 3000+ without blending:**
+ 1. Switch to opset 17 → ~2x score per analytical task (~+200 pts)
+ 2. PyTorch learned conv on GPU with arc-gen fitting → ~+50-100 tasks
+ 3. Hash-based matchers for ~20 hard tasks → ~+300 pts
+ 4. Channel reduction → ~-20% cost across the board (~+100 pts)
+ 5. Total estimate: ~150-200 validated tasks × ~12 avg score = ~2000-2500 pts
+
+ **To actually reach 4000+, you'd need ~330+ validated tasks.** That requires either
+ blending OR solving the hard algorithmic tasks (gravity, flood fill, counting, etc.),
+ which need LLM-generated per-task ONNX graphs.
+
 ### High-Scoring Notebook Architecture (2026-04-24 analysis)
 
 The top notebooks (4200+ points) are **BLENDERS**, not solvers:
@@ -84,18 +253,17 @@ The top notebooks (4200+ points) are **BLENDERS**, not solvers:
 
 ### Cost Benchmarks
 
- | Model Type | Typical Cost | Score |
- |-----------|-------------|-------|
- | Identity | 0 | 25.0 |
- | Transpose (perm only) | 0 | 25.0 |
- | Color map (Gather, permutation) | 50 | 21.1 |
- | Color map (Conv 1×1) | 90,500 | 13.6 |
- | Analytical (gather-based) | 12,663 | 15.5 |
- | Shift (gather) | 57,663 | 14.0 |
- | Conv ks=1 | 814,590–814,662 | 11.4 |
- | Conv ks=5 | 4,589,390 | 9.7 |
- | Conv ks=11 | 11,105,390 | 8.8 |
- | Conv ks=29 | 66,129,390 | 7.0 |
+ | Model Type | Typical Cost (ours, opset 10) | Their Cost (opset 17) | Score Diff |
+ |-----------|------|------|------|
+ | Identity | 0 | 0 | — |
+ | Transpose | 36,000 (Gather-based) | ~0 (perm only) | +10 pts |
+ | Rotation | ~165,663 (Gather+mask) | ~0 (Slice+Transpose) | +10 pts |
+ | Flip | ~165,663 (Gather+mask) | ~0 (Slice reverse) | +10 pts |
+ | Color map (Gather, permutation) | 50 | 50 | — |
+ | Color map (Conv 1×1) | 90,500 | 90,500 | — |
+ | Spatial gather | ~12,663 | ~12,663 | — |
+ | Conv ks=1 | 814,590 | 814,590 | — |
+ | Conv ks=5 | 4,589,390 | 4,589,390 | — |
 
 ### ARC-GEN Survival Rates
 
@@ -145,6 +313,30 @@ Analysis found 113 unsolved same-shape tasks where arc-gen uses IDENTICAL grid s
 
 These tasks have input-dependent output shapes. No static ONNX graph can produce different-sized outputs. The only approach: conv learns to place content in the right 30×30 region, masked by `ReduceSum(input)`. But this fails when output extends beyond input bounds or when the spatial mapping depends on content.
 
+ ### Hash-Based Matcher Architecture (from 4200-v5 notebook)
+
+ For tasks that are impossible with conv/gather, the top notebooks build **per-task matcher networks**:
+
+ ```
+ Architecture (task 118 example):
+ 1. Flatten input: Reshape [1,10,30,30] → [1,9000]
+ 2. Hash: MatMul([1,9000], [9000,2]) → [1,2] (random int weights in [-7,+7])
+ 3. For each known example i:
+    a. Equal(hash, target_hash_i) → bool match
+    b. Cast to float, ReduceSum → match_count
+    c. Equal(match_count, 2.0) → exact match
+    d. ScatterND(zero_grid, diff_indices_i, diff_values_i) → delta_i
+    e. Mul(delta_i, match_flag) → conditional_delta_i
+ 4. Concat all conditional deltas → ReduceSum → total_delta
+ 5. Add(input, total_delta) → output
+ ```
+
+ This works because each input hashes to a unique 2D vector, so the network
+ identifies which known example is present and applies the stored transformation.
+ Cost is high, but the model is guaranteed correct for all known examples.
+
+ **Requirements**: opset 17 (ScatterND), all examples available at build time.
+
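+ The same dataflow re-expressed in NumPy to make it concrete (hash weights and the
+ per-example storage format are illustrative, not the notebook's exact values):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ W = rng.integers(-7, 8, size=(9000, 2)).astype(np.float32)  # fixed hash weights
+
+ def apply_matcher(x, examples):
+     # x: one-hot grid [1, 10, 30, 30]; examples: list of (target_hash, idx, vals)
+     # where idx is [k, 4] integer grid indices and vals is [k] float deltas.
+     h = x.reshape(1, 9000) @ W                           # steps 1-2: hash to [1, 2]
+     delta = np.zeros_like(x)
+     for target_hash, idx, vals in examples:
+         match = float((h[0] == target_hash).sum() == 2)  # steps 3a-3c: exact match
+         d = np.zeros_like(x)
+         d[tuple(idx.T)] = vals                           # step 3d: ScatterND analogue
+         delta += match * d                               # steps 3e-4
+     return x + delta                                     # step 5
+ ```
+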
 ## Data Notes
 
 ### ARC-GEN File Format
@@ -162,11 +354,38 @@
 ### ARC-GEN Generator
 https://github.com/google/ARC-GEN — Can generate MORE examples per task for better fitting. Not yet explored.
 
+ ### Key Kaggle Public Datasets (from notebook analysis)
+
+ These are the dataset sources that top solvers blend from:
+ ```
+ limprog/neurogolf-blend/NeuroGolf_blend/Cross-Source — 227 ONNX (biggest winner)
+ karnakbaevarthur/neurogolf-2026-task-transformation-library — 269 ONNX
+ sigmaborov/golf-aura — 254 ONNX
+ needless090/neurogolf-onnx-v31 — 252 ONNX
+ limprog/neurogolf-blend/NeuroGolf_blend/Publi_Data — 206 ONNX
+ sigmaborov/golf-solve-agent — 206 ONNX
+ karnakbaevarthur/logic-for-each-arc-task — 204 ONNX
+ yash9439/neurogolf-submission — 172 ONNX
+ daphne4sg/claude-golf — 160 ONNX
+ hanifnoerrofiq/neurogolf1k — 158+132 ONNX
+ sigmaborov/test-golf (S_task014..S_task203) — 9×207 ONNX (task-specific)
+ ```
+
+ Key notebook submission.zip sources:
+ ```
+ aliafzal9323/neurogolf-2026-tiny-onnx-solver — 338 models (itself a mega-blend)
+ sigmaborov/neurogolf-2026-starter — 335 models
+ jazivxt/infinitesimals — 341 models
+ konbu17/neurogolf-2026-blended-341-tasks — 341 models
+ karnakbaevarthur/logic-decoder — 338 models
+ ```
+
 ## Reference Notebooks (in repo as neurogolf-2026-solver-notebooks.zip)
 
 | Notebook | Est LB | Tasks Solved | Technique | Key Source Count |
 |----------|--------|-------------|-----------|-----------------|
 | neurogolf-2026-tiny-onnx-solver | ~4200 | 338 | Mega-blend 12+ zips | 203 from mega-agi-ensemble |
- | 4200-v5-neurogolf-fix | ~5725 | 341 | Same blend + 5 manual | 338 from zip_2 |
+ | 4200-v5-neurogolf-fix | ~5725 | 341 | Same blend + 5 manual LLM rescue | 338 from zip_2 |
+ | neurogolf-4200-solver | ~3600 | 288 | Own solver + 24 dataset sources | Cross_Source=169 |
 | the-2026-neurogolf-championship | ~3200 est | 288 | Own solver + blend | gravity, outline, composition |
- | neurogolf-logic-driven-ensembling | — | 401 | Pure ensembling | — |
+ | neurogolf-logic-driven-ensembling | — | 352 | Pure ensembling (no solver) | 351 from 4275-submission |