rogermt committed
Commit 021d629 · verified · 1 Parent(s): b06267e

Update TODO.md: evidence-based Phase 2 overhaul with confidence ratings, literature citations, coordinated experiment sequence

Files changed (1): TODO.md (+199 -69)
TODO.md CHANGED
@@ -1,8 +1,11 @@
  # NeuroGolf Solver - Roadmap

- > Current: v4.3 · 50 arc-gen validated · ~670 LB · Target: 3000+
  > Philosophy: **Research → Design → Experiment → Analyze → Research** loop until confirmed score increase.
  > Rule: **NEVER claim a feature works without full arc-gen validation on representative tasks.**

  ## Phase 1: Cheap Wins (est +400 pts → ~1100)

@@ -33,58 +36,184 @@

  ---

- ## Phase 2: Fix Arc-Gen Survival (est +100-150 tasks → ~2000-2500)
-
- > **This is the #1 blocker.** We solve 307 locally but only 50 survive arc-gen.
- > Research (Bartlett et al., Belkin et al., arXiv:2306.13185) shows:
- > - Our patch covariance has LOW effective rank (~10-40) vs n≈600 patches
- > - This is CATASTROPHIC overfitting regime, NOT benign
- > - Ridge/LOOCV λ tuning CANNOT fix this - theory predicts failure
-
- ### 2a: Skip Interpolation Threshold Kernels
- - [ ] **Remove ks=5,7,9 from conv fitting** - these are at/near double descent peak
- - Try ks list: [1, 3, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29]
- - Rationale: ks=5 (p=490, n≈600) is worst-case. ks=1 (p=10) is safe. ks=29 (p=8410) is overparameterized but at least past the peak.
- - [ ] **Validate**: Full 400 arc-gen. Compare arc-gen survival rate vs v4.
- - Accept only if survival rate improves by >10% (5+ more tasks).
-
- ### 2b: PCA Dimensionality Reduction Before lstsq
- - [ ] **PCA pre-processing**: project patch matrix P to top-k components (k=15-25 matching effective rank)
- - Fit PCA on training patches, transform both P and test patches, then lstsq on reduced space
- - Ensures p_reduced << n, avoiding interpolation regime entirely
- - [ ] **Validate**: Test on 20 tasks that currently fail arc-gen at ks=7,9.
- - Compare: raw lstsq vs PCA+lstsq. Measure arc-gen pass rate.
- - Accept only if >20% of previously-failing tasks now pass.
-
- ### 2c: Gradient Descent with Early Stopping (Alternative to lstsq)
- - [ ] **Iterative solver**: Adam on conv weights, early stop at ~95% train accuracy (don't interpolate)
- - Implicit ℓ₁-like regularization - theory predicts better generalization than explicit Ridge
- - Use small model: ks=3 single-layer or ks=(3,1) two-layer
- - [ ] **Validate**: Same 20 failing tasks. Compare lstsq vs early-stopping GD.
- - Accept only if >15% improvement in arc-gen survival.
-
- ### 2d: Lasso / Sparse Regression
- - [ ] **Replace np.linalg.lstsq with sklearn.linear_model.Lasso**
- - α tuning via cross-validation on training data
- - Matches sparse signal structure of one-hot patches
- - [ ] **Validate**: Same 20 failing tasks. Compare lstsq vs Lasso.
- - Accept only if >15% improvement.
-
- ### 2e: PyTorch Multi-Seed with Arc-Gen Training (GPU Required)
- - [ ] **Train Conv→ReLU→Conv on train+test+arc-gen** (all available examples matching grid size)
- - Multi-seed (0,7,42), 3000 steps, lr=0.03, early stopping on arc-gen loss
- - ks=(3,1) or (5,1) two-layer
- - **Ternary snap**: after training, snap weights to {-1,0,1}, re-validate on arc-gen
- - [ ] **Validate**: Run on 50 tasks. Compare arc-gen survival vs lstsq baseline.
- - Needs GPU (T4 minimum). CPU too slow for 400×3 seeds.
- - Accept only if >10% improvement AND total runtime <12hr Kaggle limit.
-
- ### 2f: Generate More ARC-GEN Data
- - [ ] **Use ARC-GEN generator** (github.com/google/ARC-GEN) to produce 1000+ examples/task
- - More fitting data = more constraints, but ONLY helps if we avoid interpolation regime
- - Combine with PCA or GD - lstsq with more rows still overfits if p > n
- - [ ] **Validate**: Test on 20 tasks with 1000 vs 250 arc-gen examples.
- - Compare arc-gen survival. Accept only if >10% improvement.

  ---

@@ -119,27 +248,23 @@
  - [ ] **Validate**: Run on all 400 models. Compare total score before/after.
  - Accept if total score improves by >2%.

- ### 4b: Best-of-N Model Selection
- - [ ] **For each task**: generate multiple candidates (different ks, bias/no-bias, PCA vs raw, etc.)
- - Keep cheapest valid one
- - [ ] **Validate**: Full 400 run. Compare total score vs single-candidate selection.
- - Accept if total score improves by >3%.
-
- ### 4c: Official Scoring Alignment
  - [ ] **Use `neurogolf_utils.score_network()`** - `onnx_tool` for exact cost matching
  - Our static profiler may diverge on edge cases
  - [ ] **Validate**: Compare static profiler vs onnx_tool on 50 random models.
  - Accept if divergence >5% and fix profiler.

  ---

  ## BLENDING - EXPLICITLY EXCLUDED

  > **User's competitive philosophy**: "I am writing my own models no blending. This is major flaw in the competition loophole."

- - [ ] ~~Blend pipeline~~ - **NOT DONE. Not our strategy.**
- - [ ] ~~Upload submission.zip as Kaggle dataset~~ - **NOT DONE.**
- - [ ] ~~Attach public datasets (24 sources)~~ - **NOT DONE.**

  Competitive intelligence on blending stays in LEARNING.md "What Others Do" section only.

@@ -149,9 +274,10 @@ Competitive intelligence on blending stays in LEARNING.md "What Others Do" secti

  | Date | Experiment | Tasks Tested | Result | Decision |
  |------|-----------|-------------|--------|----------|
- | 2026-04-24 | v4.2 baseline | 400 | 50 arc-gen, ~670 LB | Keep |
  | 2026-04-25 | v5 untested code | 10 | 3/10 FAILED arc-gen | **REVERTED** |
  | 2026-04-25 | LOOCV Ridge theory | 0 | Never tested - theory predicts failure | **NOT IMPLEMENTED** |

  ---

@@ -159,16 +285,20 @@ Competitive intelligence on blending stays in LEARNING.md "What Others Do" secti

  | Symbol | Meaning |
  |--------|---------|
- | `[ ]` | Not started - need research/design first |
  | `[~]` | In progress - experiment running |
  | `[x]` | Done - validated with arc-gen on ≥20 tasks, confirmed score increase |
  | `[!]` | Blocked - needs prerequisite or resource (e.g., GPU) |
  | `[-]` | Rejected - tested, did not improve arc-gen survival or score |

- ## Research Queue (Next 3 Papers to Read)

- 1. **arXiv:2302.00257** - "Benign overfitting in ridge regression..." (Lasso vs Ridge in sparse regimes)
- 2. **Belkin et al. (2019) PNAS** - "Reconciling modern machine-learning practice..." (double descent, interpolation threshold)
- 3. **CITE NEEDED** - ARC-AGI solver papers from NeurIPS 2024 / ICML 2024 workshops

  > Loop: Research → Design → Experiment → Analyze → Research → ... until score increases.

  # NeuroGolf Solver - Roadmap

+ > Current: v5.0 · 50 arc-gen validated (v4 baseline) · ~670 LB · Target: 3000+
  > Philosophy: **Research → Design → Experiment → Analyze → Research** loop until confirmed score increase.
  > Rule: **NEVER claim a feature works without full arc-gen validation on representative tasks.**
+ > Updated: 2026-04-26 - Phase 2 overhauled with literature-backed experiment plan.
+
+ ---

  ## Phase 1: Cheap Wins (est +400 pts → ~1100)

  ---

+ ## Phase 2: Fix Arc-Gen Survival - THE #1 BLOCKER
+
+ > **Status:** 307 tasks solved locally, but only 50 survive arc-gen. ~250 tasks are affected by lstsq overfitting.
+ > **Root cause confirmed by literature:** catastrophic overfitting at the interpolation threshold (p ≈ n).
+ > **Strategy:** coordinated experiment sequence - each step builds on the previous one. Do NOT test in isolation.
+
+ ### The Problem (with numbers from conv.py)
+
+ Current `_lstsq_conv()` runs naked `np.linalg.lstsq(P, T_oh, rcond=None)` - zero regularization.
+ The kernel search is `[1, 3, 5, 7, 9, 11, ...]` and stops at the **first ks that interpolates training**.
+
+ | Kernel | p (features) | n (patches, 7×7 grid, 4 ex) | p/n | Regime |
+ |--------|-------------|------------------------------|-----|--------|
+ | ks=1 | 10 | 196 | 0.05 | ✅ Safe, underparameterized |
+ | ks=3 | 90 | 196 | 0.46 | ✅ Underparameterized |
+ | **ks=5** | **250** | **196** | **1.27** | **❌ INTERPOLATION THRESHOLD - worst case** |
+ | **ks=7** | **490** | **196** | **2.50** | **❌ Just past the threshold** |
+ | **ks=9** | **810** | **196** | **4.13** | **⚠️ Near the peak** |
+ | ks=11 | 1210 | 196 | 6.17 | Overparameterized |
+ | ks=29 | 8410 | 196 | 42.9 | Heavily overparameterized |
+
+ The solver accepts ks=5 (which perfectly interpolates training via the minimum-norm solution) and **never tries ks=11+**, which might actually generalize.
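+
+ A quick sanity check for the table above (a sketch, assuming 10 one-hot color channels and one patch row per output pixel, i.e. "same" padding, so p = 10·ks² and n = H·W·num_examples - exactly the numbers in the table):
+
+ ```python
+ # Reproduce the p, n, p/n columns of the table above.
+ CHANNELS = 10                      # ARC colors, one-hot encoded
+
+ def patch_features(ks, channels=CHANNELS):
+     """Number of lstsq features for a ks x ks conv kernel."""
+     return channels * ks * ks
+
+ def patch_rows(h, w, num_examples):
+     """Number of patch rows when every output pixel contributes one row."""
+     return h * w * num_examples
+
+ n = patch_rows(7, 7, 4)            # 196 for a 7x7 grid with 4 examples
+ for ks in [1, 3, 5, 7, 9, 11, 29]:
+     p = patch_features(ks)
+     print(f"ks={ks:2d}  p={p:5d}  n={n}  p/n={p / n:5.2f}")
+ ```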
+
+ ### Literature Backing
+
+ | Paper | arXiv | Key Finding for Us |
+ |-------|-------|--------------------|
+ | Nakkiran et al. 2019 (NeurIPS) | `1912.02292` | Test error peaks at p≈n (interpolation threshold). This is exactly where ks=5,7 sit. Skipping these eliminates the peak entirely. |
+ | Segert 2023 | `2311.11093` | Truncated SVD / PCA regression achieves flatter loss basins than Ridge. Optimal for low-rank covariance (our case: effective rank ~10-40). |
+ | Zhou & Ge 2023 (NeurIPS) | `2302.00257` | L1 (Lasso) achieves near-minimax for sparse β*. L2 (Ridge) cannot. ARC one-hot patches are sparse (3-5 of 10 channels active). |
+ | Liu et al. 2023 | `2302.01088` | More fitting rows help ONLY with regularization. Without it, adding rows near threshold can *hurt* (sample-wise non-monotonicity). |
+ | Ali et al. 2019 | - | GD with early stopping ≡ Ridge with λ=1/(2t). Since Ridge is suboptimal here, GD early stopping is also suboptimal. |
+ | Liao & Gu 2024 (CompressARC) | `2512.06104` | ARC solvers that generalize use regularized fitting (MDL/KL), not ridgeless interpolation. Direct evidence regularization is needed. |
+
+ ### Coordinated Experiment Sequence
+
+ > Run in order. Each experiment keeps the wins from the previous ones.
+ > **Goal: highest-scoring valid model per task**, not first-valid.
+
+ ---
+
+ #### Exp 0: Baseline Measurement ⬜
+ > *Prerequisite for all other experiments.*
+
+ - [ ] Run current v5 on all 400 tasks with full arc-gen validation
+ - [ ] Record per-task: (a) solver used, (b) ks chosen, (c) arc-gen pass/fail, (d) score, (e) p/n ratio (logging sketch below)
+ - [ ] Identify the ~250 tasks that fail arc-gen - classify them by ks and p/n ratio
+ - **Exit criteria:** have baseline numbers to compare against
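+
+ A minimal logging sketch for the per-task record (the field names and the `baseline.csv` path are illustrative, not existing code):
+
+ ```python
+ import csv
+ import os
+
+ FIELDS = ["task_id", "solver", "ks", "arc_gen_pass", "score", "p_over_n"]
+
+ def append_baseline_row(row: dict, path: str = "baseline.csv") -> None:
+     """Append one per-task result row; write the header on first use."""
+     write_header = not os.path.exists(path)
+     with open(path, "a", newline="") as f:
+         writer = csv.DictWriter(f, fieldnames=FIELDS)
+         if write_header:
+             writer.writeheader()
+         writer.writerow(row)
+ ```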
+
+ ---
+
+ #### Exp 1: Skip ks=5,7,9 ⬜ - Confidence: **90%**
+ > *1-line change per solver. Eliminates the interpolation threshold peak entirely.*
+
+ **Evidence:** Nakkiran 2019 (`1912.02292`) proves test error peaks at p≈n. Our ks=5 (p=250, n≈196) is the textbook worst case. Removing these kernel sizes cannot make things worse - if a task *needs* ks=5 to solve, it was going to fail arc-gen anyway because it sits in the catastrophic regime.
+
+ **10% doubt:** Some tasks with large grids (21×21, 16 examples → n≈7056) have p/n < 1 even at ks=7. For those, ks=7 is safe. But the solver can't distinguish these cases without computing p/n per task (a guard for this is sketched after the list below).
+
+ - [ ] Change the ks list in all 4 conv solvers: `[1, 3, 5, 7, 9, 11, ...]` → `[1, 3, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29]`
+ - Files: `conv.py` - `solve_conv_fixed`, `solve_conv_variable`, `solve_conv_diffshape`, `solve_conv_var_diff`
+ - [ ] Run all 400 with arc-gen. Compare survival rate vs Exp 0.
+ - **Accept if:** arc-gen survival improves by ≥5 tasks
+ - **Expected:** +10-30 tasks
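+
+ A sketch of the per-task guard mentioned in the 10% doubt (hypothetical helper, not yet in `conv.py`; the lo/hi band bounds are illustrative, chosen so the 7×7 case excludes ks=5,7,9 while large-grid tasks keep them):
+
+ ```python
+ SAFE_KS = [1, 3, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29]
+
+ def candidate_ks(n_patches, channels=10, lo=0.5, hi=4.5, ks_list=SAFE_KS):
+     """Drop kernel sizes whose p/n ratio lands inside the double-descent
+     danger band around p/n = 1; for large grids (n in the thousands) even
+     ks=5,7,9 fall below lo and are kept."""
+     keep = []
+     for ks in ks_list + [5, 7, 9]:          # re-consider the skipped sizes too
+         ratio = channels * ks * ks / n_patches
+         if not (lo <= ratio <= hi):
+             keep.append(ks)
+     return sorted(set(keep))
+ ```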
+
+ ---
+
+ #### Exp 2: Best-of-N Model Selection ⬜ - Confidence: **85%**
+ > *Structural change to the solve loop. Return the highest-scoring valid model, not the first valid one.*
+
+ **Evidence:** Pure engineering - no theoretical uncertainty. Currently `conv.py` iterates ks smallest-first and returns the first candidate that passes validation. If ks=3 AND ks=13 both pass arc-gen, we can still end up returning a ks=13 variant (reached first through the bias loop) even though ks=3 scores higher (lower MACs). Score = max(1, 25 - ln(cost)), so smaller models always score higher.
+
+ **15% doubt:** Runtime. Trying all 12 kernel sizes × 2 bias options × arc-gen validation = up to 24 candidates per task. This may blow the 12hr Kaggle time budget. Mitigation: set a tighter per-candidate timeout and parallelize validation.
+
+ - [ ] Refactor `solve_conv_*` to return a **list of (model, ks, cost)** candidates instead of first-valid (selection sketch below)
+ - [ ] Refactor `solve_task` to collect all candidates (analytical + all conv variants), pick the cheapest valid one
+ - [ ] Add static cost estimation to pick the cheapest before saving
+ - [ ] Run all 400. Compare total score vs Exp 1.
+ - **Accept if:** total score improves by ≥3% on existing solved tasks
+ - **Expected:** +5-15 score points on tasks already solved (better model selection)
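+
+ A minimal selection sketch (the `Candidate` container and its `cost` field are illustrative; the score formula is the one quoted in the evidence above):
+
+ ```python
+ import math
+ from typing import NamedTuple, Optional
+
+ class Candidate(NamedTuple):
+     model: object        # e.g. ONNX graph or layer weights
+     ks: int
+     cost: float          # static cost estimate
+     arc_gen_valid: bool
+
+ def score(cost: float) -> float:
+     """Per-task score as quoted in Exp 2: max(1, 25 - ln(cost))."""
+     return max(1.0, 25.0 - math.log(cost))
+
+ def pick_best(candidates: list[Candidate]) -> Optional[Candidate]:
+     """Keep only arc-gen-valid candidates and return the highest-scoring
+     (i.e. cheapest) one instead of whichever was generated first."""
+     valid = [c for c in candidates if c.arc_gen_valid]
+     return max(valid, key=lambda c: score(c.cost)) if valid else None
+ ```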
+
+ ---
+
+ #### Exp 3: PCA / Truncated SVD Before lstsq ⬜ - Confidence: **75%**
+ > *~30 lines in `_lstsq_conv()`. Maps every ks into an effectively underparameterized regime.*
+
+ **Evidence:** Segert 2023 (`2311.11093`) - PCA regression is provably better than Ridge for low-rank covariance. Our patch covariance has effective rank ~10-40 (few active colors in the one-hot encoding). Truncating to the top-k components removes the ~(p-40) pure-noise dimensions that ridgeless lstsq amplifies into catastrophic overfitting.
+
+ **25% doubt:** The variance threshold (99%) might be wrong for some tasks. Too aggressive a truncation (k too small) kills signal; too little (k too large) doesn't fix the problem. May need a per-task adaptive k based on the singular value gap.
+
+ **Implementation:**
+ ```python
+ def _lstsq_pcr(P, T_oh, var_threshold=0.99):
+     """Principal-component regression: fit in the top-k singular subspace,
+     then map the weights back to full p-dim patch space."""
+     U, s, Vt = np.linalg.svd(P, full_matrices=False)
+     cumvar = np.cumsum(s**2) / np.sum(s**2)
+     k = np.searchsorted(cumvar, var_threshold) + 1
+     k = max(k, 5)                      # floor: keep at least 5 components
+     P_red = U[:, :k] * s[:k]           # n x k, always k << n
+     w_red = np.linalg.lstsq(P_red, T_oh, rcond=None)[0]
+     w_full = Vt[:k].T @ w_red          # back to p-dim for ONNX weights
+     return w_full
+ ```
+
+ - [ ] Add `_lstsq_pcr()` to `conv.py` alongside the existing `_lstsq_conv()`
+ - [ ] Use the PCA path for all ks where p/n > 0.5 (safe margin below the threshold; dispatch sketch below)
+ - [ ] Keep raw lstsq for ks=1,3 where p << n (PCA unnecessary, adds cost)
+ - [ ] Try var_threshold in {0.95, 0.99, 0.999} - pick the best per arc-gen survival
+ - [ ] Run all 400. Compare survival rate vs Exp 1.
+ - **Accept if:** arc-gen survival improves by ≥10 tasks vs Exp 1
+ - **Expected:** +15-40 tasks (the big win - makes previously unusable ks usable)
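+
+ Dispatch sketch for the two checklist items above (illustrative only; `_lstsq_conv`-style raw lstsq stays for small kernels, and the 0.5 cut-off is the margin named in the list):
+
+ ```python
+ def _fit_kernel(P, T_oh, var_threshold=0.99, pn_cutoff=0.5):
+     """Route a single kernel fit: raw lstsq while safely underparameterized,
+     principal-component regression once p/n approaches the threshold."""
+     n, p = P.shape
+     if p / n <= pn_cutoff:
+         return np.linalg.lstsq(P, T_oh, rcond=None)[0]   # ks=1,3 territory
+     return _lstsq_pcr(P, T_oh, var_threshold=var_threshold)
+ ```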
+
+ ---
+
+ #### Exp 4: Increase Arc-Gen Fitting Cap ⬜ - Confidence: **60%**
+ > *1-line change. Only works WITH regularization (PCA) in place.*
+
+ **Evidence:** Liu et al. 2023 (`2302.01088`) - more fitting rows help only in the regularized regime. With PCA, more rows = more constraints in the reduced k-dimensional space = better conditioning. Without PCA, adding rows to an underdetermined lstsq doesn't fix the fundamental problem.
+
+ **40% doubt:** Arc-gen examples may have different grid sizes (filtered out by `get_exs_for_fitting`). The cap increase only helps if enough same-shape arc-gen examples exist (see the sketch after this list).
+
+ - [ ] Change `get_exs_for_fitting()`: cap 10 → 50
+ - [ ] Change `get_exs_for_fitting_variable()`: cap 20 → 100
+ - [ ] Run all 400. Compare vs Exp 3.
+ - **Accept if:** arc-gen survival improves by ≥3 tasks vs Exp 3
+ - **Expected:** +5-15 tasks (modest but free)
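+
+ Hypothetical sketch of the shape-filtered, capped selection the doubt above refers to (the real `get_exs_for_fitting` may differ; this only illustrates why a bigger cap is useless when few arc-gen grids share the target shape):
+
+ ```python
+ def capped_same_shape_examples(examples, target_shape, cap=50):
+     """Keep at most `cap` examples whose input grid matches `target_shape`.
+     If few arc-gen grids share that shape, raising `cap` changes nothing."""
+     same_shape = [ex for ex in examples
+                   if (len(ex["input"]), len(ex["input"][0])) == target_shape]
+     return same_shape[:cap]
+ ```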
+
+ ---
+
+ #### Exp 5: Lasso (L1) for Large Kernels ⬜ - Confidence: **55%**
+ > *~15 lines + sklearn dependency. Better than L2 for sparse signals, but slower and more fragile.*
+
+ **Evidence:** Zhou & Ge 2023 (`2302.00257`) - L1 achieves near-minimax O(σ²·s·log(d/s)/n) for sparse β*, vs Ridge's Ω(‖β*‖²). ARC one-hot patches are sparse (3-5 of 10 channels active). Segert 2023, Section A.10: Lasso is competitive with PCA/Nuclear in the sparse regime, winning ~40% of cases.
+
+ **45% doubt:** (1) Lasso is slow - coordinate descent vs a single SVD. (2) `alpha` tuning via CV with only 4-6 training examples is fragile. (3) Need `MultiTaskLassoCV` for the 10-column one-hot target, not scalar `LassoCV`. (4) sklearn adds a dependency - availability on the Kaggle runtime still needs verifying.
+
+ - [ ] Add `_lstsq_lasso()` using `sklearn.linear_model.MultiTaskLassoCV` (sketch below)
+ - [ ] Use Lasso only for ks≥11 where p > n even after PCA (a complement, not a replacement)
+ - [ ] Verify sklearn is available on the Kaggle runtime
+ - [ ] Run all 400. Compare vs Exp 3+4.
+ - **Accept if:** arc-gen survival improves by ≥5 tasks vs Exp 3+4
+ - **Expected:** +5-10 tasks (incremental over PCA)
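+
+ A sketch of `_lstsq_lasso()` under the assumptions above (the CV fold count and alpha grid size are illustrative; the CV split over patch rows is the fragile part):
+
+ ```python
+ from sklearn.linear_model import MultiTaskLassoCV
+
+ def _lstsq_lasso(P, T_oh, n_alphas=30, cv=3):
+     """L1-regularized fit for the 10-column one-hot target.
+     Returns weights shaped like the lstsq solution: (p, 10)."""
+     est = MultiTaskLassoCV(n_alphas=n_alphas, cv=cv, fit_intercept=False,
+                            max_iter=5000)
+     est.fit(P, T_oh)
+     return est.coef_.T     # sklearn stores coef_ as (n_targets, n_features)
+ ```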
+
+ ---
+
+ #### Exp 6 [DEPRIORITIZED]: GD with Early Stopping ⬜ - Confidence: **40%**
+ > *Moved to backlog. Literature shows GD early stopping ≡ Ridge (Ali et al. 2019), which is suboptimal for our low-rank regime. Only revisit if Exp 1-5 plateau.*
+
+ - [ ] Only attempt if Exp 1-5 combined yield <80 arc-gen validated tasks
+ - [ ] If attempted: use `sklearn.linear_model.SGDRegressor` with `early_stopping=True` (sketch below)
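+
+ Backlog sketch for the SGD variant (illustrative; `SGDRegressor` is single-output, so the 10-column one-hot target needs a `MultiOutputRegressor` wrapper):
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import SGDRegressor
+ from sklearn.multioutput import MultiOutputRegressor
+
+ def _fit_sgd_early_stop(P, T_oh):
+     """GD with early stopping on an internal validation split, one model per
+     one-hot output column; weights returned in the lstsq-style (p, 10) layout."""
+     base = SGDRegressor(early_stopping=True, validation_fraction=0.2,
+                         n_iter_no_change=5, max_iter=2000, tol=1e-4)
+     model = MultiOutputRegressor(base).fit(P, T_oh)
+     return np.stack([e.coef_ for e in model.estimators_], axis=1)
+ ```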
+
+ ---
+
+ #### Exp 7 [DEPRIORITIZED]: PyTorch Multi-Seed (GPU Required) ⬜ - Confidence: **50%**
+ > *Needs GPU, slow, complex. Only after simpler fixes validated.*
+
+ - [!] Blocked on GPU availability
+ - [ ] Only attempt if Exp 1-5 combined yield <100 arc-gen validated tasks
+
+ ---
+
+ #### Exp 8 [DEPRIORITIZED]: Generate More ARC-GEN Data ⬜ - Confidence: **45%**
+ > *Only useful WITH regularization in place (Exp 3+). Without it, more rows can hurt (Nakkiran 2019 sample-wise non-monotonicity).*
+
+ - [ ] Only attempt after Exp 4 to see if cap increase helped
+ - [ ] If yes: generate 1000+ examples/task using ARC-GEN generator
+
+ ---
+
+ ### Phase 2 Combined Projection
+
+ | Scenario | Expected arc-gen tasks | LB estimate | Confidence |
+ |----------|----------------------|-------------|------------|
+ | Exp 1 alone (skip ks) | 60-80 | ~800-1000 | 90% |
+ | Exp 1+2 (skip + best-of-N) | 60-80 tasks, better scores | ~900-1100 | 85% |
+ | Exp 1+2+3 (+ PCA) | 90-130 | ~1200-1700 | 70% |
+ | Exp 1+2+3+4 (+ more arc-gen) | 100-140 | ~1400-1900 | 60% |
+ | Full stack 1-5 (+ Lasso) | 110-150 | ~1600-2200 | 50% |
+
+ **The big win is the Exp 1+2+3 stack.** Skip bad kernels, pick best model, PCA regularization. If those three work, we roughly double or triple the LB score.

  ---

  - [ ] **Validate**: Run on all 400 models. Compare total score before/after.
  - Accept if total score improves by >2%.

+ ### 4b: Official Scoring Alignment

  - [ ] **Use `neurogolf_utils.score_network()`** - `onnx_tool` for exact cost matching
  - Our static profiler may diverge on edge cases
  - [ ] **Validate**: Compare static profiler vs onnx_tool on 50 random models.
  - Accept if divergence >5% and fix profiler.
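+
+ A heavily hedged sketch of the 50-model divergence check (the scorer callables are passed in because both signatures are assumptions - `official_score` would wrap `neurogolf_utils.score_network()`, `static_score` our profiler):
+
+ ```python
+ import random
+
+ def divergence_report(model_paths, static_score, official_score,
+                       sample=50, tol=0.05):
+     """Compare our static profiler against the official scorer on a random sample."""
+     flagged = []
+     for path in random.sample(model_paths, min(sample, len(model_paths))):
+         ours, official = static_score(path), official_score(path)
+         rel = abs(ours - official) / max(abs(official), 1e-9)
+         if rel > tol:
+             flagged.append((path, ours, official, rel))
+     return flagged   # non-empty => fix the profiler
+ ```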

+ > **Note:** Best-of-N model selection moved to Phase 2 Exp 2 - it's part of the core overfitting fix, not just optimization.
+
  ---

  ## BLENDING - EXPLICITLY EXCLUDED

  > **User's competitive philosophy**: "I am writing my own models no blending. This is major flaw in the competition loophole."

+ - ~~Blend pipeline~~ - **NOT DONE. Not our strategy.**
+ - ~~Upload submission.zip as Kaggle dataset~~ - **NOT DONE.**
+ - ~~Attach public datasets (24 sources)~~ - **NOT DONE.**

  Competitive intelligence on blending stays in LEARNING.md "What Others Do" section only.

  | Date | Experiment | Tasks Tested | Result | Decision |
  |------|-----------|-------------|--------|----------|
+ | 2026-04-24 | v4.2 baseline | 400 | 50 arc-gen, ~670 LB | Keep as baseline |
  | 2026-04-25 | v5 untested code | 10 | 3/10 FAILED arc-gen | **REVERTED** |
  | 2026-04-25 | LOOCV Ridge theory | 0 | Never tested - theory predicts failure | **NOT IMPLEMENTED** |
+ | 2026-04-26 | v5.0 refactor | TBD | Running on Kaggle | **AWAITING RESULTS** |

  ---

  | Symbol | Meaning |
  |--------|---------|
+ | `⬜` / `[ ]` | Not started - designed, ready to implement |
  | `[~]` | In progress - experiment running |
  | `[x]` | Done - validated with arc-gen on ≥20 tasks, confirmed score increase |
  | `[!]` | Blocked - needs prerequisite or resource (e.g., GPU) |
  | `[-]` | Rejected - tested, did not improve arc-gen survival or score |

+ ## Research Queue (Papers Read ✅ / To Read)

+ 1. ✅ **Nakkiran et al. 2019** (`1912.02292`) - Double descent, interpolation threshold peak at p≈n
+ 2. ✅ **Segert 2023** (`2311.11093`) - Truncated SVD/PCA > Ridge for low-rank covariance
+ 3. ✅ **Zhou & Ge 2023** (`2302.00257`) - L1 near-minimax for sparse signals, L2 fails
+ 4. ✅ **Liu et al. 2023** (`2302.01088`) - More rows help only with regularization
+ 5. ✅ **Liao & Gu 2024** (`2512.06104`) - CompressARC: regularization enables ARC generalization
+ 6. ✅ **Ali et al. 2019** - GD early stopping ≡ Ridge (therefore suboptimal here)
+ 7. [ ] **ARC Prize 2025 Technical Report** (`2601.10904`) - competition landscape, top approaches

  > Loop: Research → Design → Experiment → Analyze → Research → ... until score increases.