# NeuroGolf Solver v5

Builds minimal ONNX networks for ARC-AGI tasks. Modular Python package with opset 17 and zero-cost Slice-based transforms.

**Currently running on Kaggle — results pending.**

## Version History

| Version | Date | Solved (local) | Arc-gen Validated | Est. LB | Key Changes |
|---------|------|----------------|-------------------|---------|-------------|
| **v5** | **2026-04-26** | **TBD** | **TBD** | **TBD** | Refactored to package, opset 17, Slice-based flip/rotate (0 MACs), lstsq crash fix, tensor-based Pad & ReduceSum |
| v4.3 | 2026-04-25 | 307 | 50 | ~670 | Methodology docs, no code changes |
| v4.0 | 2026-04-24 | 307 | 50 | ~656 | ARC-GEN validation, static profiler |
| v3 | 2026-04-24 | 307 | ~40 | 501 | concat_enhanced, varshape_spatial_gather |
| v2 | prior | 294 | — | — | spatial_gather, variable-shape conv |
| v1 | prior | 128 | — | — | Conv solver only |

## Project Structure

```
neurogolf_solver/            # Python package (v5)
├── __init__.py              # Package marker
├── config.py                # Runtime config (providers, opset)
├── constants.py             # All constants (grid dims, excluded tasks, limits)
├── data_loader.py           # Task loading, one-hot encoding, example extraction
├── gather_helpers.py        # Gather-based ONNX model builders
├── main.py                  # Entry point with W&B init
├── onnx_helpers.py          # Opset 17 builders (Slice, Pad, ReduceSum, mk)
├── profiler.py              # Static cost profiler (fallback for onnx_tool)
├── submission.py            # run_tasks with W&B logging, zip/csv generation
├── validators.py            # Model validation against train+test+arc-gen
└── solvers/
    ├── __init__.py          # Exports solve_task, ANALYTICAL_SOLVERS
    ├── analytical.py        # identity, constant, color_map, transpose
    ├── conv.py              # lstsq conv solvers (fixed, variable, diffshape, var_diff)
    ├── geometric.py         # flip, rotate, shift, crop, gravity
    ├── solver_registry.py   # Solver ordering + solve_task orchestration
    └── tiling.py            # tile, upscale, mirror, concat, spatial_gather
neurogolf_solver.py          # Legacy monolith (v4, kept for reference)
neurogolf_utils.py           # Official Kaggle scoring library
ARC-GEN-100K.zip             # Synthetic data: 400 files × ~250 examples each
neurogolf-2026-solver-notebooks.zip  # 5 reference notebooks (LB 4000+)
```

## Quick Start

```bash
# Clone
git clone https://huggingface.co/rogermt/neurogolf-solver
cd neurogolf-solver

# Install deps
pip install numpy onnx onnxruntime

# Get ARC data
git clone --depth 1 https://github.com/fchollet/ARC-AGI.git

# Run (local)
python -m neurogolf_solver.main --data_dir ARC-AGI/data/training/ --output_dir submission --conv_budget 30

# Run (Kaggle)
python -m neurogolf_solver.main --kaggle --data_dir /kaggle/input/competitions/neurogolf-2026/ --output_dir /kaggle/working/submission --conv_budget 60

# With ARC-GEN data and W&B logging
python -m neurogolf_solver.main --data_dir ARC-AGI/data/training/ --arcgen_dir ARC-GEN-100K/ --output_dir submission --use_wandb
```

## Parameters

| Flag | Default | Description |
|------|---------|-------------|
| `--data_dir` | `ARC-AGI/data/training/` | Path to task JSONs |
| `--arcgen_dir` | (none) | Path to ARC-GEN-100K/ directory |
| `--output_dir` | `/kaggle/working/submission` | Where to save .onnx files |
| `--kaggle` | off | Use Kaggle task format (task001.json with embedded arc-gen) |
| `--conv_budget` | `30` | Seconds per task for conv solver |
| `--tasks` | all | Comma-separated task numbers (e.g., `1,2,3`) |
| `--device` | `auto` | `auto`, `cpu`, or `cuda` |
| `--use_wandb` | off | Enable W&B logging |

## How It Works

**Format:** Input/output = `[1, 10, 30, 30]` one-hot float32. ONNX opset 17, IR version 8.

**Solver pipeline (in order):**

1. **Analytical solvers** (instant, near-zero cost): identity → constant → color_map → transpose → flip → rotate → shift → tile → upscale → kronecker → nonuniform_scale → mirror_h → mirror_v → quad_mirror → concat → concat_enhanced → diagonal_tile → fixed_crop → spatial_gather → varshape_spatial_gather
2.
   **Conv solvers** (learned via least-squares, validated against arc-gen):
   - `conv_fixed` — Slice → Conv → ArgMax → Equal+Cast → Pad
   - `conv_variable` — Conv(30×30) → ArgMax → Equal+Cast → Mul(mask)
   - `conv_diffshape` — Slice → Conv → Slice(crop) → ArgMax → Equal+Cast → Pad
   - `conv_var_diff` — Conv(30×30) → ArgMax → Equal+Cast → Mul(input_mask)

## v5 Changes from v4

| Change | Impact |
|--------|--------|
| **Opset 10 → 17, IR 10 → 8** | Enables Slice(step=-1) for zero-cost transforms |
| **s_flip: Slice(step=-1)** | 0 MACs (was ~165K with Gather) |
| **s_rotate k=2: double Slice reverse** | 0 MACs (was ~165K) |
| **s_rotate k=1,3 (square): Slice+Transpose** | 0 MACs (was ~165K) |
| **All Pad nodes: tensor-based pads input** | Required for opset 17 compatibility |
| **All ReduceSum nodes: axes as tensor input** | Required for opset 13+ compatibility |
| **lstsq crash fix: try/except LinAlgError** | Prevents SVD non-convergence crash (task 313) |
| **Refactored to 16-file package** | Maintainable, testable, no more monolith |

## Scoring

```
Score per task = max(1.0, 25.0 - ln(MACs + memory_bytes + params))
```

- Analytical solvers (Slice/Transpose/Gather) → near-zero cost → ~20-25 pts
- Conv solvers → cost proportional to kernel size → ~7-14 pts
- Unsolved → 1.0 pt minimum

## Competition Rules

| Item | Value |
|------|-------|
| Input/Output | float32 `[1,10,30,30]` one-hot |
| Opset | 10 or 17 (both accepted on Kaggle) |
| Max file size | 1.44 MB per model |
| Banned ops | Loop, Scan, NonZero, Unique, Script, Function |
| Excluded tasks | {21, 55, 80, 184, 202, 366} |
| Validation | Models checked against train + test + arc-gen (ALL splits) |

## Key Docs

- **SKILL.md** — Competition rules, architecture, methodology, checklist
- **LEARNING.md** — All mistakes, research findings, what works/doesn't
- **TODO.md** — Roadmap, experiment queue, status tracking

## Strategy

We build our own solver. No blending. No public datasets.
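As a quick sanity check of the scoring rule in the Scoring section, here is an illustrative Python sketch. The example costs are made up for illustration, not measured from real models, and it assumes the total cost is strictly positive:

```python
import math

def task_score(macs: int, memory_bytes: int, params: int) -> float:
    """Score per task = max(1.0, 25.0 - ln(MACs + memory_bytes + params)).

    Assumes the summed cost is > 0 (ln(0) is undefined).
    """
    cost = macs + memory_bytes + params
    return max(1.0, 25.0 - math.log(cost))

# Made-up costs, just to show the scale of the two regimes:
print(round(task_score(0, 100, 0), 2))             # tiny analytical model -> 20.39
print(round(task_score(165_000, 40_000, 900), 2))  # small conv model -> 12.76
```

This is why the zero-MAC Slice-based transforms matter: shaving MACs shrinks the argument of the log, pushing analytical models toward the ~25-point ceiling.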
See LEARNING.md for competitive intelligence on what others do (for awareness only).

## Repo

https://huggingface.co/rogermt/neurogolf-solver
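For reference, a minimal NumPy sketch of the `[1, 10, 30, 30]` one-hot format described in How It Works. The padding convention (cells outside the real grid left all-zero) and the helper names `encode`/`decode` are assumptions for illustration, not the package's actual API:

```python
import numpy as np

PAD = 30         # grids padded to 30x30, per the format above
NUM_COLORS = 10  # ARC colors 0-9

def encode(grid):
    """One-hot encode an ARC grid (rows of ints 0-9) to [1, 10, 30, 30] float32.

    Assumption: cells outside the grid stay all-zero (the competition may
    instead one-hot them as color 0).
    """
    g = np.asarray(grid, dtype=np.int64)
    out = np.zeros((1, NUM_COLORS, PAD, PAD), dtype=np.float32)
    h, w = g.shape
    # Advanced indexing: set out[0, g[i, j], i, j] = 1 for every cell.
    out[0, g, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    return out

def decode(tensor, h, w):
    """Invert encode(): argmax over the color axis, cropped to h x w."""
    return tensor[0, :, :h, :w].argmax(axis=0)

x = encode([[1, 2], [0, 3]])
assert x.shape == (1, 10, 30, 30) and x.dtype == np.float32
assert (decode(x, 2, 2) == np.array([[1, 2], [0, 3]])).all()
```

The `decode` step is the same argmax-over-channels idea the conv solvers implement in-graph via ArgMax → Equal+Cast.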