rogermt committed on
Commit b48dd06 · verified · 1 Parent(s): f31e44d

Upload 47 files

Files changed (47)
  1. LEARNING.md +20 -0
  2. README.md +21 -0
  3. README_PEMF.md +40 -0
  4. SKILLS/README_SKILLS.md +2 -0
  5. SKILLS/SKILL.md +23 -0
  6. SKILLS/arc_agi_project/references/examples/minimal_runner.py +25 -0
  7. SKILLS/arc_agi_project/references/howto.md +8 -0
  8. SKILLS/code_reviewer/references/examples/review_checks.py +32 -0
  9. SKILLS/code_reviewer/references/howto.md +13 -0
  10. SKILLS/experiment_runner/references/examples/entrypoint_example.py +18 -0
  11. SKILLS/experiment_runner/references/howto.md +8 -0
  12. SKILLS/logging_observability/references/examples/postprocess_logs.py +27 -0
  13. SKILLS/logging_observability/references/howto.md +8 -0
  14. SKILLS/transform_designer/references/examples/transforms_demo.py +27 -0
  15. SKILLS/transform_designer/references/howto.md +8 -0
  16. SKILLS/unit_tester/references/examples/test_templates.py +21 -0
  17. SKILLS/unit_tester/references/howto.md +9 -0
  18. TODO.md +26 -0
  19. experiments/example1_20260428T172250Z_logs.json +1 -0
  20. experiments/example1_20260428T172250Z_phi_best.npy +3 -0
  21. experiments/example1_20260428T172250Z_result.json +23 -0
  22. experiments/example1_20260428T172311Z_logs.json +1 -0
  23. experiments/example1_20260428T172311Z_phi_best.npy +3 -0
  24. experiments/example1_20260428T172311Z_result.json +21 -0
  25. experiments/results.csv +5 -0
  26. experiments_analysis.py +159 -0
  27. itt_solver/README.md.md +19 -0
  28. itt_solver/__init__.py +10 -0
  29. itt_solver/__pycache__/__init__.cpython-312.pyc +0 -0
  30. itt_solver/__pycache__/beam_logging.cpython-312.pyc +0 -0
  31. itt_solver/__pycache__/experiment_driver.cpython-312.pyc +0 -0
  32. itt_solver/__pycache__/gates.cpython-312.pyc +0 -0
  33. itt_solver/__pycache__/layer_minus_one.cpython-312.pyc +0 -0
  34. itt_solver/__pycache__/solver_core.cpython-312.pyc +0 -0
  35. itt_solver/__pycache__/tests.cpython-312.pyc +0 -0
  36. itt_solver/__pycache__/transforms.cpython-312.pyc +0 -0
  37. itt_solver/__pycache__/wandb_runner.cpython-312.pyc +0 -0
  38. itt_solver/beam_logging.py +142 -0
  39. itt_solver/experiment_driver.py +112 -0
  40. itt_solver/gates.py +51 -0
  41. itt_solver/layer_minus_one.py +33 -0
  42. itt_solver/solver_core.py +136 -0
  43. itt_solver/tests.py +297 -0
  44. itt_solver/transforms.py +50 -0
  45. itt_solver/viz.py +30 -0
  46. itt_solver/wandb_runner.py +117 -0
  47. scripts/fix_and_inspect_logs.py +107 -0
LEARNING.md ADDED
@@ -0,0 +1,20 @@
+ # Lessons Learned (Critical Notes)
+
+ ## Core lessons
+ - **Log the candidate arrays early.** Without the actual candidate field, debugging accepted candidates is guesswork. Always include a small, quantized snapshot in logs for reproducibility.
+ - **Avoid side effects inside the solver run.** External logging (W&B) must be optional and executed outside the core run to avoid recursion and hidden failures.
+ - **Transforms must produce visible edits.** If transforms are no‑ops for typical inputs, the beam cannot reduce residue. Unit‑test transforms with small inputs.
+ - **Coerce logged gate values to booleans.** Strings like `"True"` break automated gate analysis and mislead debugging.
+ - **Quantize and resize consistently.** Residue must be computed on the same quantized and tiled array the beam evaluates (use `np.rint` and `tile_transform`).
+
+ ## Debugging checklist
+ 1. Confirm `experiments/` contains `*_phi_best.npy` and `*_logs.json`.
+ 2. Run `experiments/postprocess_logs.py` to coerce gates and attach the candidate snapshot.
+ 3. Inspect `logs[0]` for `candidate_array` and recomputed residue.
+ 4. Run `itt_solver.tests.run_atomic_effects()` to verify transforms change the input.
+ 5. Run a relaxed smoke beam (lock_coeff=0, max_fraction=1.0, beam_width≥6) and inspect `sigmas`.
+
+ ## Practical tips
+ - After editing package files, **clear the Python module cache** or restart the interpreter.
+ - Keep `candidate_array` optional in logs to avoid huge files; include it for debugging runs.
+ - Use small, deterministic transforms for initial debugging (Rotate, Reflect, ShiftedTile).
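The "quantize and resize consistently" lesson can be sketched as a standalone residue helper. This is a hedged illustration, assuming residue counts mismatched cells after `np.rint` quantization and `tile_transform` resizing, matching the minimal runner in `SKILLS/arc_agi_project/references/examples/minimal_runner.py`; the `residue` name is illustrative:

```python
import numpy as np

def tile_transform(phi, out_shape):
    # tile the input until it covers out_shape, then crop
    a = np.array(phi)
    h_out, w_out = out_shape
    h_in, w_in = a.shape
    reps = ((h_out + h_in - 1) // h_in, (w_out + w_in - 1) // w_in)
    return np.tile(a, reps)[:h_out, :w_out]

def residue(phi, target):
    # evaluate on the SAME quantized, tiled array the beam scores:
    # quantize with np.rint, tile to the target shape, count mismatches
    target = np.asarray(target)
    cand = np.rint(tile_transform(phi, target.shape)).astype(int)
    return int((cand != target).sum())
```

Computing residue on the raw (unquantized, untiled) candidate instead is exactly the inconsistency this lesson warns about.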
README.md ADDED
@@ -0,0 +1,21 @@
+ # Pre‑Emergence Mechanics Framework (PEMF) — ARC‑AGI
+
+ Short summary
+ The Pre‑Emergence Mechanics Framework (PEMF) frames ARC tasks as a boundary‑constrained field problem solved by minimizing irreducible residue (σ) under writability gates. PEMF implements four core primitives — **Scalar Potential (φ)**, **Gradient Ordering (∇)**, **Residue (σ)**, and **Boundary Charge (ρ_q)** — and composes atomic transforms (tile, shifted tile, fill_enclosed, rotate, reflect, etc.) in a beam search to drain residue and produce stable outputs.
+
+ Quick verification
+ - Run the PEMF example to verify primitives and a tiny compositional loop:
+ ```bash
+ python SKILLS/pre_emergence_mechanics_framework/references/examples/verify_pemf.py
+ ```
+ - Run a single experiment (example):
+ ```bash
+ python scripts/entrypoint.py --task example1 --out_dir experiments
+ ```
+ - Postprocess logs to attach the candidate snapshot and coerce gates:
+ ```bash
+ python experiments/postprocess_logs.py
+ ```
+
+ Where to find skill artifacts
+ - `SKILLS/pre_emergence_mechanics_framework/` — howto, runnable example `references/examples/verify_pemf.py`, and README for the skill.
README_PEMF.md ADDED
@@ -0,0 +1,40 @@
+ # Pre‑Emergence Mechanics Framework (PEMF) — ARC‑AGI
+
+ Short summary
+ The Pre‑Emergence Mechanics Framework (PEMF) frames ARC tasks as a boundary‑constrained field problem solved by minimizing irreducible residue (σ) under writability gates. PEMF implements four core primitives — **Scalar Potential (φ)**, **Gradient Ordering (∇)**, **Residue (σ)**, and **Boundary Charge (ρ_q)** — and composes atomic transforms (tile, shifted tile, fill_enclosed, rotate, reflect, etc.) in a beam search to drain residue and produce stable outputs.
+
+ Why this matters
+ PEMF shows how ARC tasks can be solved mechanically (σ‑minimization + gates) rather than by symbolic heuristics. The approach maps CTS/ITT primitives to executable operators (potential fields, gradients, Dirichlet masks, complex projections) and yields a reproducible solver recipe.
+
+ Key concepts (one line each)
+ - **Scalar Potential (φ):** represent the grid as a numeric potential field (initialize_potential).
+ - **Gradient Ordering (∇):** discrete gradients direct admissible edits.
+ - **Residue (σ):** L1 misalignment after quantize+tile; the objective to minimize.
+ - **Boundary Charge (ρ_q):** Dirichlet boundary mask that enforces writability gates.
+ - **Layer −1 diagnostics:** complex projection (FFT imaginary component) to find latent edit zones when the real signal is weak.
+
+ Files and examples
+ - **Skill artifacts:** `SKILLS/pre_emergence_mechanics_framework/` — howto, runnable example `references/examples/verify_pemf.py`, and README for the skill.
+ - **Postprocess logs:** `experiments/postprocess_logs.py` — coerce gate booleans and attach candidate snapshots for offline inspection.
+ - **Headless entry:** `scripts/entrypoint.py` — run experiments from the CLI; the `--use_wandb` flag is optional and defaults to off.
+
+ Quick verification (headless)
+ 1. Run the PEMF example to verify primitives and a tiny compositional loop:
+ ```bash
+ python SKILLS/pre_emergence_mechanics_framework/references/examples/verify_pemf.py
+ ```
+ 2. Run a single experiment (example):
+ ```bash
+ python scripts/entrypoint.py --task example1 --out_dir experiments
+ ```
+ 3. Postprocess logs to attach the candidate snapshot and coerce gates:
+ ```bash
+ python experiments/postprocess_logs.py
+ ```
+
+ Acceptance checks
+ - `verify_pemf.py` prints a residue trace and reports at least one admissible edit zone from the complex projection.
+ - `experiments/*_phi_best.npy` and `experiments/*_logs.fixed.json` exist after a run and contain the candidate snapshot and boolean gates for inspection.
+
+ References and provenance
+ This README summarizes the executable PEMF recipe derived from the ARC‑AGI exposition (PEMF / CTS / ITT). See `SKILLS/pre_emergence_mechanics_framework/references/` for runnable examples and a step‑by‑step how‑to.
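As a hedged sketch of the layer-minus-one diagnostic mentioned above (complex projection via the FFT imaginary component), the latent-edit-zone mask could be approximated by thresholding the magnitude of that imaginary component at a high percentile. The function name and the percentile default are illustrative assumptions; the repo's actual implementation lives in `itt_solver/layer_minus_one.py` and may differ:

```python
import numpy as np

def layer_minus_one_mask(phi, percentile=90.0):
    # complex projection: imaginary component of the 2-D FFT of the field
    imag = np.fft.fft2(np.asarray(phi, dtype=float)).imag
    mag = np.abs(imag)
    # keep cells whose spectral imaginary magnitude is in the top decile;
    # these are proposed as latent (admissible) edit zones
    return mag >= np.percentile(mag, percentile)
```

For a non-constant grid the mask flags cells whose imaginary spectral component is large, which is the regime the text describes as useful "when real signal is weak".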
SKILLS/README_SKILLS.md ADDED
@@ -0,0 +1,2 @@
+ SKILLS directory: central place for skill documentation and runnable examples.
+ Use SKILL.md to find the skill you need, then open the corresponding references/examples.
SKILLS/SKILL.md ADDED
@@ -0,0 +1,23 @@
+ # SKILL Index (project root)
+
+ Purpose
+ - Central index of skills and reference artifacts for developers and agents.
+ - Each skill has a short description here; larger skills include a `references/` folder with a howto and runnable examples.
+
+ Top-level skills (descriptive names)
+ - code_reviewer — automated review heuristics and safe-change checklist.
+ - unit_tester — unit test design, fixtures, and templates.
+ - arc_agi_project — domain knowledge and reproducible checklist for ARC‑AGI.
+ - transform_designer — design, test, and verify transforms.
+ - logging_observability — logging, postprocessing, and headless diagnostics.
+ - experiment_runner — headless entrypoint patterns and safe external logging.
+ - pre_emergence_mechanics_framework — PEMF skill (ARC‑AGI primitives and runnable examples).
+
+ How to use
+ - Small skills: keep the full recipe in SKILL.md under the skill title.
+ - Larger skills: open `SKILLS/<skill>/references/howto.md` and `SKILLS/<skill>/references/examples/*` for runnable examples.
+ - Add new skills by creating `SKILLS/<skill>/references/` and adding `howto.md` + examples.
+
+ Verification principle
+ - Each skill must include at least one deterministic example and one verification step.
SKILLS/arc_agi_project/references/examples/minimal_runner.py ADDED
@@ -0,0 +1,25 @@
+ #!/usr/bin/env python3
+ import numpy as np
+
+ def tile_transform(phi, out_shape):
+     a = np.array(phi)
+     h_out, w_out = out_shape
+     h_in, w_in = a.shape
+     reps_h = (h_out + h_in - 1) // h_in
+     reps_w = (w_out + w_in - 1) // w_in
+     tiled = np.tile(a, (reps_h, reps_w))
+     return tiled[:h_out, :w_out]
+
+ def quantize(phi):
+     return np.rint(phi).astype(int)
+
+ INPUT = np.array([[0,7,7],[7,7,7],[0,7,7]])
+ TARGET = np.zeros((9,9), dtype=int)
+
+ def rotate90(phi):
+     return np.rot90(phi, 1)
+
+ if __name__ == '__main__':
+     cand = rotate90(INPUT)
+     cand_for_target = tile_transform(cand, TARGET.shape)
+     print("Residue:", int((quantize(cand_for_target) != TARGET).sum()))
SKILLS/arc_agi_project/references/howto.md ADDED
@@ -0,0 +1,8 @@
+ # ARC-AGI Project — How To
+
+ Purpose
+ - Domain checklist and minimal reproducible runner for ARC tasks.
+
+ Key points
+ - Tasks are small grids; scoring is L1 after quantize+tile.
+ - Always attach candidate snapshots for debugging.
SKILLS/code_reviewer/references/examples/review_checks.py ADDED
@@ -0,0 +1,32 @@
+ #!/usr/bin/env python3
+ # Lightweight static check: find suspicious top-level calls
+ import ast, os, json
+
+ def suspicious_top_level_calls(path):
+     with open(path, 'r', encoding='utf-8') as fh:
+         src = fh.read()
+     tree = ast.parse(src)
+     for node in tree.body:
+         # docstrings parse as ast.Constant, not ast.Call, so any top-level
+         # expression statement that is a call counts as suspicious
+         if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
+             return True
+     return False
+
+ def run_check(root='.'):
+     issues = []
+     for dirpath, _, files in os.walk(root):
+         if '.git' in dirpath or 'venv' in dirpath or 'SKILLS' in dirpath:
+             continue
+         for f in files:
+             if f.endswith('.py'):
+                 p = os.path.join(dirpath, f)
+                 try:
+                     if suspicious_top_level_calls(p):
+                         issues.append(p)
+                 except Exception:
+                     pass
+     print(json.dumps({'suspicious_files': issues}, indent=2))
+
+ if __name__ == '__main__':
+     run_check()
SKILLS/code_reviewer/references/howto.md ADDED
@@ -0,0 +1,13 @@
+ # Code Reviewer — How To
+
+ Purpose
+ - Checklist and lightweight automated checks to ensure safe, minimal changes.
+
+ Quick checklist
+ - No side effects at import time.
+ - Avoid circular imports; prefer lazy imports.
+ - Logging must be JSON serializable; gate values must be booleans.
+ - Add tests for any change.
+
+ Verification
+ - Run the example script in references/examples/review_checks.py.
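The "JSON serializable, boolean gates" checklist item could itself be automated with a tiny validator. This is a sketch, not part of the repo; `validate_log_entry` is a hypothetical helper name:

```python
import json

def validate_log_entry(entry):
    # raises TypeError if the entry is not JSON serializable
    json.dumps(entry)
    # gate values must be real booleans, not strings like "True"
    gates = entry.get("gates", {})
    return [k for k, v in gates.items() if not isinstance(v, bool)]
```

An empty return list means the entry passes; a non-empty list names the offending gates, which is exactly the failure mode seen in this commit's `*_logs.json` files.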
SKILLS/experiment_runner/references/examples/entrypoint_example.py ADDED
@@ -0,0 +1,18 @@
+ #!/usr/bin/env python3
+ import argparse, json, os
+
+ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--task', required=True)
+     parser.add_argument('--out_dir', default='experiments')
+     parser.add_argument('--use_wandb', action='store_true')
+     args = parser.parse_args()
+     os.makedirs(args.out_dir, exist_ok=True)
+     import numpy as np
+     phi = np.zeros((9,9))
+     np.save(os.path.join(args.out_dir, f"{args.task}_phi_best.npy"), phi)
+     logs = [[{'accepted': True, 'atomic': '<Demo>', 'residue': 0, 'gates': {'A': True}}]]
+     with open(os.path.join(args.out_dir, f"{args.task}_logs.json"), 'w') as fh:
+         json.dump(logs, fh)
+     print("Wrote artifacts to", args.out_dir)
+
+ if __name__ == '__main__':
+     main()
SKILLS/experiment_runner/references/howto.md ADDED
@@ -0,0 +1,8 @@
+ # Experiment Runner — How To
+
+ Purpose
+ - Headless entrypoint patterns and safe external logging.
+
+ Guidelines
+ - Keep W&B optional and external to the core run.
+ - Provide a CLI entrypoint with `--use_wandb` defaulting to false.
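Both guidelines can be sketched in one minimal entrypoint: the CLI flag defaults to off, and W&B is imported and called only after the core run finishes. The function names and the `itt_solver` project string here are illustrative assumptions, not the repo's actual entrypoint:

```python
import argparse, json, os

def run_core(task):
    # the core run performs no external logging side effects
    return {"task": task, "final_sigma": 0.0}

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--task", required=True)
    parser.add_argument("--out_dir", default="experiments")
    parser.add_argument("--use_wandb", action="store_true")  # defaults to off
    args = parser.parse_args(argv)
    result = run_core(args.task)
    os.makedirs(args.out_dir, exist_ok=True)
    with open(os.path.join(args.out_dir, f"{args.task}_result.json"), "w") as fh:
        json.dump(result, fh)
    if args.use_wandb:
        # external logging happens only after the core run, and only on demand
        import wandb  # optional dependency; never imported otherwise
        run = wandb.init(project="itt_solver")
        run.log(result)
        run.finish()
    return result
```

Because the `wandb` import sits inside the flag check, a machine without W&B installed can still run every experiment headlessly.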
SKILLS/logging_observability/references/examples/postprocess_logs.py ADDED
@@ -0,0 +1,27 @@
+ #!/usr/bin/env python3
+ import glob, json
+ import numpy as np
+
+ def coerce_gates(g):
+     out = {}
+     for k, v in (g or {}).items():
+         if isinstance(v, str):
+             lv = v.strip().lower()
+             out[k] = lv in ('true', '1', 'yes')
+         else:
+             out[k] = bool(v)
+     return out
+
+ logs_path = sorted(glob.glob('experiments/*_logs.json'))[-1]
+ phi_path = sorted(glob.glob('experiments/*_phi_best.npy'))[-1]
+ with open(logs_path) as fh:
+     logs = json.load(fh)
+ phi = np.load(phi_path)
+
+ for entry in logs[0]:
+     entry['gates'] = coerce_gates(entry.get('gates'))
+     if entry.get('accepted') and 'candidate_array' not in entry:
+         entry['candidate_array'] = phi.tolist()
+
+ fixed = logs_path.replace('_logs.json', '_logs.fixed.json')
+ with open(fixed, 'w') as fh:
+     json.dump(logs, fh, indent=2)
+ print("Wrote", fixed)
SKILLS/logging_observability/references/howto.md ADDED
@@ -0,0 +1,8 @@
+ # Logging & Observability — How To
+
+ Purpose
+ - How to log experiments for offline inspection and headless operation.
+
+ Guidelines
+ - Logs must be JSON serializable and compact.
+ - Candidate snapshots optional; include when debugging.
SKILLS/transform_designer/references/examples/transforms_demo.py ADDED
@@ -0,0 +1,27 @@
+ #!/usr/bin/env python3
+ import numpy as np
+
+ def tile_transform(phi, out_shape):
+     a = np.array(phi)
+     h_out, w_out = out_shape
+     h_in, w_in = a.shape
+     reps_h = (h_out + h_in - 1) // h_in
+     reps_w = (w_out + w_in - 1) // w_in
+     tiled = np.tile(a, (reps_h, reps_w))
+     return tiled[:h_out, :w_out]
+
+ class ShiftedTile:
+     def __init__(self, shift=(1,1), factor=3):
+         self.shift = shift
+         self.factor = factor
+
+     def apply(self, phi):
+         h, w = phi.shape
+         tiled = tile_transform(phi, (h*self.factor, w*self.factor))
+         return np.roll(tiled, shift=self.shift, axis=(0,1))
+
+ if __name__ == '__main__':
+     inp = np.array([[0,7,7],[7,7,7],[0,7,7]])
+     T = ShiftedTile()
+     out = T.apply(inp)
+     out_resized = tile_transform(out, inp.shape) if out.shape != inp.shape else out
+     print("Changed cells:", int((out_resized != inp).sum()))
SKILLS/transform_designer/references/howto.md ADDED
@@ -0,0 +1,8 @@
+ # Transform Designer — How To
+
+ Purpose
+ - Design deterministic, testable transforms useful to the beam.
+
+ Guidelines
+ - Transforms must be pure functions with documented shape semantics.
+ - Add a small shift for tiled transforms to avoid no-ops.
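The no-op guideline can be unit-tested with a small helper that resizes the transform output back to the input shape and compares cell-by-cell. `is_noop` is a hypothetical name for illustration, assuming the same tile-and-crop resizing used elsewhere in this commit:

```python
import numpy as np

def is_noop(transform, phi):
    # apply the transform, resize the output back to the input shape by
    # tiling and cropping, then compare cell-by-cell with the input
    phi = np.asarray(phi)
    out = np.asarray(transform(phi))
    h, w = phi.shape
    if out.shape != (h, w):
        reps = (-(-h // out.shape[0]), -(-w // out.shape[1]))  # ceil division
        out = np.tile(out, reps)[:h, :w]
    return bool((out == phi).all())
```

This makes the motivation for ShiftedTile concrete: plain tiling is invisible after resizing back, while a shifted tile produces real edits the beam can score.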
SKILLS/unit_tester/references/examples/test_templates.py ADDED
@@ -0,0 +1,21 @@
+ #!/usr/bin/env python3
+ import numpy as np
+
+ def tile_transform(phi, out_shape):
+     a = np.array(phi)
+     h_out, w_out = out_shape
+     h_in, w_in = a.shape
+     reps_h = (h_out + h_in - 1) // h_in
+     reps_w = (w_out + w_in - 1) // w_in
+     tiled = np.tile(a, (reps_h, reps_w))
+     return tiled[:h_out, :w_out]
+
+ def test_tile_transform():
+     inp = np.array([[1,2],[3,4]])
+     out = tile_transform(inp, (3,3))
+     assert out.shape == (3,3)
+     assert out[0,0] == 1
+
+ if __name__ == '__main__':
+     test_tile_transform()
+     print("unit_tester example passed")
SKILLS/unit_tester/references/howto.md ADDED
@@ -0,0 +1,9 @@
+ # Unit Tester — How To
+
+ Purpose
+ - Design deterministic, fast unit tests for core primitives.
+
+ Recipe
+ - Use small canonical fixtures.
+ - Assert exact arrays or counts.
+ - Mock external services.
TODO.md ADDED
@@ -0,0 +1,26 @@
+ # TODO (Prioritised)
+
+ ## Immediate (blockers)
+ - **Add candidate snapshot to beam logs** (if not already present) so each accepted candidate can be inspected offline.
+ - **Ensure gate values are booleans** in logs (coerce string values).
+ - **Make tile transform nontrivial** (shifted tiling) so transforms can change the field.
+ - **Implement robust fill_enclosed** (dependency‑free BFS hole filler).
+
+ ## Short term
+ - Add CLI entrypoint with `--use_wandb` flag (default: false).
+ - Add unit tests for `tile_transform`, `fill_enclosed`, and transform `.apply()` semantics.
+ - Add small visualization notebook for `phi_best`, diff maps, and Layer −1 masks.
+
+ ## Medium term
+ - Improve Layer −1 mask generation (percentile + min_abs + dilation).
+ - Add a toggle to include/exclude `candidate_array` in logs to control log size.
+ - Create a reproducible benchmark harness for parameter sweeps and CSV aggregation.
+
+ ## Long term
+ - Integrate a safe external W&B uploader (runs after experiments finish).
+ - Add more ARC tasks and automated evaluation harness.
+ - Document reproducibility steps and expected outputs for each example task.
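The "robust fill_enclosed" blocker could be prototyped as a dependency-free BFS hole filler. This sketch assumes "enclosed" means background cells unreachable from the grid border; the signature and defaults are illustrative, and the eventual `itt_solver` version may differ:

```python
import numpy as np
from collections import deque

def fill_enclosed(grid, fill_value=1, background=0):
    # flood-fill background cells reachable from the border via BFS;
    # any background cell NOT reached is enclosed and gets fill_value
    a = np.array(grid)
    h, w = a.shape
    reachable = np.zeros((h, w), dtype=bool)
    q = deque((r, c) for r in range(h) for c in range(w)
              if (r in (0, h - 1) or c in (0, w - 1)) and a[r, c] == background)
    for r, c in q:
        reachable[r, c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and a[nr, nc] == background and not reachable[nr, nc]:
                reachable[nr, nc] = True
                q.append((nr, nc))
    a[(a == background) & ~reachable] = fill_value
    return a
```

Using 4-connectivity keeps diagonal gaps "open", which is the usual convention for hole filling on ARC-style grids.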
experiments/example1_20260428T172250Z_logs.json ADDED
@@ -0,0 +1 @@
+ [[{"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}], [{"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", 
"passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, 
"B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}], [{"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, 
"energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform 
Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Rotate_90>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform Reflect_h>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, 
"shape": [9, 9]}]]
experiments/example1_20260428T172250Z_phi_best.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:660ada98c4dfce4cdf016cac4f3432f7e589a0c758e0a74a97f5719f4972caee
+ size 776
experiments/example1_20260428T172250Z_result.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "task_name": "example1",
+   "params": {
+     "beam_width": 6,
+     "max_depth": 3,
+     "lock_coeff": 0.0,
+     "max_fraction": 1.0,
+     "enable_layer_minus_one": true,
+     "boundary_source": "target",
+     "wandb_project": "itt_solver",
+     "wandb_anonymous": "allow"
+   },
+   "final_sigma": 98.0,
+   "sigma_trace": [
+     98.0,
+     98.0,
+     98.0,
+     98.0
+   ],
+   "time_s": 0.008741617202758789,
+   "transform": "<Transform Id\u2218tile_to_target\u2218tile_to_target\u2218tile_to_target>",
+   "states_count": 4
+ }
experiments/example1_20260428T172311Z_logs.json ADDED
@@ -0,0 +1 @@
+ [[{"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}], [{"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform tile_to_target>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}, {"atomic": "<Transform FillEnclosedHarmonic>", "score": 98.0, "residue": 98.0, "energy": 2352.0, "gates": {"A_boundary": true, "B_localization": "True", "C_quantization": "True", "passed": "True"}, "accepted": true, "shape": [9, 9]}]]
experiments/example1_20260428T172311Z_phi_best.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:660ada98c4dfce4cdf016cac4f3432f7e589a0c758e0a74a97f5719f4972caee
+ size 776
experiments/example1_20260428T172311Z_result.json ADDED
@@ -0,0 +1,21 @@
+ {
+ "task_name": "example1",
+ "params": {
+ "beam_width": 4,
+ "max_depth": 2,
+ "lock_coeff": 0.0,
+ "max_fraction": 0.5,
+ "enable_layer_minus_one": true,
+ "boundary_source": "target",
+ "use_symmetry": false
+ },
+ "final_sigma": 98.0,
+ "sigma_trace": [
+ 98.0,
+ 98.0,
+ 98.0
+ ],
+ "time_s": 0.0020961761474609375,
+ "transform": "<Transform Id\u2218tile_to_target\u2218tile_to_target>",
+ "states_count": 3
+ }
experiments/results.csv ADDED
@@ -0,0 +1,5 @@
+ task_name,params,final_sigma,time_s,transform,sigma_trace
+ example1,"{""beam_width"": 4, ""max_depth"": 2, ""lock_coeff"": 0.0, ""max_fraction"": 0.5, ""enable_layer_minus_one"": false, ""boundary_source"": ""target"", ""use_symmetry"": true}",98.0,0.003506183624267578,<Transform Id∘tile_to_target∘tile_to_target>,"[98.0, 98.0, 98.0]"
+ example1,"{""beam_width"": 4, ""max_depth"": 2, ""lock_coeff"": 0.0, ""max_fraction"": 0.5, ""enable_layer_minus_one"": false, ""boundary_source"": ""target"", ""use_symmetry"": false}",98.0,0.0017173290252685547,<Transform Id∘tile_to_target∘tile_to_target>,"[98.0, 98.0, 98.0]"
+ example1,"{""beam_width"": 4, ""max_depth"": 2, ""lock_coeff"": 0.0, ""max_fraction"": 0.5, ""enable_layer_minus_one"": true, ""boundary_source"": ""target"", ""use_symmetry"": true}",98.0,0.0046575069427490234,<Transform Id∘tile_to_target∘tile_to_target>,"[98.0, 98.0, 98.0]"
+ example1,"{""beam_width"": 4, ""max_depth"": 2, ""lock_coeff"": 0.0, ""max_fraction"": 0.5, ""enable_layer_minus_one"": true, ""boundary_source"": ""target"", ""use_symmetry"": false}",98.0,0.0020961761474609375,<Transform Id∘tile_to_target∘tile_to_target>,"[98.0, 98.0, 98.0]"
experiments_analysis.py ADDED
@@ -0,0 +1,159 @@
+ """
+ Quick diagnostics for itt_solver experiments.
+
+ Usage (from notebook or shell):
+     python experiments_analysis.py
+
+ It will:
+ - list recent files in experiments/
+ - print the latest result.json
+ - print depth-0 logs (candidates, gates, residues)
+ - load the latest phi_best and compute L1 vs a provided target (if you set TARGET_GRID below)
+ - test atomic transforms from default_atomic_factory to see if they change the input
+ """
+
+ import os
+ import glob
+ import json
+ import numpy as np
+ from pprint import pprint
+
+ # === Configure your example target here if you want an automatic L1 check ===
+ # Replace TARGET_GRID with your task's target grid (9x9 in your example).
+ TARGET_GRID = [
+     [0,0,0,0,7,7,0,7,7],
+     [0,0,0,7,7,7,7,7,7],
+     [0,0,0,0,7,7,0,7,7],
+     [0,7,7,0,7,7,0,7,7],
+     [7,7,7,7,7,7,7,7,7],
+     [0,7,7,0,7,7,0,7,7],
+     [0,0,0,0,7,7,0,7,7],
+     [0,0,0,7,7,7,7,7,7],
+     [0,0,0,0,7,7,0,7,7],
+ ]
+
+ EXPERIMENTS_DIR = "experiments"
+
+ def list_recent_files(n=20):
+     files = sorted(glob.glob(os.path.join(EXPERIMENTS_DIR, "*")))
+     print(f"Recent files (last {n}):")
+     for f in files[-n:]:
+         print(" ", f)
+     return files
+
+ def load_latest_result():
+     res_files = sorted(glob.glob(os.path.join(EXPERIMENTS_DIR, "*_result.json")))
+     if not res_files:
+         print("No result.json files found in experiments/")
+         return None, None
+     latest = res_files[-1]
+     print("\nLatest result file:", latest)
+     with open(latest) as fh:
+         data = json.load(fh)
+     pprint(data)
+     return latest, data
+
+ def load_latest_logs():
+     logs_files = sorted(glob.glob(os.path.join(EXPERIMENTS_DIR, "*_logs.json")))
+     if not logs_files:
+         print("No logs.json files found in experiments/")
+         return None, None
+     latest = logs_files[-1]
+     print("\nLatest logs file:", latest)
+     with open(latest) as fh:
+         logs = json.load(fh)
+     # pretty-print depth 0 entries if present
+     if logs and isinstance(logs, list) and len(logs) > 0:
+         print("\nDepth 0 log entries (summary):")
+         for i, entry in enumerate(logs[0]):
+             atomic = entry.get('atomic')
+             accepted = entry.get('accepted')
+             residue = entry.get('residue')
+             energy = entry.get('energy')
+             gates = entry.get('gates')
+             print(f"{i}: {atomic} | accepted={accepted} | residue={residue} | energy={energy} | gates={gates}")
+     else:
+         print("Logs format unexpected or empty.")
+     return latest, logs
+
+ def load_latest_phi():
+     phi_files = sorted(glob.glob(os.path.join(EXPERIMENTS_DIR, "*_phi_best.npy")))
+     if not phi_files:
+         print("No phi_best.npy files found in experiments/")
+         return None, None
+     latest = phi_files[-1]
+     print("\nLatest phi_best file:", latest)
+     phi = np.load(latest)
+     print("phi_best shape:", phi.shape, "unique values:", np.unique(phi))
+     return latest, phi
+
+ def l1_residue_check(phi, target_grid):
+     if phi is None:
+         print("No phi provided for residue check.")
+         return
+     target = np.array(target_grid, dtype=phi.dtype)
+     if phi.shape != target.shape:
+         print("phi and target shapes differ:", phi.shape, target.shape)
+         # try to tile/resize target to phi shape if phi is a tiled version
+         try:
+             from itt_solver.solver_core import tile_transform
+             target_resized = tile_transform(target, phi.shape)
+             print("Resized target to phi shape for comparison.")
+         except Exception:
+             print("Could not resize target automatically.")
+             return
+     else:
+         target_resized = target
+     l1 = float(np.sum(np.abs(phi - target_resized)))
+     print("L1 residue between phi_best and target:", l1)
+     return l1
+
+ def test_atomic_effects():
+     print("\nTesting atomic transforms from default_atomic_factory...")
+     try:
+         from itt_solver.experiment_driver import default_atomic_factory
+         from itt_solver.solver_core import initialize_potential, tile_transform
+     except Exception as e:
+         print("Could not import default_atomic_factory or solver_core:", e)
+         return
+     params = {'beam_width':6,'max_depth':3,'lock_coeff':0.0,'max_fraction':1.0,'enable_layer_minus_one':True,'boundary_source':'target'}
+     task_stub = {'target_shape': (9,9)}
+     atomic_library = default_atomic_factory(params, task_stub)
+     phi_in = initialize_potential([[0,7,7],[7,7,7],[0,7,7]])
+     print("Input shape:", phi_in.shape, "unique:", np.unique(phi_in))
+     for T in atomic_library:
+         try:
+             out = T.apply(phi_in.copy())
+         except Exception as e:
+             print(repr(T), "apply() raised:", e)
+             continue
+         # if shapes differ, try to tile output back to input shape for comparison
+         out_resized = out
+         if out.shape != phi_in.shape:
+             try:
+                 out_resized = tile_transform(out, phi_in.shape)
+             except Exception:
+                 # fallback: broadcast if possible
+                 try:
+                     out_resized = np.broadcast_to(out, phi_in.shape)
+                 except Exception:
+                     out_resized = None
+         if out_resized is None:
+             changed = None
+         else:
+             changed = int(np.sum(out_resized != phi_in))
+         print(repr(T), "-> out shape", out.shape, "changed cells (compared to input):", changed)
+
+ def main():
+     print("=== experiments_analysis.py diagnostics ===")
+     list_recent_files()
+     load_latest_result()
+     load_latest_logs()
+     _, phi = load_latest_phi()
+     if phi is not None:
+         l1_residue_check(phi, TARGET_GRID)
+     test_atomic_effects()
+     print("\nDone.")
+
+ if __name__ == "__main__":
+     main()
itt_solver/README.md.md ADDED
@@ -0,0 +1,20 @@
+ # ARC-AGI Reproduction Project
+
+ **Project goal**
+ Recreate the system described in *A Pre-Emergence Mechanics Framework for Solving the Abstraction and Reasoning Corpus ARC-AGI* by Achilles. The codebase implements a boundary-constrained field solver (Intent Tensor Theory substrate) and a beam-search driver that proposes transforms to reduce the L1 residue against ARC tasks.
+
+ **Current status**
+ - Core driver `itt_solver/experiment_driver.py` runs experiments and saves artifacts to `experiments/`.
+ - Runs complete and save `*_phi_best.npy`, `*_logs.json`, `*_result.json`.
+ - Immediate blockers discovered and addressed: W&B recursion removed; logging lacked candidate arrays; some transforms were no-ops. Diagnostics and small patches are in `experiments/` and `itt_solver/` to inspect and fix behavior.
+ - Smoke tests show the beam accepts candidates, but the residue has not decreased for the example task. See `experiments/` for logs.
+
+ **Quick start (minimal run)**
+ 1. Install dependencies (if not present): `numpy`, `matplotlib`.
+ 2. From the project root, run a single job in a notebook cell:
+ ```python
+ from itt_solver.experiment_driver import default_atomic_factory, run_single
+ # build task and params (example provided in repo)
+ res = run_single(task, default_atomic_factory(params, task), params, out_dir="experiments")
+ print(res)
+ ```
itt_solver/__init__.py ADDED
@@ -0,0 +1,10 @@
+ from .solver_core import (
+     initialize_potential,
+     discrete_gradient,
+     dirichlet_energy,
+     sigma_l1,
+     tile_transform,
+     fill_enclosed,
+     Transform,
+     beam_minimize,
+ )
itt_solver/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (373 Bytes).
itt_solver/__pycache__/beam_logging.cpython-312.pyc ADDED
Binary file (4.98 kB).
itt_solver/__pycache__/experiment_driver.cpython-312.pyc ADDED
Binary file (6.84 kB).
itt_solver/__pycache__/gates.cpython-312.pyc ADDED
Binary file (2.73 kB).
itt_solver/__pycache__/layer_minus_one.cpython-312.pyc ADDED
Binary file (1.52 kB).
itt_solver/__pycache__/solver_core.cpython-312.pyc ADDED
Binary file (7.36 kB).
itt_solver/__pycache__/tests.cpython-312.pyc ADDED
Binary file (14.2 kB).
itt_solver/__pycache__/transforms.cpython-312.pyc ADDED
Binary file (3.49 kB).
itt_solver/__pycache__/wandb_runner.cpython-312.pyc ADDED
Binary file (5.87 kB).
itt_solver/beam_logging.py ADDED
@@ -0,0 +1,142 @@
+ import numpy as np
+ from .solver_core import (
+     sigma_l1,
+     dirichlet_energy,
+     Transform,
+     tile_transform,
+ )
+ from .gates import validate_gates
+ from .layer_minus_one import admissible_edit_mask
+
+ def _resize_to_target(phi, target):
+     if phi.shape == target.shape:
+         return phi
+     return tile_transform(phi, target.shape)
+
+ def _compute_boundary_mask(phi_in, phi_target, target_shape, boundary_source='target'):
+     """
+     boundary_source: 'input' | 'resized_input' | 'target'
+     - 'input' uses phi_in != 0, then tiles inside validate_gates (less efficient)
+     - 'resized_input' tiles phi_in != 0 to target_shape and returns that mask
+     - 'target' uses phi_target != 0 (already target-shaped)
+     """
+     if boundary_source == 'target':
+         return (phi_target != 0)
+     if boundary_source == 'resized_input':
+         return _resize_to_target((phi_in != 0).astype(int), phi_target).astype(bool)
+     # fallback: 'input' -> return original small mask (validate_gates will tile if needed)
+     return (phi_in != 0)
+
+ def beam_minimize_with_log(phi_in, phi_target, atomic_library,
+                            beam_width=4, max_depth=4, lock_coeff=0.1,
+                            max_fraction=0.5, allowed_symbols=None,
+                            enable_layer_minus_one=False,
+                            boundary_source='target'):
+     """
+     Beam search with gate validation and an optional Layer -1 admissible mask.
+     boundary_source controls where ρ_q is derived from; 'target' is recommended for expansion tasks.
+     Returns the best transform, best field, states list, sigmas list, and logs per depth.
+     """
+     phi_in = np.array(phi_in, dtype=float)
+     phi_target = np.array(phi_target, dtype=float)
+
+     # Resize initial phi to target shape
+     phi0 = _resize_to_target(phi_in, phi_target)
+
+     identity = Transform(lambda p: p, "Id")
+     beam = [(identity, phi0, 0.0, [phi0], [sigma_l1(phi0, phi_target)])]
+     best = None
+
+     # Precompute the Layer -1 mask if enabled
+     layer_mask = None
+     if enable_layer_minus_one:
+         try:
+             layer_mask, _ = admissible_edit_mask(phi0)
+         except Exception:
+             layer_mask = None
+
+     # Precompute the boundary mask according to boundary_source
+     boundary_mask_resized = _compute_boundary_mask(phi_in, phi_target, phi_target.shape, boundary_source=boundary_source)
+
+     logs = []
+
+     for depth in range(max_depth):
+         candidates = []
+         depth_log = []
+         for T_cur, cur_field_resized, _, path_states, path_sigmas in beam:
+             base_field_for_apply = path_states[-1]
+
+             for idx, T_atomic in enumerate(atomic_library):
+                 # Apply atomic transform
+                 try:
+                     phi_after_atomic = T_atomic.apply(base_field_for_apply)
+                 except TypeError:
+                     phi_after_atomic = T_atomic.apply(phi_in)
+
+                 # Resize to target shape before scoring
+                 phi_new_resized = _resize_to_target(phi_after_atomic, phi_target)
+
+                 # If Layer -1 is enabled and a mask exists, allow edits only inside the mask
+                 if enable_layer_minus_one and layer_mask is not None:
+                     masked = cur_field_resized.copy()
+                     masked[layer_mask] = phi_new_resized[layer_mask]
+                     phi_candidate = masked
+                 else:
+                     phi_candidate = phi_new_resized
+
+                 residue = sigma_l1(phi_candidate, phi_target)
+                 energy = dirichlet_energy(phi_candidate)
+                 score = residue + lock_coeff * energy
+
+                 # Validate gates using the precomputed boundary mask
+                 gates_info = validate_gates(phi_candidate, phi_in, phi_target,
+                                             boundary_mask=boundary_mask_resized,
+                                             max_fraction=max_fraction,
+                                             allowed_symbols=allowed_symbols)
+
+                 # Only accept the candidate if the gates pass
+                 if not gates_info.get('passed', False):
+                     depth_log.append({
+                         'atomic': repr(T_atomic),
+                         'score': score,
+                         'residue': residue,
+                         'energy': energy,
+                         'gates': gates_info,
+                         'accepted': False,
+                         'shape': phi_candidate.shape,
+                     })
+                     continue
+
+                 new_states = path_states + [phi_candidate]
+                 new_sigmas = path_sigmas + [residue]
+                 T_new = T_cur.compose(T_atomic)
+
+                 candidates.append((T_new, phi_candidate, score, new_states, new_sigmas))
+
+                 depth_log.append({
+                     'atomic': repr(T_atomic),
+                     'score': score,
+                     'residue': residue,
+                     'energy': energy,
+                     'gates': gates_info,
+                     'accepted': True,
+                     'shape': phi_candidate.shape,
+                 })
+
+         logs.append(depth_log)
+
+         if not candidates:
+             break
+
+         candidates.sort(key=lambda x: x[2])
+         beam = candidates[:beam_width]
+         best = beam[0]
+
+         if sigma_l1(best[1], phi_target) == 0:
+             break
+
+     if best is None:
+         return identity, phi0, [phi0], [sigma_l1(phi0, phi_target)], logs
+
+     T_best, phi_best, _, states_best, sigmas_best = best
+     return T_best, phi_best, states_best, sigmas_best, logs
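`beam_minimize_with_log` follows the standard beam-search skeleton: expand every beam member, score each candidate, keep the `beam_width` lowest-scoring ones, and stop early on an exact match. A stripped-down sketch of just that skeleton, with the fields, gates, and transforms replaced by an invented toy numeric problem:

```python
def beam_search(start, expand, score, beam_width=4, max_depth=4):
    """Generic beam skeleton: expand every beam member, keep the
    beam_width lowest-score candidates, repeat for max_depth levels."""
    beam = [start]
    for _ in range(max_depth):
        candidates = [child for state in beam for child in expand(state)]
        if not candidates:
            break
        candidates.sort(key=score)
        beam = candidates[:beam_width]
        if score(beam[0]) == 0:  # early exit on an exact match
            break
    return beam[0]

# Toy problem: reach 17 by repeatedly adding +5, +3, or +1 to a running sum;
# the score is the absolute distance from the target, so 0 means solved.
best = beam_search(
    start=0,
    expand=lambda s: [s + 5, s + 3, s + 1],
    score=lambda s: abs(17 - s),
    beam_width=3,
    max_depth=5,
)
print(best)
```

The solver above adds gate rejection before a candidate may enter `candidates`, and carries the per-path state and sigma histories alongside each beam entry.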
itt_solver/experiment_driver.py ADDED
@@ -0,0 +1,112 @@
+ import csv
+ import os
+ import time
+ import json
+ import importlib
+ import numpy as np
+ from itertools import product
+ from datetime import datetime
+ from .solver_core import initialize_potential
+ from .beam_logging import beam_minimize_with_log
+
+ def reload_modules():
+     import itt_solver.solver_core as sc
+     import itt_solver.beam_logging as bl
+     import itt_solver.transforms as tr
+     import itt_solver.gates as gates
+     import itt_solver.layer_minus_one as l1
+     importlib.reload(sc); importlib.reload(bl); importlib.reload(tr); importlib.reload(gates); importlib.reload(l1)
+
+ def param_grid(grid_dict):
+     keys = list(grid_dict.keys())
+     vals = [grid_dict[k] for k in keys]
+     for combo in product(*vals):
+         yield dict(zip(keys, combo))
+
+ def run_single(task, atomic_library, params, out_dir):
+     """
+     Run one experiment and save artifacts to out_dir.
+     This function does NOT call W&B itself, to avoid recursion; call the external
+     uploader (itt_solver.wandb_runner.run_and_log_wandb) after this returns if desired.
+     """
+     os.makedirs(out_dir, exist_ok=True)
+
+     phi_in = initialize_potential(task['input'])
+     phi_target = initialize_potential(task['target'])
+     start = time.time()
+     T_best, phi_best, states, sigmas, logs = beam_minimize_with_log(
+         phi_in, phi_target, atomic_library,
+         beam_width=params.get('beam_width', 4),
+         max_depth=params.get('max_depth', 3),
+         lock_coeff=params.get('lock_coeff', 0.01),
+         max_fraction=params.get('max_fraction', 0.5),
+         allowed_symbols=params.get('allowed_symbols', list(range(10))),
+         enable_layer_minus_one=params.get('enable_layer_minus_one', False),
+         boundary_source=params.get('boundary_source', 'target'),
+     )
+     elapsed = time.time() - start
+     result = {
+         'task_name': task.get('name', 'task'),
+         'params': params,
+         'final_sigma': float(sigmas[-1]) if sigmas else None,
+         'sigma_trace': [float(s) for s in sigmas],
+         'time_s': elapsed,
+         'transform': repr(T_best),
+         'states_count': len(states),
+     }
+     # save best field and logs
+     ts = datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
+     base = f"{task.get('name', 'task')}_{ts}"
+     np.save(os.path.join(out_dir, base + "_phi_best.npy"), phi_best)
+     with open(os.path.join(out_dir, base + "_result.json"), "w") as f:
+         json.dump(result, f, indent=2)
+     with open(os.path.join(out_dir, base + "_logs.json"), "w") as f:
+         json.dump(logs, f, default=str)
+
+     # IMPORTANT: do not call run_and_log_wandb here, to avoid recursion.
+     # If you want W&B logging, call itt_solver.wandb_runner.run_and_log_wandb(...) externally.
+
+     return result
+
+ def sweep(tasks, atomic_library_factory, grid, out_dir="experiments", max_runs=None):
+     os.makedirs(out_dir, exist_ok=True)
+     reload_modules()
+     csv_path = os.path.join(out_dir, "results.csv")
+     header_written = os.path.exists(csv_path)
+     runs = 0
+     with open(csv_path, "a", newline="") as csvfile:
+         writer = csv.DictWriter(csvfile, fieldnames=["task_name", "params", "final_sigma", "time_s", "transform", "sigma_trace"])
+         if not header_written:
+             writer.writeheader()
+         for params in param_grid(grid):
+             for task in tasks:
+                 if max_runs and runs >= max_runs:
+                     return
+                 atomic_library = atomic_library_factory(params, task)
+                 res = run_single(task, atomic_library, params, out_dir)
+                 writer.writerow({
+                     "task_name": res['task_name'],
+                     "params": json.dumps(res['params']),
+                     "final_sigma": res['final_sigma'],
+                     "time_s": res['time_s'],
+                     "transform": res['transform'],
+                     "sigma_trace": json.dumps(res['sigma_trace']),
+                 })
+                 csvfile.flush()
+                 runs += 1
+     return
+
+ # Example atomic library factory for common ARC families
+ def default_atomic_factory(params, task):
+     import itt_solver.transforms as tr
+     from itt_solver.solver_core import tile_transform  # ensure tile_transform is available here
+     libs = []
+     # always include tile and fill
+     libs.append(tr.Transform(lambda p: tile_transform(p, (task['target_shape'][0], task['target_shape'][1])), "tile_to_target"))
+     libs.append(tr.FillEnclosedHarmonic())
+     # optional rotations/reflections
+     if params.get('use_symmetry', True):
+         libs.append(tr.Rotate(1))
+         libs.append(tr.Reflect('h'))
+     return libs
+
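`sweep` enumerates the Cartesian product of the value lists in `grid` via `param_grid`. Re-declaring that helper here (identical to the definition above) so the sketch runs standalone; the grid values mirror the four runs recorded in `experiments/results.csv`:

```python
from itertools import product

def param_grid(grid_dict):
    # Yield one dict per combination of the value lists (Cartesian product).
    keys = list(grid_dict.keys())
    vals = [grid_dict[k] for k in keys]
    for combo in product(*vals):
        yield dict(zip(keys, combo))

grid = {
    "beam_width": [4],
    "enable_layer_minus_one": [False, True],
    "use_symmetry": [True, False],
}
combos = list(param_grid(grid))
print(len(combos))  # 1 * 2 * 2 = 4 combinations
```

Because `product` varies the last key fastest, the sweep order matches the row order in `results.csv`: `use_symmetry` toggles within each `enable_layer_minus_one` setting.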
itt_solver/gates.py ADDED
@@ -0,0 +1,51 @@
+ from .solver_core import tile_transform
+ import numpy as np
+
+ def gate_boundary_respect(phi_candidate, phi_in, phi_target, boundary_mask=None):
+     """Gate A: the candidate must not change locked boundary cells.
+     If boundary_mask is None, infer it from phi_in != 0.
+     If the mask shape differs from the candidate, resize it by tiling.
+     """
+     if boundary_mask is None:
+         boundary_mask = (phi_in != 0)
+
+     # If the mask shape differs, resize it to the candidate shape using tiling
+     if boundary_mask.shape != phi_candidate.shape:
+         # convert to int for tile_transform, then back to bool
+         try:
+             resized = tile_transform(boundary_mask.astype(int), phi_candidate.shape).astype(bool)
+         except Exception:
+             # fallback: broadcast along axes if possible
+             resized = np.broadcast_to(boundary_mask, phi_candidate.shape)
+     else:
+         resized = boundary_mask
+
+     # Reject if any candidate cell differs from phi_target where the resized mask is True
+     changed = np.any(phi_candidate[resized] != phi_target[resized])
+     return not changed
+
+
+ def gate_sigma_localization(phi_candidate, phi_target, max_fraction=0.5):
+     """Gate B: ensure the residue is localized. The fraction of differing cells must be <= max_fraction."""
+     diff = (phi_candidate != phi_target)
+     frac = np.sum(diff) / diff.size
+     return frac <= max_fraction
+
+ def gate_quantization(phi_candidate, allowed_symbols=None, tol=1e-6):
+     """Gate C: candidate values must quantize to allowed_symbols (integers 0-9 by default).
+     Allow a small numerical tolerance.
+     """
+     if allowed_symbols is None:
+         allowed_symbols = list(range(10))
+     # Round the candidate to the nearest integer and check membership
+     rounded = np.rint(phi_candidate).astype(int)
+     mask = np.isin(rounded, allowed_symbols)
+     return np.all(mask)
+
+ def validate_gates(phi_candidate, phi_in, phi_target, boundary_mask=None, max_fraction=0.5, allowed_symbols=None):
+     """Run all gates and return a dict of booleans and an overall pass boolean."""
+     a = gate_boundary_respect(phi_candidate, phi_in, phi_target, boundary_mask)
+     b = gate_sigma_localization(phi_candidate, phi_target, max_fraction=max_fraction)
+     c = gate_quantization(phi_candidate, allowed_symbols=allowed_symbols)
+     passed = a and b and c
+     return {"A_boundary": a, "B_localization": b, "C_quantization": c, "passed": passed}
itt_solver/layer_minus_one.py ADDED
@@ -0,0 +1,33 @@
+ import numpy as np
+ from scipy.ndimage import binary_dilation
+
+ def compute_complex_phi(phi):
+     F = np.fft.fft2(phi)
+     phi_c = np.fft.ifft2(F)
+     return phi_c
+
+ def admissible_edit_mask(phi, imag_grad_threshold=None, percentile=95, min_abs=1e-3, dilate_iters=1):
+     """
+     Robust mask:
+     - If imag_grad_threshold provided, use it.
+     - Else compute threshold as max(percentile value, min_abs).
+     - Optionally dilate mask to give edit zones some area.
+     Returns (mask, mag).
+     """
+     phi_c = compute_complex_phi(phi)
+     imag = np.imag(phi_c)
+     gx, gy = np.gradient(imag)
+     mag = np.sqrt(gx**2 + gy**2)
+
+     if imag_grad_threshold is None:
+         pval = np.percentile(mag, percentile)
+         thresh = max(pval, min_abs)
+     else:
+         thresh = imag_grad_threshold
+
+     mask = mag > thresh
+
+     if dilate_iters > 0:
+         mask = binary_dilation(mask, iterations=dilate_iters)
+
+     return mask, mag
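The thresholding step inside `admissible_edit_mask` is plain percentile masking with an absolute floor; a numpy-only sketch of just that step on an invented magnitude field (the FFT and dilation parts are omitted):

```python
import numpy as np

def percentile_mask(mag, percentile=95, min_abs=1e-3):
    # Keep only cells whose magnitude exceeds both the percentile value
    # and an absolute floor, mirroring the threshold logic above.
    thresh = max(np.percentile(mag, percentile), min_abs)
    return mag > thresh

mag = np.zeros((10, 10))
mag[4, 4] = 1.0  # a single strong cell; the 95th percentile of the rest is 0
mask = percentile_mask(mag)
print(mask.sum(), mask[4, 4])
```

The `min_abs` floor matters on near-constant fields: when the percentile value collapses toward zero, the floor prevents the mask from admitting every cell with tiny numerical noise.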
itt_solver/solver_core.py ADDED
@@ -0,0 +1,132 @@
+ import numpy as np
+
+ def initialize_potential(grid):
+     return np.array(grid, dtype=float)
+
+ def discrete_gradient(phi):
+     gx = np.zeros_like(phi)
+     gy = np.zeros_like(phi)
+     gx[:-1, :] = phi[1:, :] - phi[:-1, :]
+     gy[:, :-1] = phi[:, 1:] - phi[:, :-1]
+     mag = np.sqrt(gx**2 + gy**2)
+     return gx, gy, mag
+
+ def dirichlet_energy(phi):
+     _, _, mag = discrete_gradient(phi)
+     return np.sum(mag**2)
+
+ def sigma_l1(phi_current, phi_target):
+     return np.sum(np.abs(phi_target - phi_current))
+
+ def tile_transform(phi_in, out_shape):
+     h_in, w_in = phi_in.shape
+     h_out, w_out = out_shape
+     out = np.zeros(out_shape, dtype=float)
+     for i in range(h_out):
+         for j in range(w_out):
+             out[i, j] = phi_in[i % h_in, j % w_in]
+     return out
+
+ import numpy as _np
+ from collections import Counter, deque
+
+ def fill_enclosed(phi, boundary_mask=None):
+     """
+     Fill zero regions that are fully enclosed by a non-zero boundary.
+     Returns a new array with enclosed holes filled with the modal neighbor color.
+     """
+     arr = _np.array(phi, dtype=int)
+     h, w = arr.shape
+     if boundary_mask is None:
+         boundary = (arr != 0)
+     else:
+         boundary = _np.array(boundary_mask, dtype=bool)
+
+     visited = _np.zeros((h, w), dtype=bool)
+     filled = arr.copy()
+
+     def flood(start_r, start_c):
+         q = deque()
+         q.append((start_r, start_c))
+         comp = []
+         touches_border = False
+         visited[start_r, start_c] = True
+         while q:
+             r, c = q.popleft()
+             comp.append((r, c))
+             if r == 0 or c == 0 or r == h-1 or c == w-1:
+                 touches_border = True
+             for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
+                 nr, nc = r+dr, c+dc
+                 if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
+                     if arr[nr, nc] == 0:
+                         visited[nr, nc] = True
+                         q.append((nr, nc))
+         return comp, touches_border
+
+     for i in range(h):
+         for j in range(w):
+             if arr[i, j] == 0 and not visited[i, j]:
+                 comp, touches_border = flood(i, j)
+                 if not touches_border:
+                     neighbor_vals = []
+                     for (r, c) in comp:
+                         for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
+                             nr, nc = r+dr, c+dc
+                             if 0 <= nr < h and 0 <= nc < w:
+                                 v = arr[nr, nc]
+                                 if v != 0:
+                                     neighbor_vals.append(int(v))
+                     if neighbor_vals:
+                         mode_color = Counter(neighbor_vals).most_common(1)[0][0]
+                     else:
+                         mode_color = 1
+                     for (r, c) in comp:
+                         filled[r, c] = mode_color
+     return filled
+
+
+ class Transform:
+     def __init__(self, func, name, params=None):
+         self.func = func
+         self.name = name
+         self.params = params or {}
+
+     def apply(self, phi):
+         return self.func(phi, **self.params)
+
+     def compose(self, other):
+         def composed(phi):
+             return other.apply(self.apply(phi))
+         return Transform(lambda p: composed(p), f"{self.name}∘{other.name}")
+
+     def __repr__(self):
+         return f"<Transform {self.name}>"
+
+ def beam_minimize(phi_in, phi_target, atomic_library, beam_width=4, max_depth=4, lock_coeff=0.1):
+     identity = Transform(lambda p: p, "Id")
+     beam = [(identity, phi_in, 0.0)]
+     best = None
+
+     for depth in range(max_depth):
+         candidates = []
+         for T_cur, _, _ in beam:
+             for T_atomic in atomic_library:
+                 T_new = T_cur.compose(T_atomic)
+                 phi_new = T_new.apply(phi_in)
+                 residue = sigma_l1(phi_new, phi_target)
+                 energy = dirichlet_energy(phi_new)
+                 score = residue + lock_coeff * energy
+                 candidates.append((T_new, phi_new, score))
+
+         if not candidates:
+             break
+
+         candidates.sort(key=lambda x: x[2])
+         beam = candidates[:beam_width]
+         best = beam[0]
+
+         if sigma_l1(best[1], phi_target) == 0:
+             break
+
+     return best[0], best[1]
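`tile_transform` and `sigma_l1` are the workhorses of the example runs. A standalone sketch re-declaring them, with `np.tile` plus cropping in place of the explicit loops (equivalent to the modulo indexing above, since `i % h_in` just repeats the block):

```python
import numpy as np

def tile_transform(phi_in, out_shape):
    # Modulo tiling, equivalent to the loop version: repeat the input
    # enough times in each direction, then crop to the requested shape.
    h_in, w_in = phi_in.shape
    h_out, w_out = out_shape
    reps = (-(-h_out // h_in), -(-w_out // w_in))  # ceil division
    return np.tile(phi_in, reps)[:h_out, :w_out].astype(float)

def sigma_l1(phi_current, phi_target):
    # L1 residue: total absolute cell-wise disagreement with the target.
    return float(np.sum(np.abs(phi_target - phi_current)))

# The 3x3 example block used throughout this repo, expanded to 9x9.
block = np.array([[0, 7, 7], [7, 7, 7], [0, 7, 7]], dtype=float)
tiled = tile_transform(block, (9, 9))
print(tiled.shape, sigma_l1(tiled, tiled))
```

Note this plain tiling is exactly why the example-task residue plateaus at 98.0: the 9x9 target is not a literal 3x3 repetition, so tiling alone leaves a fixed disagreement that `fill_enclosed` does not remove.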
itt_solver/tests.py ADDED
@@ -0,0 +1,297 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Utility tests and debug helpers for the itt_solver beam runs.
3
+
4
+ Save this file into the `itt_solver/` package and import the helpers from your notebook.
5
+ Example usage (in a notebook cell after a run that produced `states, sigmas, logs, phi_target`):
6
+
7
+ from itt_solver import tests
8
+ tests.print_depth0_logs(logs)
9
+ tests.check_first_accepted_score(logs, lock_coeff=0.01)
10
+ tests.gate_failure_summary(logs)
11
+ tests.plot_layer1_mask(states[0], l1_module) # pass the layer_minus_one module you imported as l1
12
+
13
+ Added helpers:
14
+ - transform_effect_test(transform, phi): returns number of changed cells and a small diff map.
15
+ - sigma_decrease_smoke_test(beam_func, phi_in, phi_target, atomic_library): runs a relaxed beam and reports sigma trace and whether sigma decreased.
16
+ - run_all_quick_checks(...) convenience runner for quick local verification.
17
+ """
18
+
19
+ from pprint import pprint
20
+ from collections import Counter
21
+ import matplotlib.pyplot as plt
22
+ import numpy as np
23
+
24
+ def print_depth0_logs(logs):
25
+ """Pretty-print depth-0 logs and accepted candidate summary."""
26
+ if not logs:
27
+ print("No logs available.")
28
+ return
29
+ if len(logs) <= 0:
30
+ print("Logs list is empty.")
31
+ return
32
+
33
+ depth0 = logs[0]
34
+ print("Depth 0 log entries:", len(depth0))
35
+ pprint(depth0)
36
+
37
+ accepted = [r for r in depth0 if r.get('accepted')]
38
+ print("Accepted count at depth 0:", len(accepted))
39
+ for i, r in enumerate(accepted):
40
+ print(i, r.get('atomic'), "score", r.get('score'), "gates", r.get('gates'))
41
+
42
+
43
+ def check_first_accepted_score(logs, lock_coeff=0.01, tolerance=1e-8):
44
+ """
45
+ Find the first accepted candidate in depth 0 and assert score == residue + lock_coeff * energy.
46
+ Returns True if check passes, False otherwise.
47
+ """
48
+ if not logs or len(logs) == 0:
49
+ print("No logs to check.")
50
+ return False
51
+
52
+ first_accepted = next((r for r in logs[0] if r.get('accepted')), None)
53
+ if first_accepted is None:
54
+ print("No accepted candidates at depth 0. See logs for gate failures.")
55
+ return False
56
+
57
+ res = first_accepted.get('residue')
58
+ E = first_accepted.get('energy')
59
+ score = first_accepted.get('score')
60
+ if res is None or E is None or score is None:
61
+ print("Missing numeric fields in the candidate log.")
62
+ return False
63
+
64
+ ok = abs(score - (res + lock_coeff * E)) < tolerance
65
+ if ok:
66
+ print("Score check passed for first accepted candidate.")
67
+ else:
68
+ print("Score check FAILED.")
69
+ print(f"Logged score: {score}")
70
+ print(f"Computed : {res + lock_coeff * E}")
71
+ return ok
72
+
73
+
+
+ def gate_failure_summary(logs):
+     """
+     Count gate failures at depth 0 and print a summary.
+     Returns a Counter with failure counts.
+     """
+     if not logs:
+         print("No logs to summarize.")
+         return Counter()
+
+     c = Counter()
+     for r in logs[0]:
+         g = r.get('gates')
+         if not g:
+             c['no_gate_info'] += 1
+             continue
+         if not g.get('A_boundary', True):
+             c['A_boundary_failed'] += 1
+         if not g.get('B_localization', True):
+             c['B_localization_failed'] += 1
+         if not g.get('C_quantization', True):
+             c['C_quantization_failed'] += 1
+         if g.get('passed') is False:
+             c['total_rejected'] += 1
+         else:
+             c['total_accepted'] += 1
+     print("Gate failure summary (depth 0):", dict(c))
+     return c
+
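The same tallying logic can be exercised on a fabricated depth-0 log (gate names follow the helper above; the entries are invented):

```python
from collections import Counter

# Invented depth-0 entries: one localization failure, one pass, one without gate info.
depth0 = [
    {'gates': {'A_boundary': True, 'B_localization': False, 'passed': False}},
    {'gates': {'A_boundary': True, 'B_localization': True, 'passed': True}},
    {},
]

c = Counter()
for r in depth0:
    g = r.get('gates')
    if not g:
        c['no_gate_info'] += 1
        continue
    if not g.get('B_localization', True):
        c['B_localization_failed'] += 1
    c['total_rejected' if g.get('passed') is False else 'total_accepted'] += 1

print(dict(c))
```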
+
+
+ def plot_layer1_mask(state, l1_module, imag_grad_threshold=None, figsize=(4, 4)):
+     """
+     Compute and plot the Layer-1 admissible edit mask and magnitude.
+     - state: a NumPy array (resized candidate / phi field).
+     - l1_module: the imported layer_minus_one module
+       (e.g., import itt_solver.layer_minus_one as l1).
+     - imag_grad_threshold: optional threshold passed through to admissible_edit_mask.
+     """
+     if state is None:
+         print("No state provided.")
+         return
+
+     try:
+         mask, mag = l1_module.admissible_edit_mask(state, imag_grad_threshold)
+     except Exception as e:
+         print("Error computing Layer-1 mask:", e)
+         return
+
+     plt.figure(figsize=figsize)
+     plt.imshow(mask, cmap='gray')
+     plt.title('Layer-1 admissible edit mask')
+     plt.axis('off')
+     plt.show()
+
+     plt.figure(figsize=figsize)
+     plt.imshow(mag, cmap='magma')
+     plt.title('||∇Im(Φ_c)|| magnitude')
+     plt.colorbar()
+     plt.axis('off')
+     plt.show()
+
+
+ def assert_states_shape(states, phi_target):
+     """Check that every state shares phi_target's shape. Returns True if OK, False otherwise."""
+     if not states:
+         print("No states provided.")
+         return False
+     target_shape = tuple(phi_target.shape)
+     for i, s in enumerate(states):
+         if tuple(s.shape) != target_shape:
+             print(f"State {i} shape mismatch: {s.shape} != {target_shape}")
+             return False
+     print("All states match target shape:", target_shape)
+     return True
+
+ # --- New tests added below ---------------------------------------------------
+
+
+ def transform_effect_test(transform, phi, show_diff=False):
+     """
+     Apply a Transform-like object (must have .apply(phi)) to phi and return:
+     - changed_count: number of cells that differ after the transform
+     - diff_map: boolean array where True marks changed cells
+     If show_diff is True, also plot the diff map.
+     """
+     if not hasattr(transform, 'apply'):
+         raise ValueError("transform must have an apply(phi) method")
+     phi = np.array(phi, dtype=float)
+     phi_after = transform.apply(phi.copy())
+     if phi_after.shape != phi.shape:
+         # shapes differ: try to tile phi_after to phi's shape for comparison
+         from .solver_core import tile_transform
+         try:
+             phi_after = tile_transform(phi_after, phi.shape)
+         except Exception:
+             # fallback: broadcast if possible
+             phi_after = np.broadcast_to(phi_after, phi.shape)
+     diff_map = (phi_after != phi)
+     changed_count = int(np.sum(diff_map))
+     if show_diff:
+         plt.figure(figsize=(4, 4))
+         plt.imshow(diff_map, cmap='gray')
+         plt.title(f'Transform effect diff (changed={changed_count})')
+         plt.axis('off')
+         plt.show()
+     return changed_count, diff_map
+
+
+
+ def sigma_decrease_smoke_test(beam_func, phi_in, phi_target, atomic_library,
+                               beam_kwargs=None):
+     """
+     Run a relaxed beam (lock_coeff=0, max_fraction=1.0) to check whether sigma can decrease.
+     beam_func: callable with signature beam_func(phi_in, phi_target, atomic_library, **kwargs)
+     Returns a dict with keys:
+     - 'sigmas': sigma trace list
+     - 'decreased': True if the final sigma is below the initial sigma
+     - 'result': tuple returned by beam_func
+     """
+     beam_kwargs = dict(beam_kwargs or {})
+     # enforce relaxed settings for the smoke test
+     beam_kwargs.setdefault('lock_coeff', 0.0)
+     beam_kwargs.setdefault('max_fraction', 1.0)
+     beam_kwargs.setdefault('enable_layer_minus_one', True)
+     beam_kwargs.setdefault('boundary_source', 'target')
+     # allow all quantized symbols for this test
+     beam_kwargs.setdefault('allowed_symbols', list(range(10)))
+
+     result = beam_func(phi_in, phi_target, atomic_library, **beam_kwargs)
+     # beam_func is expected to return (T_best, phi_best, states, sigmas, logs)
+     if not result or len(result) < 4:
+         return {'sigmas': None, 'decreased': False, 'result': result}
+
+     sigmas = result[3]
+     decreased = False
+     if sigmas and len(sigmas) >= 2:
+         decreased = float(sigmas[-1]) < float(sigmas[0])
+     return {'sigmas': sigmas, 'decreased': decreased, 'result': result}
+
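The `decreased` flag reduces to an endpoint comparison on the trace; a minimal sketch with a fabricated σ trace:

```python
# Fabricated σ trace; the smoke test only compares the two endpoints.
sigmas = [42.0, 30.0, 18.5, 18.5]
decreased = bool(sigmas) and len(sigmas) >= 2 and float(sigmas[-1]) < float(sigmas[0])
print("sigma decreased:", decreased)
```

Note this deliberately ignores non-monotone middles: a trace that dips and recovers above its start would report no decrease.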
+
+
+ def run_all_quick_checks(states, logs, phi_target, l1_module=None, lock_coeff=0.01,
+                          beam_smoke_runner=None, phi_in=None, atomic_library=None):
+     """
+     Convenience runner that executes the basic checks and, if beam_smoke_runner
+     is provided, the sigma smoke test. Returns a dict of results.
+     """
+     results = {}
+     results['shape_ok'] = assert_states_shape(states, phi_target)
+     try:
+         print_depth0_logs(logs)
+         results['print_logs'] = True
+     except Exception:
+         results['print_logs'] = False
+
+     results['first_accepted_score_ok'] = check_first_accepted_score(logs, lock_coeff=lock_coeff)
+     results['gate_summary'] = gate_failure_summary(logs)
+     if l1_module is not None:
+         try:
+             print("Plotting Layer-1 mask for states[0]...")
+             plot_layer1_mask(states[0], l1_module)
+             results['layer1_plotted'] = True
+         except Exception:
+             results['layer1_plotted'] = False
+
+     if beam_smoke_runner is not None and phi_in is not None and atomic_library is not None:
+         print("Running sigma decrease smoke test (relaxed beam)...")
+         smoke = sigma_decrease_smoke_test(beam_smoke_runner, phi_in, phi_target, atomic_library)
+         results['smoke_sigmas'] = smoke.get('sigmas')
+         results['smoke_decreased'] = smoke.get('decreased')
+     return results
+
+
+ def run_atomic_effects(task_input=None, params=None, target_shape=(9, 9)):
+     """
+     Build the default atomic library and report whether each transform changes
+     the provided task_input. Prints output shapes and changed-cell counts.
+
+     - task_input: a NumPy array or a small grid (list of lists). If None, a default 3x3 example is used.
+     - params: dict of parameters passed to default_atomic_factory (optional).
+     - target_shape: tuple used to construct the atomic library via default_atomic_factory.
+     """
+     # lazy imports to avoid top-level dependency issues
+     from .experiment_driver import default_atomic_factory
+     from .solver_core import initialize_potential, tile_transform
+     import numpy as _np
+
+     if task_input is None:
+         task_input = [[0, 7, 7], [7, 7, 7], [0, 7, 7]]
+     phi_in = initialize_potential(task_input)
+
+     if params is None:
+         params = {'beam_width': 6, 'max_depth': 3, 'lock_coeff': 0.0,
+                   'max_fraction': 1.0, 'enable_layer_minus_one': True,
+                   'boundary_source': 'target'}
+
+     task_stub = {'target_shape': target_shape}
+     atomic_library = default_atomic_factory(params, task_stub)
+
+     print("Testing atomic library transforms on input shape", phi_in.shape)
+     results = []
+     for T in atomic_library:
+         try:
+             phi_after = T.apply(phi_in.copy())
+         except Exception as e:
+             print(f"{repr(T)} raised exception during apply(): {e}")
+             results.append({'transform': repr(T), 'error': str(e)})
+             continue
+
+         # If shapes differ, try to tile phi_after to phi_in's shape for comparison
+         if phi_after.shape != phi_in.shape:
+             try:
+                 phi_after_resized = tile_transform(phi_after, phi_in.shape)
+             except Exception:
+                 try:
+                     phi_after_resized = _np.broadcast_to(phi_after, phi_in.shape)
+                 except Exception:
+                     phi_after_resized = None
+         else:
+             phi_after_resized = phi_after
+
+         if phi_after_resized is None:
+             changed = None
+         else:
+             diff_map = (phi_after_resized != phi_in)
+             changed = int(_np.sum(diff_map))
+
+         print(repr(T), "-> out shape", phi_after.shape, "changed cells:", changed)
+         results.append({'transform': repr(T), 'out_shape': phi_after.shape, 'changed_cells': changed})
+
+     return results
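The changed-cell count above is just an elementwise comparison after any reshaping. A self-contained sketch of that accounting, using a plain `np.rot90` in place of one of the library's transforms:

```python
import numpy as np

# The default 3x3 example grid from run_atomic_effects.
grid = np.array([[0, 7, 7],
                 [7, 7, 7],
                 [0, 7, 7]])

rotated = np.rot90(grid)       # 90° counter-clockwise, shape preserved
diff_map = (rotated != grid)   # boolean mask of changed cells
changed = int(diff_map.sum())
print("changed cells:", changed)  # 2: the two off-diagonal corners flip between 0 and 7
```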
itt_solver/transforms.py ADDED
@@ -0,0 +1,50 @@
+ import numpy as np
+ from .solver_core import tile_transform, fill_enclosed
+
+ class Transform:
+     def __init__(self, func, name):
+         self.func = func
+         self.name = name
+     def apply(self, phi):
+         return self.func(phi)
+     def __repr__(self):
+         return f"<Transform {self.name}>"
+
+ def tile_to_target_shifted(shift=(1, 1), tile_factor=3):
+     """
+     Tile the input to a canvas tile_factor x tile_factor times, apply a small roll,
+     and return the tiled canvas. For non-uniform inputs this makes the output
+     differ from a plain tiling of the input.
+     """
+     def fn(phi):
+         h_in, w_in = phi.shape
+         out_shape = (h_in * tile_factor, w_in * tile_factor)
+         tiled = tile_transform(phi, out_shape)
+         tiled = np.roll(tiled, shift=shift, axis=(0, 1))
+         return tiled
+     return Transform(fn, f"tile_to_target_shift{shift}")
+
+ def FillEnclosedHarmonic(boundary_mask=None):
+     def fn(phi):
+         bm = (phi != 0) if boundary_mask is None else boundary_mask
+         return fill_enclosed(phi, bm)
+     return Transform(fn, "FillEnclosedHarmonic")
+
+ def Rotate(k=1):
+     def fn(phi):
+         return np.rot90(phi, k)
+     return Transform(fn, f"Rotate_{90 * k}")
+
+ def Reflect(axis='h'):
+     def fn(phi):
+         if axis == 'h':
+             return np.flipud(phi)
+         return np.fliplr(phi)
+     return Transform(fn, f"Reflect_{axis}")
+
+ def ColorMap(mapping):
+     def fn(phi):
+         out = phi.copy()
+         for k, v in mapping.items():
+             out[phi == k] = v
+         return out
+     return Transform(fn, f"ColorMap_{mapping}")
itt_solver/viz.py ADDED
@@ -0,0 +1,30 @@
+ import matplotlib.pyplot as plt
+
+ def show_grid(grid, title=None, cmap='tab10', vmin=0, vmax=9):
+     plt.figure(figsize=(3, 3))
+     plt.imshow(grid, cmap=cmap, vmin=vmin, vmax=vmax)
+     plt.colorbar(fraction=0.046, pad=0.04)
+     if title:
+         plt.title(title)
+     plt.axis('off')
+     plt.show()
+
+ def show_side_by_side(grids, titles, cmap='tab10', vmin=0, vmax=9):
+     n = len(grids)
+     plt.figure(figsize=(3 * n, 3))
+     for i, (g, t) in enumerate(zip(grids, titles)):
+         plt.subplot(1, n, i + 1)
+         plt.imshow(g, cmap=cmap, vmin=vmin, vmax=vmax)
+         plt.title(t)
+         plt.axis('off')
+     plt.tight_layout()
+     plt.show()
+
+ def plot_sigma_trace(sigmas, title="σ trace"):
+     plt.figure(figsize=(4, 3))
+     plt.plot(range(len(sigmas)), sigmas, marker='o')
+     plt.xlabel("Step")
+     plt.ylabel("σ (L1 residue)")
+     plt.title(title)
+     plt.grid(True, alpha=0.3)
+     plt.show()
itt_solver/wandb_runner.py ADDED
@@ -0,0 +1,117 @@
+ import os
+ import numpy as np
+ import matplotlib.pyplot as plt
+ import wandb
+
+ from typing import Any, Dict
+ from itt_solver import experiment_driver as ed
+
+ def _save_phi_image(phi: np.ndarray, path: str):
+     """Save a small visualization of phi to path (PNG)."""
+     fig, ax = plt.subplots(figsize=(4, 4))
+     ax.imshow(phi, cmap='tab20')
+     ax.axis('off')
+     fig.savefig(path, bbox_inches='tight', dpi=150)
+     plt.close(fig)
+
+ def run_and_log_wandb(task: Dict[str, Any],
+                       atomic_library,
+                       params: Dict[str, Any],
+                       out_dir: str = "experiments",
+                       wandb_project: str = "itt_solver",
+                       wandb_entity: str | None = None,
+                       resume: bool | str = "allow"):
+     """
+     Run a single experiment via experiment_driver.run_single and log results to W&B
+     using a context manager (with wandb.init(...) as run: ...).
+
+     - task: dict with keys 'name', 'input', 'target', 'target_shape' (same format used by experiment_driver).
+     - atomic_library: list of Transform objects for the run.
+     - params: hyperparameter dict passed to run_single (also logged as the W&B config).
+     - out_dir: local directory where run_single writes artifacts (result.json, logs.json, phi_best.npy).
+     - wandb_project: W&B project name.
+     - wandb_entity: optional W&B entity (team/user).
+     - resume: passed through to wandb.init ("allow" lets a run resume if its id matches).
+     """
+     # Run the experiment locally first (this writes files under out_dir)
+     result = ed.run_single(task, atomic_library, params, out_dir)
+
+     # run_single saves timestamped files: <base>_phi_best.npy, <base>_result.json,
+     # <base>_logs.json. Recover the most recent base for this task in out_dir.
+     base_prefix = task.get('name', 'task')
+     files = sorted(f for f in os.listdir(out_dir)
+                    if f.startswith(base_prefix)
+                    and ("_result.json" in f or "_phi_best.npy" in f or "_logs.json" in f))
+     # group by base (strip whichever suffix is present)
+     bases = sorted({f.rsplit("_result", 1)[0].rsplit("_phi_best", 1)[0].rsplit("_logs", 1)[0]
+                     for f in files})
+     base = bases[-1] if bases else None  # latest base if several exist
+
+     # Prepare artifact file paths if available
+     phi_path = os.path.join(out_dir, base + "_phi_best.npy") if base else None
+     result_path = os.path.join(out_dir, base + "_result.json") if base else None
+     logs_path = os.path.join(out_dir, base + "_logs.json") if base else None
+
+     # Start the W&B run using a context manager
+     with wandb.init(project=wandb_project,
+                     entity=wandb_entity,
+                     config=params,
+                     name=f"{task.get('name', 'task')}_{wandb.util.generate_id()}",
+                     reinit=True,
+                     resume=resume) as run:
+
+         # Log scalar metrics
+         try:
+             run.log({
+                 "final_sigma": result.get("final_sigma"),
+                 "time_s": result.get("time_s"),
+                 "states_count": result.get("states_count")
+             })
+         except Exception:
+             pass
+
+         # Log the sigma trace point by point so W&B renders it as a series
+         try:
+             for step, sigma in enumerate(result.get("sigma_trace") or []):
+                 run.log({"sigma": float(sigma), "sigma_step": step})
+         except Exception:
+             pass
+
+         # Attach artifacts (phi_best, result.json, logs.json) if present
+         try:
+             art = wandb.Artifact(f"{task.get('name', 'task')}_run", type="itt_run")
+             if phi_path and os.path.exists(phi_path):
+                 art.add_file(phi_path, name="phi_best.npy")
+             if result_path and os.path.exists(result_path):
+                 art.add_file(result_path, name="result.json")
+             if logs_path and os.path.exists(logs_path):
+                 art.add_file(logs_path, name="logs.json")
+             run.log_artifact(art)
+         except Exception:
+             pass
+
+         # Log a small image preview of phi_best
+         try:
+             if phi_path and os.path.exists(phi_path):
+                 phi = np.load(phi_path)
+                 tmp_png = os.path.join(out_dir, base + "_phi_preview.png")
+                 _save_phi_image(phi, tmp_png)
+                 run.log({"phi_best_image": wandb.Image(tmp_png)})
+                 try:
+                     os.remove(tmp_png)  # preview is uploaded; the local copy can go
+                 except Exception:
+                     pass
+         except Exception:
+             pass
+
+     # Return the local result dict for further local analysis
+     return result
+
+
+ # Example usage (call from a notebook cell):
+ # from itt_solver.wandb_runner import run_and_log_wandb
+ # res = run_and_log_wandb(task, atomic_library, params, out_dir="experiments",
+ #                         wandb_project="itt_solver")
+ # print("W&B logged run, final sigma:", res.get("final_sigma"))
scripts/fix_and_inspect_logs.py ADDED
@@ -0,0 +1,107 @@
+ import glob
+ import json
+ import os
+
+ import numpy as np
+
+ def load_latest(pattern):
+     files = sorted(glob.glob(pattern))
+     return files[-1] if files else None
+
+ logs_path = load_latest("experiments/*_logs.json")
+ phi_path = load_latest("experiments/*_phi_best.npy")
+ res_path = load_latest("experiments/*_result.json")
+
+ print("logs:", logs_path)
+ print("phi_best:", phi_path)
+ print("result:", res_path)
+
+ if not logs_path:
+     raise SystemExit("No logs file found")
+
+ with open(logs_path) as fh:
+     logs = json.load(fh)
+ res = {}
+ if res_path:
+     with open(res_path) as fh:
+         res = json.load(fh)
+
+ # coerce gate values to booleans for all depth entries
+ def coerce_gates(g):
+     if not isinstance(g, dict):
+         return g
+     out = {}
+     for k, v in g.items():
+         if isinstance(v, str):
+             lv = v.strip().lower()
+             if lv in ("true", "1", "yes"):
+                 out[k] = True
+             elif lv in ("false", "0", "no"):
+                 out[k] = False
+             else:
+                 # fall back to a numeric reading; otherwise keep the raw string
+                 try:
+                     out[k] = bool(int(v))
+                 except Exception:
+                     out[k] = v
+         else:
+             out[k] = v
+     return out
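The coercion rules in `coerce_gates` can be exercised in isolation; `to_bool` below mirrors them for a single value (the name is illustrative, not part of the repo):

```python
def to_bool(v):
    """Coerce 'true'/'false'/'1'/'0'/'yes'/'no' strings to bool; leave other values alone."""
    if not isinstance(v, str):
        return v
    lv = v.strip().lower()
    if lv in ("true", "1", "yes"):
        return True
    if lv in ("false", "0", "no"):
        return False
    try:
        return bool(int(v))  # numeric strings like "7" become truthy
    except ValueError:
        return v             # anything else passes through untouched

gates = {'A_boundary': 'True', 'B_localization': 'no', 'passed': 1}
coerced = {k: to_bool(v) for k, v in gates.items()}
print(coerced)  # {'A_boundary': True, 'B_localization': False, 'passed': 1}
```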
+
+ for depth in logs:
+     for entry in depth:
+         if 'gates' in entry:
+             entry['gates'] = coerce_gates(entry['gates'])
+
+ # attach phi_best to the first accepted depth-0 entry (if not already present)
+ accepted_entry = next((e for e in logs[0] if e.get('accepted')), None)
+
+ phi = np.load(phi_path) if phi_path else None
+ if accepted_entry is not None and 'candidate_array' not in accepted_entry:
+     accepted_entry['candidate_array'] = phi.tolist() if phi is not None else None
+
+ # Recompute the residue for that candidate against a target.
+ # If the result file does not contain the target, edit TARGET_GRID below to match your task.
+ TARGET_GRID = [
+     [0, 0, 0, 0, 7, 7, 0, 7, 7],
+     [0, 0, 0, 7, 7, 7, 7, 7, 7],
+     [0, 0, 0, 0, 7, 7, 0, 7, 7],
+     [0, 7, 7, 0, 7, 7, 0, 7, 7],
+     [7, 7, 7, 7, 7, 7, 7, 7, 7],
+     [0, 7, 7, 0, 7, 7, 0, 7, 7],
+     [0, 0, 0, 0, 7, 7, 0, 7, 7],
+     [0, 7, 7, 0, 7, 7, 0, 7, 7],
+     [0, 0, 0, 0, 7, 7, 0, 7, 7],
+ ]
+ TARGET = np.array(TARGET_GRID, dtype=int)
+
+ def tile_transform(phi, out_shape):
+     # minimal tile_transform fallback: tile, then crop to out_shape
+     a = np.array(phi)
+     h_out, w_out = out_shape
+     h_in, w_in = a.shape
+     reps_h = (h_out + h_in - 1) // h_in  # ceil division
+     reps_w = (w_out + w_in - 1) // w_in
+     tiled = np.tile(a, (reps_h, reps_w))
+     return tiled[:h_out, :w_out]
+
+ if accepted_entry is not None and accepted_entry.get('candidate_array') is not None:
+     cand = np.array(accepted_entry['candidate_array'], dtype=float)
+     if cand.shape != TARGET.shape:
+         cand_resized = tile_transform(cand, TARGET.shape)
+     else:
+         cand_resized = cand
+     cand_q = np.rint(cand_resized).astype(int)
+     l1 = float(np.sum(np.abs(cand_q - TARGET)))
+     print("Recomputed L1 residue for first accepted candidate:", l1)
+     print("Candidate unique values:", np.unique(cand_q))
+     diff = (cand_q != TARGET).astype(int)
+     print("Changed cells count:", int(diff.sum()))
+     print("Diff map (1=diff):")
+     print(diff)
+ else:
+     print("No candidate array available in logs or phi_best missing.")
+
+ # write a fixed copy of the logs
+ fixed_path = logs_path.replace("_logs.json", "_logs.fixed.json")
+ with open(fixed_path, "w") as fh:
+     json.dump(logs, fh, indent=2)
+ print("Wrote fixed logs to", fixed_path)