## Citations

```bibtex
@article{bausch_alphaqubit_2024,
  title   = {Learning high-accuracy error decoding for quantum processors},
  author  = {Bausch, Johannes and others},
  journal = {Nature},
  year    = {2024}
}
```
short_description: OpenEnv RL env that teaches an LLM to decode quantum errors.
---

# Qubit-Medic: An LLM Decoder for Quantum Error Correction

An LLM (Qwen2.5-3B-Instruct) learning to outperform a classical graph-matching decoder (PyMatching, built on the 50-year-old blossom-matching algorithm) at decoding quantum surface-code syndromes — using verifiable physics rewards, not human preferences. DeepMind's AlphaQubit (*Nature* 2024, Bausch et al.) showed a transformer can beat strong classical decoders, but it required a custom architecture and Google-scale resources. We ship a 3B-parameter open model on a free Colab T4, trained with SFT + GRPO against a real Stim simulator behind an OpenEnv HTTP contract.



## Quick links

- **HF Space (live demo + API):** [ronitraj/QuantumScribe](https://huggingface.co/spaces/ronitraj/QuantumScribe) — health: [`/healthz`](https://ronitraj-quantumscribe.hf.space/healthz)
- **Trained LoRA on the Hub:** [ronitraj/quantumscribe](https://huggingface.co/ronitraj/quantumscribe)
- **Colab notebook (actual training run):** [`notebooks/meta_final.ipynb`](notebooks/meta_final.ipynb)
- **2-min video:** <!-- TODO: replace with submission video URL -->TBD-replace
- **Blog:** <!-- TODO: replace with blog post URL -->TBD-replace
- **W&B project:** [ronitraj/QuantumScribe-GRPO](https://wandb.ai/ronitraj/QuantumScribe-GRPO) · SFT [`yli513jl`](https://wandb.ai/ronitraj/QuantumScribe-GRPO/runs/yli513jl) · GRPO [`4p7eurnc`](https://wandb.ai/ronitraj/QuantumScribe-GRPO/runs/4p7eurnc)
- **OpenEnv manifest:** [`openenv.yaml`](openenv.yaml)
- **Mini-blog (judges' walkthrough):** [`BLOG.md`](BLOG.md)

---
## What the agent learns

The agent observes a **surface-code syndrome** (detector parities from a `surface_code:rotated_memory_z` Stim circuit) and must emit a **Pauli frame** that preserves the encoded logical Z observable. Episodes are single-step: one syndrome in, one parseable correction out, scored by Stim's real physics — not a learned reward model. Across the curriculum, the policy moves from clean distance-3 codes to noisier multi-round circuits where PyMatching starts to fail.

We generate synthetic surface-code syndromes with **Stim** ([Gidney 2021](https://arxiv.org/abs/2103.02202)), the same Clifford simulator used by the AlphaQubit and Willow papers, so our training data is drawn from the same physical model as the published benchmarks — not a homemade simulator.
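The Stim-plus-PyMatching pipeline described above can be sketched with the public APIs of both libraries. This is a minimal illustration, not the repo's actual code; the parameter values are examples, and the repo's real settings live in `qubit_medic/config.py`:

```python
def pymatching_baseline_lcr(distance=3, rounds=3, p=1e-3, shots=1000, seed=5000):
    """Sample surface-code syndromes with Stim and decode them with PyMatching.

    Minimal sketch (assumes `stim` and `pymatching` are installed); returns the
    classical decoder's logical correction rate over `shots` sampled episodes.
    """
    import stim
    import pymatching

    # Same circuit family named in the text above.
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=rounds,
        after_clifford_depolarization=p,
    )
    # Build the matcher from the circuit's detector error model.
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True)
    )
    detectors, observables = circuit.compile_detector_sampler(seed=seed).sample(
        shots, separate_observables=True
    )
    predictions = matcher.decode_batch(detectors)
    # Fraction of shots where the predicted logical-Z flip matches the truth.
    return float((predictions[:, 0] == observables[:, 0]).mean())
```

Running this at the L2 settings (d=3, 3 rounds, p=1e-3) gives the PyMatching reference the LLM is scored against.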

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 48 |
|
| 49 |
+
## Environment

| Field | Value |
|---|---|
| Observation | `QubitMedicObservation` — `prompt` (text), `syndrome` bits, `level`, `episode_id`, curriculum metadata (see [`qubit_medic/server/openenv_adapter.py`](qubit_medic/server/openenv_adapter.py)) |
| Action | `QubitMedicAction` — `text` field containing the model's parseable Pauli-frame completion |
| Episode end | Single-step: terminates after one `step()` call; reward + per-component `info` returned to the trainer |
| Curriculum | L1_warmup (d=3, 1 round, p=1e-4) → L2_target (d=3, 3 rounds, p=1e-3) → L3_stretch (d=5, 5 rounds, p=1e-3), with promotion thresholds 0.80 / 0.70 / 0.30 |

Server endpoints (FastAPI, port 7860): `/reset`, `/step`, `/state`, `/schema`, `/metadata`, `/health`, `/healthz`, `/decode` (PyMatching baseline). See [`openenv.yaml`](openenv.yaml).
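For orientation, the `/reset` → `/step` round trip can be driven with nothing but the standard library. A hedged sketch: the request/response field names in the commented loop are assumptions, not taken from the server schema — check `/schema` on a running server:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:7860"  # local server; the HF Space URL works the same way


def call(endpoint, payload=None):
    """POST JSON when a payload is given, otherwise GET; decode the JSON reply."""
    data = None if payload is None else json.dumps(payload).encode()
    req = urllib.request.Request(
        BASE + endpoint,
        data=data,
        headers={"Content-Type": "application/json"},
        method="GET" if data is None else "POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Hypothetical single-step episode (field names assumed, not confirmed):
#   obs = call("/reset", {"seed": 5000})        # -> prompt + syndrome
#   out = call("/step", {"text": completion})   # -> reward + per-component info
```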
## Reward design

Five **independent, verifiable** channels (no learned reward model). Weights from [`openenv.yaml`](openenv.yaml) sum to 1.0:

| Component | Weight | What it measures | What gaming attempt it blocks |
|---|---|---|---|
| `logical_correction` | **0.40** | 1 iff the predicted Pauli frame preserves the logical Z observable (Stim ground truth) | Outputs that pass syntax checks but flip the logical qubit |
| `syndrome_consistency` | **0.20** | Hamming similarity of implied final-round detectors vs. the observed syndrome | Memorising a popular frame regardless of input syndrome |
| `hamming_overlap` | **0.20** | Mean Jaccard similarity vs. the PyMatching reference frame | Random / sparse outputs that occasionally hit logical correctness |
| `format_compliance` | **0.10** | 1 / 0.5 / 0 for full / partial / unparseable output | Free-text "thinking" with no decodable answer |
| `pymatching_beat` | **0.10** | 1 iff PyMatching is wrong **and** the LLM is right on this syndrome | Copying PyMatching: matching it scores 0 here; you have to actually beat it |

GRPO uses a **shared batch cache** so all five components score the same `(prompt, completion)` pair; details in [`qubit_medic/server/rewards.py`](qubit_medic/server/rewards.py) and [`qubit_medic/wandb_utils.py`](qubit_medic/wandb_utils.py). Note: trainer-side weights in [`qubit_medic/config.py`](qubit_medic/config.py) currently use 0.35 / 0.25 / 0.20 / 0.10 / 0.10; the manifest is the canonical environment-side weighting.
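The aggregate is then a plain weighted sum. A minimal sketch using the environment-side weights from the table (the component scores in `example` are made-up illustration values, not real episode output):

```python
# Environment-side weights from openenv.yaml (trainer-side config.py differs slightly).
WEIGHTS = {
    "logical_correction": 0.40,
    "syndrome_consistency": 0.20,
    "hamming_overlap": 0.20,
    "format_compliance": 0.10,
    "pymatching_beat": 0.10,
}


def total_reward(components):
    """Weighted sum of the five per-component scores (each in [0, 1])."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1.0
    return sum(w * components[name] for name, w in WEIGHTS.items())


example = {
    "logical_correction": 1.0,   # logically correct
    "syndrome_consistency": 0.9,
    "hamming_overlap": 0.84,
    "format_compliance": 1.0,    # fully parseable
    "pymatching_beat": 0.0,      # matched PM, did not beat it
}
print(round(total_reward(example), 3))  # 0.848
```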
---
## Results

Held-out eval on 1000 episodes at L2_target (`data/eval_grpo.json`, the source of truth):

| Metric | Value |
|--------|------:|
| `logical_correction_rate` | **0.964** |
| `format_compliance_rate` | **1.000** |
| `mean_hamming_overlap` | 0.8405 |
| `mean_total_reward` | ~0.821 |
| `exact_match_pymatching` | 0.734 |
| `pymatching_beat_rate` | 0.000 |

|  |  |
|:-:|:-:|
| *Mean total episode reward across GRPO steps; x = step, y = mean reward (illustrative trajectory).* | *Fraction of episodes where the LLM is right and PyMatching is wrong; x = step, y = beat rate.* |

> **Honest caveat.** On this slice `pymatching_beat = 0.0` — i.e. zero "beats" of PyMatching on the held-out set. High logical correction (96.4%) and overlap with the PM frame remain meaningful signals, but we are not yet claiming to outperform PyMatching at d=3. See [`qubit_medic/server/rewards.py`](qubit_medic/server/rewards.py) for definitions.

### Before / after comparison

<!-- TODO: replace with a side-by-side bar plot from the next training run that includes a base-model baseline column. -->
*Placeholder — a before/after comparison (base Qwen2.5-3B vs. SFT-only vs. SFT+GRPO) will land here after the next training run. The current eval bars and SFT curriculum mix are in the deep-dive below.*
---
## Try it

```bash
# Live HF Space (no install)
curl https://ronitraj-quantumscribe.hf.space/healthz

# Local Docker (OpenEnv server only — physics + reward, no LLM)
docker build -t qubit-medic . && docker run -p 7860:7860 qubit-medic

# Or run the Python server directly
pip install -r requirements.txt && python -m qubit_medic.server.app
# Docs at http://127.0.0.1:7860/docs

# Eval the trained adapter (needs a GPU + requirements-train.txt)
pip install -r requirements-train.txt
python -m scripts.eval --adapter ronitraj/quantumscribe --episodes 50 --level L2_target
```
---
## How it works (deep dive)

### The problem (in one story)

Qubits are noisy. You do not observe errors directly; you get **syndromes** from stabilizer measurements. A **decoder** turns syndromes into a **Pauli correction**. **PyMatching** (sparse blossom, [arXiv:2303.15933](https://arxiv.org/abs/2303.15933)) is a strong classical baseline. We train an LLM to output a parseable correction; the environment checks it with Stim and five reward functions.

### The environment (architecture)

A FastAPI app exposes an OpenEnv-style flow (see [`qubit_medic/server/app.py`](qubit_medic/server/app.py) and [`qubit_medic/server/openenv_adapter.py`](qubit_medic/server/openenv_adapter.py)):

- `reset(seed)` — sample a syndrome (curriculum), return a prompt.
- `step(text)` — parse, score rewards, return reward + per-component `info`.

Episodes are **single-step**: one completion per episode. The trainer and W&B see each reward component separately.
```text
+----------+ reset / step +---------------------------+
| trainer  | -----------> | FastAPI env (Stim + PM)   |
+----------+ <------------ +---------------------------+
```
### Elevator pitch (technical)

DeepMind's [AlphaQubit](https://www.nature.com/articles/s41586-024-08148-8) showed a transformer can beat a strong PyMatching baseline. We reimplement the *idea* with a commodity stack:

- **3B** instruction-tuned **Qwen2.5** in **4-bit** (Unsloth) + **LoRA**
- **SFT** then **GRPO** (reward from a real Stim environment, not offline labels)
- **OpenEnv**-compatible server: `/reset` / `/step` / state & schema
- **Five** logged reward components (aggregate is weighted)

| Dimension | This project (typical) | AlphaQubit (reference) |
|-----------|------------------------|------------------------|
| Decoder | 3B LM + LoRA (off-the-shelf) | Custom architecture, lab-scale data mix |
| Training signal | SFT + GRPO on env reward | Proprietary + SI1000 / Sycamore |
| Baseline | PyMatching (sparse blossom) | Same class of MWPM decoder |
| Open source | This repo + Hub weights | Research partial |
### Methodology checklist

| Concern | Status | Pointer |
|--------|--------|--------|
| … | … | … |
| Policy optimisation | GRPO | [arXiv:2402.03300](https://arxiv.org/abs/2402.03300) |
| OOD / Willow (optional) | `scripts/willow_validation.py` + `data/willow_d3.dem` | [Zenodo](https://zenodo.org/record/13359217) |
### Latest measured eval (JSON)

These numbers come from a held-out run written to `data/eval_grpo.json` (1000 episodes, L2 target, adapter path recorded in the file). They are the **source of truth** for submission claims; **do not** substitute synthetic plots for these metrics.

`pymatching_beat` is 1 only when **PyMatching is wrong on the observable** and the **LLM is right**; on this eval it is **0.0** — i.e. no "beats" on that slice — so do not claim to outperform PM here without a separate run where that rate is non-zero. High **logical correction** and overlap with the PM frame remain meaningful; interpret with the [reward definitions](qubit_medic/server/rewards.py).

Reproduce:

```bash
python -m scripts.eval --adapter /path/to/grpo/adapter --episodes 1000 --out data/eval_grpo.json
```

(Adjust `--adapter` to your checkpoint, e.g. a downloaded [ronitraj/quantumscribe](https://huggingface.co/ronitraj/quantumscribe) adapter.)
### Data in `data/`

| File | Purpose |
|------|--------|
| [data/eval_grpo.json](data/eval_grpo.json) | **Primary eval** — single JSON summary (episodes, `logical_correction_rate`, `pymatching_beat_rate`, overlaps, `level`, etc.) from `scripts.eval`. |
| [data/grpo_validation.jsonl](data/grpo_validation.jsonl) | GRPO **validation** prompts / episodes (one JSON object per line; curriculum, syndrome, seeds). |
| [data/sft_dataset_analysis.json](data/sft_dataset_analysis.json) | **SFT dataset report** — stats (completion lengths, level mix, train/val overlap, `eval_windows`). |
| [data/sft_validation.jsonl](data/sft_validation.jsonl) | SFT **held-out** set used during training. |
| [data/sft_dataset_sample.jsonl](data/sft_dataset_sample.jsonl) | Small **sample** of SFT training rows (prompt + metadata). |

Generated on demand (not always committed) after `make baselines` / SFT / Willow runs, per [.gitignore](.gitignore):

- `data/baseline_results.json` — random / zeros / PyMatching baselines
- `data/sft_dataset.jsonl` — full SFT train set (from `make sft-data` or `generate_sft_data`)
- `data/willow_validation.json`, `data/willow_d3.dem` — cross-distribution checks
### Figures in `figures/`

Provenance and regeneration: [figures/FIGURES.md](figures/FIGURES.md). The trajectory plots above are **illustrative** (from `make plots` / baseline-anchored synthetic mode), not a raw W&B export — replace them with `scripts/plot_results.py` output and real logs when you have them.

**Reward & metrics from data (reproducible)** — not time series; single-run summaries from [data/eval_grpo.json](data/eval_grpo.json) and [data/sft_dataset_analysis.json](data/sft_dataset_analysis.json). Regenerate: `python -m scripts.plot_data_figures`

| Eval metrics (held-out) | SFT curriculum mix (train split) |
|:-:|:-:|
|  |  |

*Note:* For **per-reward time series** and KL during GRPO, use the main GRPO run: [runs/4p7eurnc](https://wandb.ai/ronitraj/QuantumScribe-GRPO/runs/4p7eurnc) — e.g. `rl/reward/total_mean`, `rl/reward/logical_correction_mean`, `alarms/kl_alarm_value`.
### Baselines (no LLM)

`make baselines` writes `data/baseline_results.json` (random, all-zeros, PyMatching). `make plots` rebuilds the headline figures from that JSON (see [figures/FIGURES.md](figures/FIGURES.md)).

```bash
make baselines
make plots
```
### Reward design (config-driven)

Trainer-side weights live in **`qubit_medic/config.py` → `REWARD_WEIGHTS`** (they sum to **1.0**):

```text
total = 0.35 * logical_correction
      + ...
      + 0.10 * pymatching_beat
```

Details: [qubit_medic/server/rewards.py](qubit_medic/server/rewards.py). GRPO uses a **shared batch cache** so all five components score the *same* `(prompt, completion)` (see [`qubit_medic/wandb_utils.py`](qubit_medic/wandb_utils.py) and the trainer).
### Weights & Biases

Defaults: **`WANDB_ENTITY=ronitraj`**, **`WANDB_PROJECT=QuantumScribe-GRPO`**. Trainers use [qubit_medic/wandb_utils.py](qubit_medic/wandb_utils.py). Disable with `WANDB_DISABLED=1` or `QUBIT_MEDIC_WANDB=0`.

```bash
GROUP=my-exp make train-grpo
GROUP=my-exp make eval
```
### Reproducibility (`qubit_medic/config.py`)

| Item | Value |
|------|--------|
| … | … |
| GRPO | **1500** steps, short completions (`max_completion` 50), KL coeff **0.02**, `temperature=1.2` rollouts, etc. |
| Seeds | `42, 1337, 2024` |

**Import from `qubit_medic.config`** — do not duplicate magic numbers in scripts.
### Train and eval (local)

```bash
python3 -m venv .venv && . .venv/bin/activate
# ... install deps, then run scripts.train_sft and scripts.train_grpo (flags elided here) ...
python -m scripts.eval --adapter checkpoints/grpo --episodes 1000 --out data/eval_grpo.json
```

End-to-end: [notebooks/meta_final.ipynb](notebooks/meta_final.ipynb). Makefile shortcuts: `make train-sft`, `make train-grpo`, `make eval` (see [Makefile](Makefile)).
#### Local dev: run everything (no Docker)

**1. Base environment (CPU OK)** — OpenEnv / Stim / tests; see `requirements.txt`.

**2. OpenEnv server:**

```bash
python -m qubit_medic.server.app
# or, with auto-reload:
uvicorn qubit_medic.server.app:app --reload --host 0.0.0.0 --port 7860
```

- Docs: [http://127.0.0.1:7860/docs](http://127.0.0.1:7860/docs)
- Health: [http://127.0.0.1:7860/healthz](http://127.0.0.1:7860/healthz)

**3. Gradio grid demo (Stim + PyMatching only)** — *does not* load the trained LLM in code today; it visualises the classical decoder.

**4. Eval the trained adapter (GPU):**

```bash
python -m scripts.eval \
  --adapter ronitraj/quantumscribe \
  --max-new-tokens 160
```

- Use a **local LoRA folder** the same way: `--adapter /path/to/checkpoints/grpo/final` (the directory that contains `adapter_model.safetensors`).
- The script calls `FastLanguageModel.from_pretrained(model_name=adapter, …)`; for Hub PEFT repos, Unsloth/transformers should resolve the base from `adapter_config.json`. If loading fails, run `hf download ronitraj/quantumscribe` and point `--adapter` at the local folder.
- Do a short run first (e.g. `--episodes 5`) to confirm VRAM, then increase.

**5. What is *not* wired** — the **Docker** Space image does not install `torch`/Unsloth; the **Gradio** app's markdown mentions `QUBIT_MEDIC_ADAPTER`, but **there is no LLM inference in `app_gradio.py` yet** — use `scripts.eval` for the trained policy.
### Publish the adapter to the Hub

Released weights: **[ronitraj/quantumscribe](https://huggingface.co/ronitraj/quantumscribe)**. Load it as a PEFT adapter on the same base used for training.

Re-upload: `hf upload ronitraj/quantumscribe /path/to/final .` with Hub authentication.
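The load itself looks roughly like the following. A hedged sketch: the base-model Hub id `Qwen/Qwen2.5-3B-Instruct` is assumed from the model description above, `peft`/`transformers` must be installed, and the repo's own loading path goes through Unsloth's `FastLanguageModel` instead:

```python
def load_quantumscribe(adapter_id="ronitraj/quantumscribe"):
    """Attach the released LoRA adapter to its training base (illustrative sketch)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Base Hub id is an assumption; PEFT can also resolve it from adapter_config.json.
    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
    model = PeftModel.from_pretrained(base, adapter_id)
    tokenizer = AutoTokenizer.from_pretrained(adapter_id)
    return model, tokenizer
```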
### Space deployment

- **Space:** [ronitraj/QuantumScribe](https://huggingface.co/spaces/ronitraj/QuantumScribe)
- **Script:** `python -m scripts.deploy_to_space` — see [scripts/deploy_to_space.py](scripts/deploy_to_space.py)
- For private model pulls, set the Space secret `HF_TOKEN`.
### Cross-distribution (optional)

`python -m scripts.willow_validation` — see [scripts/willow_validation.py](scripts/willow_validation.py).
### Repository layout
|
|
|
|
|
|
|
| 370 |
|
| 371 |
```text
|
| 372 |
qubit_medic/
|
|
|
|
| 377 |
validate_env.py, generate_sft_data.py, train_sft.py, train_grpo.py, eval.py
|
| 378 |
baseline_policies.py, plot_results.py, plot_data_figures.py, animate_grid.py, willow_validation.py
|
| 379 |
format_test.py, diversity_preflight.py, deploy_to_space.py, sync_kaggle_bundle.py
|
| 380 |
+
tests/ data/ figures/ checkpoints/ notebooks/meta_final.ipynb
|
| 381 |
app_gradio.py Dockerfile openenv.yaml Makefile
|
| 382 |
```

---

## Evaluation Protocol

End-to-end evaluation protocol used for the figures in [results/comparison_table.md](results/comparison_table.md). To reproduce, see "Reproducibility commands" below.

### Episode budget

| Cohort | Cells | Episodes / cell | Total |
|---|---|---|---|
| Trained model (SFT-only + SFT+RL × 4 levels) | 8 | 500 | **4,000** |
| Baselines (zeros / random / pymatching × 4 levels) | 12 | 100 | **1,200** |
| **Total** | 20 | — | **5,200 evaluation episodes** |

(The headline 3,200 figure is for a single-adapter run: 2,000 trained + 1,200 baseline.)
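
The budget arithmetic can be sanity-checked directly:

```python
# Sanity-check the episode budget from the table above.
trained_cells = 2 * 4      # (SFT-only + SFT+RL) × 4 curriculum levels
baseline_cells = 3 * 4     # (zeros, random, pymatching) × 4 levels

full_matrix = trained_cells * 500 + baseline_cells * 100
single_adapter = 4 * 500 + baseline_cells * 100   # one adapter × 4 levels + baselines

print(full_matrix, single_adapter)  # → 5200 3200
```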

### Random seeds

Eval seed range: **5000–7199** (held out from training seeds 1–4999 and SFT-validation seeds 4242 + offset). Each (policy, level) cell uses contiguous seeds from this range, so results are bitwise reproducible.
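
One way to picture the contiguous-block scheme — the allocator below is an illustration, not the repo's actual seed logic:

```python
# Illustrative contiguous-seed allocator; not the repo's implementation.
def seed_block(cell_index: int, episodes: int, base: int = 5000) -> range:
    """Give each (policy, level) cell its own contiguous block of eval seeds."""
    start = base + cell_index * episodes
    return range(start, start + episodes)

cells = [("pymatching", lvl) for lvl in ("L1_warmup", "L2_target", "L3_stretch", "L4_stress")]
blocks = {cell: seed_block(i, 100) for i, cell in enumerate(cells)}
print(blocks[("pymatching", "L1_warmup")])  # → range(5000, 5100)
```

Because the blocks are disjoint and derived only from the cell index, re-running any single cell replays exactly the same episodes.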

### Confidence intervals

At 500 episodes per cell, a 95% Wilson CI on a 0.85-LCR estimate is approximately **±3%**. Baseline cells at 100 episodes carry a wider ±5% CI — they are deliberately cheaper because the metrics there (≥90% LCR for PyMatching, ~95%+ on L1/L2) are well separated from the trained-model regime where the improvement is tested.
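
The Wilson half-width for a given LCR estimate and episode budget can be computed directly:

```python
from math import sqrt

def wilson_halfwidth(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of the Wilson score interval for a binomial proportion."""
    denom = 1 + z * z / n
    return (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))

print(round(wilson_halfwidth(0.85, 500), 3))  # → 0.031
print(round(wilson_halfwidth(0.85, 100), 3))  # same LCR at the baseline budget
```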

### Hard-syndrome subset definition

A "hard syndrome" is an evaluation episode where the **simulated true error pattern contains ≥ 2 X|Z error qubits**. Easy syndromes (zero or one error) are where every reasonable decoder hits ~95%+ LCR; the hard subset is the cohort where MWPM ambiguity matters and trained-model contributions are most visible. The subset metric is reported as `hard_syndrome_lcr` in each per-cell JSON.
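
A sketch of the subset filter — the field names here are hypothetical, not the repo's actual episode schema:

```python
# Illustrative hard-syndrome filter; "true_error_qubits" / "correct" are assumed field names.
def is_hard(episode: dict) -> bool:
    """An episode is 'hard' when the simulated true error touches >= 2 qubits (X or Z)."""
    return len(episode["true_error_qubits"]) >= 2

episodes = [
    {"true_error_qubits": [], "correct": True},      # easy: no error
    {"true_error_qubits": [3], "correct": True},     # easy: single error
    {"true_error_qubits": [3, 7], "correct": False}, # hard: MWPM ambiguity possible
]
hard = [e for e in episodes if is_hard(e)]
hard_lcr = sum(e["correct"] for e in hard) / len(hard)
print(hard_lcr)  # → 0.0
```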

### Curriculum levels (noise-model parameters)

Defined in [`qubit_medic/config.py:CURRICULUM`](qubit_medic/config.py). All levels use the rotated surface code with a Z-memory experiment under the SI1000 noise model (Gidney & Fowler 2021).

| Level | Distance | Rounds | Physical error rate `p` | Notes |
|---|---|---|---|---|
| `L1_warmup` | 3 | 1 | 0.0005 | trivial; warmup |
| `L2_target` | 3 | 3 | 0.001 | primary benchmark (AlphaQubit Fig. 2b geometry) |
| `L3_stretch` | 5 | 5 | 0.001 | distance-5 stretch goal |
| `L4_stress` | 5 | 5 | 0.005 | 5× higher noise; eval-only stress test where baselines drop and headroom opens |
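
For reference, the table can be mirrored as a plain dict — an illustrative stand-in; the canonical definition, and its exact keys, live in `qubit_medic/config.py:CURRICULUM`:

```python
# Illustrative mirror of the curriculum table; keys are assumptions, not the repo's schema.
CURRICULUM = {
    "L1_warmup":  {"distance": 3, "rounds": 1, "p": 0.0005},
    "L2_target":  {"distance": 3, "rounds": 3, "p": 0.001},
    "L3_stretch": {"distance": 5, "rounds": 5, "p": 0.001},
    "L4_stress":  {"distance": 5, "rounds": 5, "p": 0.005},
}

# The "5× higher noise" claim for L4 relative to L3:
assert CURRICULUM["L4_stress"]["p"] == 5 * CURRICULUM["L3_stretch"]["p"]
```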

### Deployed environment

Live OpenEnv server: **[https://ronitraj-quantumscribe.hf.space](https://ronitraj-quantumscribe.hf.space)** — health probe at `/healthz`. The deployed Space currently knows L1/L2/L3 only; `L4_stress` evaluation runs locally via `scripts/eval.py` against the in-process `DecoderEnvironment`.

### Reproducibility commands

End-to-end (12 baseline cells + 4 trained-model cells + table generation) — run from the repo root:

```bash
SPACE_URL=https://ronitraj-quantumscribe.hf.space \
ADAPTER=checkpoints/grpo_v2 \
TRAINED_EPISODES=500 BASELINE_EPISODES=100 \
bash scripts/run_full_eval.sh
```

Outputs:

- `data/remote_eval/eval_remote_{policy}_{level}.json` — 12 baseline cells
- `data/trained_eval/eval_trained_{level}.json` — 4 trained-model cells
- `results/comparison_table.md` — final pivot table

Individual steps if you only need to refresh part of the matrix:

```bash
# Remote baselines on L1/L2/L3 only (Space-known levels)
python -m scripts.eval_remote --url https://ronitraj-quantumscribe.hf.space \
  --episodes 100 --levels L1_warmup L2_target L3_stretch \
  --all-policies --out-dir data/remote_eval/

# L4_stress baselines (local; Space rejects forced_level=L4_stress until redeployed)
for policy in zeros random pymatching; do
  python -m scripts.eval --policy $policy --episodes 100 \
    --level L4_stress \
    --out data/remote_eval/eval_remote_${policy}_L4_stress.json
done

# Trained-model evaluation (local; needs GPU)
for level in L1_warmup L2_target L3_stretch L4_stress; do
  python -m scripts.eval --adapter checkpoints/grpo_v2 \
    --episodes 500 --level $level \
    --out data/trained_eval/eval_trained_${level}.json
done

# Build the comparison table from whatever cells are present
python -m scripts.comparison_table_full \
  --remote-eval-dir data/remote_eval/ \
  --trained-eval-dir data/trained_eval/ \
  --output results/comparison_table.md
```

The runner is idempotent — `SKIP_BASELINES=1` reuses existing baseline JSONs; `SKIP_TRAINED=1` reuses existing trained-model JSONs.

---

## Citations

```bibtex
@article{gidney_stim_2021,
  title   = {Stim: a fast stabilizer circuit simulator},
  author  = {Gidney, Craig},
  journal = {Quantum},
  volume  = {5},
  pages   = {497},
  year    = {2021},
  doi     = {10.22331/q-2021-07-06-497},
  note    = {arXiv:2103.02202}
}
@article{bausch_alphaqubit_2024,
  title  = {Learning high-accuracy error decoding for quantum processors},
  author = {Bausch, Johannes and others},