---
license: cc-by-4.0
language: en
pretty_name: "QM7b — Quantum-Augmented (QParquet v1.0)"
tags:
  - chemistry
  - molecular-property
  - quantum-chemistry
  - quantum-machine-learning
  - quantum-kernels
  - qm7b
  - benchmark
size_categories:
  - 1K<n<10K
---

## Schema

| Column | dtype | Shape | Description |
| --- | --- | --- | --- |
| `input_id` | string | (N,) | deterministic hash of `features_packed`; join key to upstream QM7b |
| `kernel_row` | float32 | (N,) per row → (N, N) total | row of `K_q` — the quantum fidelity kernel matrix |
| `labels_quantum` | int8 | (N, 1) | `y_q` ∈ {−1, +1} — quantum-derived labels |
| `observables_1rdm` | list | (N, 21) | per-sample 1-RDM Pauli expectations ⟨X_j⟩, ⟨Y_j⟩, ⟨Z_j⟩ for j ∈ [0, 7) |
| file-level `qparquet_metadata` | JSON (parquet key-value metadata) | — | encoding, n_qubits, backend, full evaluation report, per-property MAE table, shuffled-null z-scores, citations |

Validation enforced at write time: `K_q` square and symmetric within atol = 1e-6, diagonal ≈ 1.0 within atol = 1e-3, `input_id` values unique, `observables_1rdm` shape `(N, 3·n_qubits)`.

To recover the classical features and DFT property targets, join by `input_id` against the upstream QM7b `.mat` file (`scipy.io.loadmat("qm7b.mat")`); the hashing is deterministic on `features_packed`. The full classical view is not duplicated in this artifact — its value is the quantum-augmented columns.
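The write-time invariants above are straightforward to re-check after loading. A minimal sketch, assuming only NumPy and the column layout described in the schema table (the helper function name is illustrative, not part of the dataset tooling):

```python
import numpy as np

def check_qparquet_invariants(K_q, input_ids, obs, n_qubits=7):
    """Re-check the QParquet write-time invariants on loaded columns.

    Illustrative helper: arguments are the stacked `kernel_row` column,
    the `input_id` list, and the stacked `observables_1rdm` column.
    """
    N = len(input_ids)
    # K_q must be square and symmetric within atol = 1e-6
    assert K_q.shape == (N, N), "K_q must be square"
    assert np.allclose(K_q, K_q.T, atol=1e-6), "K_q must be symmetric"
    # fidelity kernel: diagonal ~ 1.0 within atol = 1e-3
    assert np.allclose(np.diag(K_q), 1.0, atol=1e-3), "diagonal must be ~1.0"
    # join key must be unique; observables shaped (N, 3 * n_qubits)
    assert len(set(input_ids)) == N, "input_ids must be unique"
    assert obs.shape == (N, 3 * n_qubits), "bad observables_1rdm shape"
    return True
```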
---

## Loading

```python
import json

import numpy as np
import pandas as pd
import pyarrow.parquet as pq
from sklearn.svm import SVC
from sklearn.kernel_ridge import KernelRidge

# QParquet v1.0 — read the kernel matrix and quantum labels
table = pq.read_table("qm7b_quantum.parquet")
df = table.to_pandas()

K_q = np.vstack(df["kernel_row"].to_numpy()).astype(np.float32)   # (N, N)
y_q = np.vstack(df["labels_quantum"].to_numpy()).ravel().astype(np.int8)
input_ids = df["input_id"].tolist()

# File-level qparquet_metadata (encoding, evaluation, citations, …)
meta = json.loads(table.schema.metadata[b"qparquet_metadata"].decode())

# Train on the quantum-derived label channel.
# A precomputed kernel needs a square (n_train, n_train) Gram matrix at fit
# time, and (n_test, n_train) rows at score time.
clf = SVC(kernel="precomputed", C=1.0).fit(K_q[:300, :300], y_q[:300])
print(clf.score(K_q[300:, :300], y_q[300:]))

# Original-task regression: join with upstream QM7b for classical targets
# (download qm7b.mat from quantum-machine.org/data/qm7b.mat — link in metadata)
```

The dataset is a drop-in for scikit-learn precomputed-kernel pipelines: load `K_q`, train. No quantum hardware or simulator is required at inference time.

To produce `K_q` and `y_q` for *new* molecules with the ReLab SDK:

```python
import relab

# Quantum kernel matrix
K_q = relab.kernel(features_scaled, domain="molecular", n_qubits=7)

# Quantum-derived labels
y_q = relab.fit(features_scaled, domain="molecular", n_qubits=7)
```

---

## Methodology

- **Encoding**: a Heisenberg model on the molecular bond graph (`XX + YY + ZZ` couplings, one qubit per heavy atom). Coulomb off-diagonals `J_ij` map to bond couplings; diagonals `h_i = ½ Z_i^2.4` map to local fields. Reference: [arXiv:2407.14055](https://arxiv.org/abs/2407.14055) (Heisenberg encoding for graph-structured data).
- **Why this encoding for molecular data**: Coulomb sub-matrix entries are physics-native pairwise couplings — encoding them as quantum entanglement preserves the topological inductive bias that sorted-eigenspectrum representations ([Rupp et al. 2012](https://arxiv.org/abs/1109.2618)) destroy. Validated on QM7 atomization-energy regression prior to this dataset.
- **Feature scaling**: `MinMaxScaler` to `[−π, π]`, per the Fourier-bandwidth constraint of [Schuld, Sweke, Meyer 2021](https://arxiv.org/abs/2008.08605).
- **Quantum-label construction**: generalised Rayleigh quotient on `K_q` against the classical-RBF kernel `K_c`, thresholded at the median back-projection ([Huang et al. 2021 §IV](https://arxiv.org/abs/2011.01938)). Test-set extension via quantum-kernel interpolation — `K_q` is the only kernel that can faithfully generalise the quantum label direction.
- **Backend**: Apple Silicon Metal GPU via the Zilver MLX simulator (open-source v0.3.2). Statevector-exact at 7 qubits; cross-verified against a pure-NumPy reference at atol = 1e-4.

### What this kernel is, in plain language

The kernel compresses 28-dimensional Coulomb features into a 7-qubit Hilbert space and measures molecular similarity as the fidelity of two Heisenberg-evolved states. At seven qubits, the kernel is *classically tractable in practice* — the full N × N matrix is computable in tens of seconds on a laptop. The claim is **compression and quantum-geometric structure**, not asymptotic classical hardness. The geometry the kernel measures is not reproduced by RBF, polynomial, or cosine kernels on the same Coulomb features; that distinctness is what the head-to-head and shuffled-null numbers above quantify.

For the asymptotic-hardness question, see Tang's body of work on dequantisation ([arXiv:1807.04271](https://arxiv.org/abs/1807.04271); [arXiv:1910.06151](https://arxiv.org/abs/1910.06151)) and the QSVT framework ([Gilyén, Su, Low, Wiebe 2019](https://arxiv.org/abs/1806.01838)). The plain Heisenberg fidelity kernel is BQP-complete in the worst case ([Janzing & Wocjan 2007](https://arxiv.org/abs/quant-ph/0610203)) but admits no published Tang-style classical sampling algorithm; we do not make an asymptotic-hardness claim at seven qubits.
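To make the plain-language description concrete, here is a toy Heisenberg fidelity kernel at 3 qubits (the dataset uses 7, one per heavy atom). The couplings `J`, fields `h`, and evolution time are random or arbitrary stand-ins for the Coulomb-derived values; this sketches the kernel's *form*, not the shipped encoding:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(P, j, n):
    """Pauli P acting on qubit j of an n-qubit register."""
    mats = [I2] * n
    mats[j] = P
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_state(J, h, t=1.0):
    """Evolve |0...0> under H = sum_ij J_ij (XX + YY + ZZ) + sum_i h_i Z_i."""
    n = len(h)
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H += h[i] * op(Z, i, n)
        for j in range(i + 1, n):
            for P in (X, Y, Z):
                H += J[i, j] * op(P, i, n) @ op(P, j, n)
    psi0 = np.zeros(2**n, dtype=complex)
    psi0[0] = 1.0
    return expm(-1j * t * H) @ psi0

def fidelity_kernel(states):
    """K[a, b] = |<psi_a|psi_b>|^2 — symmetric, with unit diagonal."""
    S = np.array(states)
    G = S.conj() @ S.T          # Gram matrix of overlaps
    return np.abs(G) ** 2

rng = np.random.default_rng(42)
mols = []
for _ in range(4):              # four toy "molecules"
    J = rng.uniform(0.0, 1.0, size=(3, 3))
    h = rng.uniform(0.0, 1.0, size=3)
    mols.append(heisenberg_state(J, h))
K = fidelity_kernel(mols)
```

Because the evolution is unitary, every diagonal entry of `K` is exactly 1 and every off-diagonal entry lies in [0, 1], which is what the write-time validation checks exploit.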
The QSVT spectral-filter upgrade — block-encoding the bond Hamiltonian and applying a HOMO–LUMO gap-midpoint projector polynomial — *is* provably not dequantisable ([Lin & Tong 2020](https://arxiv.org/abs/2002.12508); [Martyn et al. 2021](https://arxiv.org/abs/2105.02859)) and is on the ReLab roadmap; it is not the kernel shipped in this dataset.

---

## Reproduction

Headline run:

- N_train = 300, N_test = 100, stratified across HOMO–LUMO-gap quartiles
- RNG seed = 42
- Backend: Zilver MLX simulator on Apple Silicon Metal GPU, max_qubits = 25
- `K_q` computed in 17.5 s; per-property KRR sweep in sub-second

---

## Citation

```bibtex
@dataset{relab_qm7b_quantum_2026,
  title  = {QM7b — Quantum-Augmented (QParquet v1.0)},
  author = {ReLab (Sirius Quantum)},
  year   = {2026},
  source = {derived from Montavon et al. 2013, arXiv:1305.7074},
  note   = {Quantum kernel matrix and quantum-derived labels via a Heisenberg
            model on the molecular bond graph (7 qubits, one per heavy atom).}
}
```

If you build on this dataset, please also cite the upstream QM7b source (Montavon et al. 2013) and the ReLab engine that generated the quantum-augmented columns.

## References

- Montavon, Rupp, Gobre, Vazquez-Mayagoitia, Hansen, Tkatchenko, Müller, von Lilienfeld 2013 — [arXiv:1305.7074](https://arxiv.org/abs/1305.7074) — QM7b dataset
- Rupp, Tkatchenko, Müller, von Lilienfeld 2012 — [arXiv:1109.2618](https://arxiv.org/abs/1109.2618) — Coulomb-matrix representation
- Huang et al. 2021 — [arXiv:2011.01938](https://arxiv.org/abs/2011.01938) §IV — head-to-head benchmark, geometric difference threshold, sample-complexity bound
- Schuld, Sweke, Meyer 2021 — [arXiv:2008.08605](https://arxiv.org/abs/2008.08605) — Fourier-bandwidth scaling
- Schuld 2024 — [arXiv:2403.07059](https://arxiv.org/abs/2403.07059) — geometric advantage `g(K_Q, K_C)`
- Zhao et al. 2026 — [arXiv:2604.07639](https://arxiv.org/abs/2604.07639) — compression-match framework
- Tang 2019 — [arXiv:1807.04271](https://arxiv.org/abs/1807.04271); [arXiv:1910.06151](https://arxiv.org/abs/1910.06151) — classical dequantisation
- Gilyén, Su, Low, Wiebe 2019 — [arXiv:1806.01838](https://arxiv.org/abs/1806.01838) — QSVT
- Janzing & Wocjan 2007 — [arXiv:quant-ph/0610203](https://arxiv.org/abs/quant-ph/0610203) — BQP-completeness of Hamiltonian overlap
- Lin & Tong 2020 — [arXiv:2002.12508](https://arxiv.org/abs/2002.12508) — spectral filtering via QSVT
- Martyn, Rossi, Tan, Chuang 2021 — [arXiv:2105.02859](https://arxiv.org/abs/2105.02859) — unified QSVT view of quantum algorithms
- [arXiv:2407.14055](https://arxiv.org/abs/2407.14055) — graph-Hamiltonian encoding for structured data
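The per-property KRR sweep mentioned under Reproduction can be sketched with a precomputed kernel and the headline 300/100 split. Here `K_q` is a synthetic unit-diagonal stand-in and `y` a synthetic stand-in for one QM7b property column (neither is the shipped data), and the `alpha` grid is an illustrative assumption:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic positive-definite kernel with unit diagonal, 400 "molecules"
A = rng.normal(size=(400, 400))
G = A @ A.T
d = np.sqrt(np.diag(G))
K_q = G / np.outer(d, d)

# Synthetic property target correlated with the kernel geometry
y = K_q @ rng.normal(size=400)

K_train = K_q[:300, :300]     # (n_train, n_train) Gram matrix
K_test = K_q[300:, :300]      # test rows against training columns

results = []
for a in (1e-3, 1e-2, 1e-1, 1.0):   # alpha grid is an illustrative assumption
    model = KernelRidge(kernel="precomputed", alpha=a).fit(K_train, y[:300])
    mae = mean_absolute_error(y[300:], model.predict(K_test))
    results.append((mae, a))

best_mae, best_alpha = min(results)
print(f"best MAE {best_mae:.4f} at alpha={best_alpha}")
```

Swapping in the real `K_q` and a joined QM7b property column reproduces the shape of the sweep; stratification by HOMO–LUMO-gap quartile and seed 42 are the headline-run settings.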