Scrub README: remove paper-identifying references
README.md (CHANGED)
@@ -26,14 +26,6 @@ configs:
 
 Hand-authored NL→StableHLO pairs using ops outside our 10-op grammar scope (n=25, stress set).
 
-This dataset is one of six NL→MLIR benchmarks released alongside the NeurIPS
-2026 Evaluations & Datasets track paper *Cross-Dialect Generalization Without
-Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding
-for MLIR* (anonymous submission). The full suite — `MLIR-Spec-150`,
-`Linalg-Spec-30`, `StableHLO-Spec-30`, `StableHLO-Held-Out-200`,
-`StableHLO-OutOfGrammar-25`, and `MLIR-Functional-Reference-30` — totals 465
-instances across three MLIR dialects.
-
 ## Composition
 
 - **Instances**: 25
@@ -55,41 +47,19 @@ pass-rate under the dialect's verifier is the primary evaluation metric.
 
 ## Source format
 
-
-`
-
-release. The JSONL file at `data/test.jsonl` is the canonical HuggingFace
-interface; it is generated 1-to-1 from the source records.
+The JSONL file at `data/test.jsonl` is the canonical HuggingFace interface.
+MLCommons Croissant 1.0 metadata (`croissant.json`) ships alongside the
+release.
 
 ## Datasheet
 
-
-uses, distribution, and maintenance is included in the companion
-reproducibility archive (`docs/datasheets/datasheet.md`). Key points:
+Key points (full Gebru-style datasheet ships with the dataset archive):
 
 - All reference MLIR programs are verifier-clean at the time of release.
-- Hand-authored
+- Hand-authored (no crowdsourcing, no LLM-authored references).
 - Test-only — fine-tuning on these benchmarks contaminates future evaluation
   and is explicitly out of scope.
 
-## Companion artifacts
-
-- Reproducibility archive (code + scripts): `submission_artifact.tar.gz`
-  in the OpenReview attachment / Zenodo mirror.
-- Companion code repository: <will be populated at camera-ready>.
-
-## Citation
-
-```
-@inproceedings{anonymous2026crossdialect,
-  title = {Cross-Dialect Generalization Without Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding for MLIR},
-  author = {Anonymous},
-  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
-  year = {2026},
-  note = {Anonymous submission under review.}
-}
-```
-
 ## License
 
 Apache-2.0. See `LICENSE`.
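The rewritten "Source format" section keeps `data/test.jsonl` as the canonical HuggingFace interface. As a quick sanity check on that interface, a minimal JSONL reader might look like the sketch below; the example field names in the comment are hypothetical, since the actual record schema is defined by the dataset itself.

```python
import json

def load_jsonl(path):
    """Read one JSON object per non-empty line from a JSONL file."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                records.append(json.loads(line))
    return records

# Hypothetical usage; "data/test.jsonl" is the path named in the README,
# and any field names (e.g. an NL prompt plus a reference MLIR program)
# come from the dataset's own records:
#   records = load_jsonl("data/test.jsonl")
```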