plawanrath committed on
Commit c6b4e11 · verified · Parent: c3e5dfe

Scrub README: remove paper-identifying references

Files changed (1): README.md (+5 −35)
README.md CHANGED
@@ -27,14 +27,6 @@ configs:
 
 Hand-authored functional-correctness reference set for arith, linalg+memref, and stablehlo (n=30, 10 per dialect).
 
-This dataset is one of six NL→MLIR benchmarks released alongside the NeurIPS
-2026 Evaluations & Datasets track paper *Cross-Dialect Generalization Without
-Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding
-for MLIR* (anonymous submission). The full suite — `MLIR-Spec-150`,
-`Linalg-Spec-30`, `StableHLO-Spec-30`, `StableHLO-Held-Out-200`,
-`StableHLO-OutOfGrammar-25`, and `MLIR-Functional-Reference-30` — totals 465
-instances across three MLIR dialects.
-
 ## Composition
 
 - **Instances**: 30
@@ -56,41 +48,19 @@ pass-rate under the dialect's verifier is the primary evaluation metric.
 
 ## Source format
 
-For paper reproducibility, individual per-record JSON files (the
-`examples/*.json` layout used by the companion code repository) and the
-MLCommons Croissant 1.0 metadata (`croissant.json`) ship together with the
-release. The JSONL file at `data/test.jsonl` is the canonical HuggingFace
-interface; it is generated 1-to-1 from the source records.
+The JSONL file at `data/test.jsonl` is the canonical HuggingFace interface.
+MLCommons Croissant 1.0 metadata (`croissant.json`) ships alongside the
+release.
 
 ## Datasheet
 
-A full Gebru-style datasheet covering motivation, collection, preprocessing,
-uses, distribution, and maintenance is included in the companion
-reproducibility archive (`docs/datasheets/datasheet.md`). Key points:
+Key points (full Gebru-style datasheet ships with the dataset archive):
 
 - All reference MLIR programs are verifier-clean at the time of release.
-- Hand-authored single-author (no crowdsourcing, no LLM-authored references).
+- Hand-authored (no crowdsourcing, no LLM-authored references).
 - Test-only — fine-tuning on these benchmarks contaminates future evaluation
   and is explicitly out of scope.
 
-## Companion artifacts
-
-- Reproducibility archive (code + scripts): `submission_artifact.tar.gz`
-  in the OpenReview attachment / Zenodo mirror.
-- Companion code repository: <will be populated at camera-ready>.
-
-## Citation
-
-```
-@inproceedings{anonymous2026crossdialect,
-  title = {Cross-Dialect Generalization Without Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding for MLIR},
-  author = {Anonymous},
-  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
-  year = {2026},
-  note = {Anonymous submission under review.}
-}
-```
-
 ## License
 
 Apache-2.0. See `LICENSE`.
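The retained Source format section names `data/test.jsonl` as the canonical HuggingFace interface. A JSONL file holds one JSON record per line, so it can be read with the standard library alone; a minimal sketch (the `dialect` field and the sample file name here are illustrative assumptions, since the README does not spell out the record schema):

```python
import json
from pathlib import Path


def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts, one record per line."""
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # tolerate blank lines
                records.append(json.loads(line))
    return records


# Demo with a tiny stand-in file; the real data/test.jsonl ships 30 records,
# 10 per dialect. The "dialect" field name is a hypothetical example.
sample = Path("sample_test.jsonl")
sample.write_text('{"dialect": "arith"}\n{"dialect": "stablehlo"}\n')
records = load_jsonl(sample)
print(len(records))  # 2
```

The same file also loads directly via `datasets.load_dataset("json", data_files=...)` on HuggingFace, but the stdlib route avoids any extra dependency for quick inspection.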