plawanrath committed
Commit fca04cf · verified · 1 Parent(s): c013642

Scrub README: remove paper-identifying references

Files changed (1): README.md (+5 −35)
README.md CHANGED
@@ -24,14 +24,6 @@ configs:
 
 Hand-authored NL→StableHLO pairs across 10 op families (n=30).
 
-This dataset is one of six NL→MLIR benchmarks released alongside the NeurIPS
-2026 Evaluations & Datasets track paper *Cross-Dialect Generalization Without
-Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding
-for MLIR* (anonymous submission). The full suite — `MLIR-Spec-150`,
-`Linalg-Spec-30`, `StableHLO-Spec-30`, `StableHLO-Held-Out-200`,
-`StableHLO-OutOfGrammar-25`, and `MLIR-Functional-Reference-30` — totals 465
-instances across three MLIR dialects.
-
 ## Composition
 
 - **Instances**: 30
@@ -53,41 +45,19 @@ pass-rate under the dialect's verifier is the primary evaluation metric.
 
 ## Source format
 
-For paper reproducibility, individual per-record JSON files (the
-`examples/*.json` layout used by the companion code repository) and the
-MLCommons Croissant 1.0 metadata (`croissant.json`) ship together with the
-release. The JSONL file at `data/test.jsonl` is the canonical HuggingFace
-interface; it is generated 1-to-1 from the source records.
+The JSONL file at `data/test.jsonl` is the canonical HuggingFace interface.
+MLCommons Croissant 1.0 metadata (`croissant.json`) ships alongside the
+release.
 
 ## Datasheet
 
-A full Gebru-style datasheet covering motivation, collection, preprocessing,
-uses, distribution, and maintenance is included in the companion
-reproducibility archive (`docs/datasheets/datasheet.md`). Key points:
+Key points (full Gebru-style datasheet ships with the dataset archive):
 
 - All reference MLIR programs are verifier-clean at the time of release.
-- Hand-authored single-author (no crowdsourcing, no LLM-authored references).
+- Hand-authored (no crowdsourcing, no LLM-authored references).
 - Test-only — fine-tuning on these benchmarks contaminates future evaluation
   and is explicitly out of scope.
 
-## Companion artifacts
-
-- Reproducibility archive (code + scripts): `submission_artifact.tar.gz`
-  in the OpenReview attachment / Zenodo mirror.
-- Companion code repository: <will be populated at camera-ready>.
-
-## Citation
-
-```
-@inproceedings{anonymous2026crossdialect,
-  title = {Cross-Dialect Generalization Without Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding for MLIR},
-  author = {Anonymous},
-  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
-  year = {2026},
-  note = {Anonymous submission under review.}
-}
-```
-
 ## License
 
 Apache-2.0. See `LICENSE`.
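Aside: the README names `data/test.jsonl` (JSON Lines) as the canonical interface. A minimal stdlib sketch of writing and reading such a file follows; the field names `nl` and `stablehlo` are hypothetical placeholders, since the diff does not show the actual record schema:

```python
import json
import os
import tempfile

# Hypothetical records standing in for data/test.jsonl; the real field
# names are not shown in the diff, so "nl" and "stablehlo" are assumptions.
records = [
    {"nl": "add two f32 tensors elementwise", "stablehlo": "stablehlo.add"},
    {"nl": "transpose a 2x3 tensor", "stablehlo": "stablehlo.transpose"},
]

path = os.path.join(tempfile.mkdtemp(), "test.jsonl")

# JSONL format: one JSON object per line.
with open(path, "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Reading back: parse each line independently.
with open(path, encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

assert loaded == records
print(len(loaded))  # 2
```

The same file loads directly with `datasets.load_dataset("json", data_files=path)` if the HuggingFace `datasets` library is available.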