plawanrath committed
Commit 7bd6633 · verified · 1 Parent(s): 615eced

Initial upload of NL→MLIR benchmark

Files changed (4):
  1. LICENSE +19 -0
  2. README.md +94 -0
  3. croissant.json +110 -0
  4. data/test.jsonl +30 -0
LICENSE ADDED
@@ -0,0 +1,19 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+ See the full Apache-2.0 text at https://www.apache.org/licenses/LICENSE-2.0.txt
+
+ Copyright (c) 2026 Anonymous (double-blind submission, NeurIPS 2026 E&D Track).
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ pretty_name: StableHLO-Spec-30
+ size_categories:
+ - n<1K
+ task_categories:
+ - text-generation
+ - text2text-generation
+ tags:
+ - mlir
+ - code-generation
+ - compiler
+ - constrained-decoding
+ - stablehlo
+ configs:
+ - config_name: default
+   data_files:
+   - split: test
+     path: data/test.jsonl
+ ---
+
+ # StableHLO-Spec-30
+
+ Hand-authored NL→StableHLO pairs across 10 op families (n=30).
+
+ This dataset is one of six NL→MLIR benchmarks released alongside the NeurIPS
+ 2026 Evaluations & Datasets track paper *Cross-Dialect Generalization Without
+ Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding
+ for MLIR* (anonymous submission). The full suite (`MLIR-Spec-150`,
+ `Linalg-Spec-30`, `StableHLO-Spec-30`, `StableHLO-Held-Out-200`,
+ `StableHLO-OutOfGrammar-25`, and `MLIR-Functional-Reference-30`) totals 465
+ instances across three MLIR dialects.
+
+ ## Composition
+
+ - **Instances**: 30
+ - **Format**: one JSON record per line in `data/test.jsonl`
+ - **Schema**: fields = `dialect`, `difficulty`, `id`, `mlir`, `nl`, `notes`
+ - **Verifier**: `stablehlo-opt v1.4.0` (upstream ground truth), with
+   `iree-compile --compile-to=input` as a substitute (50/50 concordant on a
+   stratified n=50 sample)
+ - **License**: Apache-2.0 (SPDX: Apache-2.0). No third-party IP restrictions.
+
+ ## Loading
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("plawanrath/StableHLO-Spec-30", split="test")
+ print(ds[0])
+ ```
+
+ Each record is a self-contained natural-language→MLIR pair; the verify-valid
+ pass rate under the dialect's verifier is the primary evaluation metric.
+
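The pass-rate metric above can be computed with a small harness. A minimal sketch, assuming a verifier binary such as `stablehlo-opt` is available on `PATH`; `verify_pass_rate` is a hypothetical helper name, not part of the release:

```python
import subprocess
import tempfile
from pathlib import Path

def verify_pass_rate(records, verifier_cmd):
    """Fraction of records whose `mlir` field the verifier accepts.

    `records` is an iterable of dicts with an "mlir" key (e.g. the loaded
    dataset); `verifier_cmd` is the verifier invocation as an argv list,
    e.g. ["stablehlo-opt"] (assumed; use whatever verifier you evaluate with).
    """
    passed = 0
    records = list(records)
    for rec in records:
        # Write the candidate program to a temp file the verifier can read.
        with tempfile.NamedTemporaryFile("w", suffix=".mlir", delete=False) as f:
            f.write(rec["mlir"])
            path = f.name
        try:
            result = subprocess.run(verifier_cmd + [path],
                                    capture_output=True, timeout=60)
            passed += (result.returncode == 0)  # exit 0 == verify-valid
        finally:
            Path(path).unlink(missing_ok=True)
    return passed / len(records)
```

Usage would be `verify_pass_rate(ds, ["stablehlo-opt"])` with the dataset loaded as shown above; the same harness works for model outputs by substituting generated MLIR for the reference `mlir` field.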
+ ## Source format
+
+ For paper reproducibility, the individual per-record JSON files (the
+ `examples/*.json` layout used by the companion code repository) and the
+ MLCommons Croissant 1.0 metadata (`croissant.json`) ship together with this
+ release. The JSONL file at `data/test.jsonl` is the canonical Hugging Face
+ interface; it is generated one-to-one from the source records.
+
+ ## Datasheet
+
+ A full Gebru-style datasheet covering motivation, collection, preprocessing,
+ uses, distribution, and maintenance is included in the companion
+ reproducibility archive (`docs/datasheets/datasheet.md`). Key points:
+
+ - All reference MLIR programs are verifier-clean at the time of release.
+ - Hand-authored by a single author (no crowdsourcing, no LLM-authored references).
+ - Test-only: fine-tuning on these benchmarks contaminates future evaluation
+   and is explicitly out of scope.
+
+ ## Companion artifacts
+
+ - Reproducibility archive (code + scripts): `submission_artifact.tar.gz`
+   in the OpenReview attachment / Zenodo mirror.
+ - Companion code repository: <will be populated at camera-ready>.
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{anonymous2026crossdialect,
+   title     = {Cross-Dialect Generalization Without Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding for MLIR},
+   author    = {Anonymous},
+   booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
+   year      = {2026},
+   note      = {Anonymous submission under review.}
+ }
+ ```
+
+ ## License
+
+ Apache-2.0. See `LICENSE`.
croissant.json ADDED
@@ -0,0 +1,110 @@
+ {
+   "@context": {
+     "@language": "en",
+     "@vocab": "https://schema.org/",
+     "sc": "https://schema.org/",
+     "cr": "http://mlcommons.org/croissant/",
+     "rai": "http://mlcommons.org/croissant/RAI/",
+     "dct": "http://purl.org/dc/terms/",
+     "data": {
+       "@id": "cr:data",
+       "@type": "@json"
+     },
+     "dataType": {
+       "@id": "cr:dataType",
+       "@type": "@vocab"
+     },
+     "examples": {
+       "@id": "cr:examples",
+       "@type": "@json"
+     }
+   },
+   "@type": "sc:Dataset",
+   "name": "StableHLO-Spec-30",
+   "description": "Hand-authored NL\u2192MLIR pairs for the StableHLO dialect covering 10 op families.",
+   "conformsTo": "http://mlcommons.org/croissant/1.0",
+   "license": "https://spdx.org/licenses/Apache-2.0.html",
+   "version": "1.0.0",
+   "datePublished": "2026-04-21",
+   "citeAs": "(anonymous submission to NeurIPS 2026 E&D track)",
+   "url": "<populated-at-camera-ready>",
+   "distribution": [
+     {
+       "@type": "cr:FileObject",
+       "@id": "StableHLO-Spec-30-archive",
+       "name": "StableHLO-Spec-30.zip",
+       "contentUrl": "<populated-at-camera-ready>",
+       "encodingFormat": "application/zip",
+       "sha256": "<populated-at-camera-ready>"
+     }
+   ],
+   "recordSet": [
+     {
+       "@type": "cr:RecordSet",
+       "@id": "records",
+       "name": "records",
+       "description": "One MLIR prompt/reference pair per record.",
+       "field": [
+         {
+           "@type": "cr:Field",
+           "@id": "records/id",
+           "name": "id",
+           "dataType": "sc:Text",
+           "description": "Unique record identifier."
+         },
+         {
+           "@type": "cr:Field",
+           "@id": "records/nl",
+           "name": "nl",
+           "dataType": "sc:Text",
+           "description": "Natural-language description."
+         },
+         {
+           "@type": "cr:Field",
+           "@id": "records/mlir",
+           "name": "mlir",
+           "dataType": "sc:Text",
+           "description": "Reference MLIR that verifies under mlir-opt/iree-compile."
+         },
+         {
+           "@type": "cr:Field",
+           "@id": "records/dialect",
+           "name": "dialect",
+           "dataType": "sc:Text",
+           "description": "MLIR dialect of the reference program."
+         },
+         {
+           "@type": "cr:Field",
+           "@id": "records/difficulty",
+           "name": "difficulty",
+           "dataType": "sc:Text",
+           "description": "Author-assigned difficulty or 'programmatic'."
+         }
+       ]
+     }
+   ],
+   "rai:dataCollection": "Hand-authored by the submitting author against the target MLIR dialect's ODS.",
+   "rai:dataBiases": [
+     "Author-curated: prompts reflect the submitting author's mental model of the target dialect; may under-represent op combinations not present in the spec examples.",
+     "No human-subject data; no PII; no demographic bias dimensions apply."
+   ],
+   "rai:dataLimitations": [
+     "Verify-valid pass rate measures structural validity under mlir-opt/iree-compile, not functional correctness. Programs that pass the gate may still compute the wrong function.",
+     "English natural-language descriptions only.",
+     "Small n (30-200 prompts per dataset) yields CI half-widths of ~3-10pp at p=0.5."
+   ],
+   "rai:annotationsPerExample": 0,
+   "rai:annotationDemographics": "N/A \u2014 no human annotators.",
+   "rai:personalSensitiveInformation": "None.",
+   "rai:useCases": [
+     "Evaluating NL\u2192MLIR generation systems (constrained or unconstrained) under a verifier-based pass-rate metric."
+   ],
+   "rai:excludedUseCases": [
+     "Evaluating functional correctness without an additional lowering + execution harness.",
+     "Training or fine-tuning production code-generation models without a separate held-out corpus."
+   ],
+   "extra": {
+     "size": 30,
+     "sampling": "Author-curated (single author), 10 op families."
+   }
+ }
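A quick structural sanity check of this Croissant metadata can be done with the standard library alone. A sketch; `check_croissant` is a hypothetical helper name (MLCommons' `mlcroissant` package provides full validation):

```python
import json

def check_croissant(meta: dict) -> set:
    """Check the Croissant fields this dataset card relies on.

    Returns the set of declared record-field @ids so callers can compare
    them against the JSONL schema documented in the README.
    """
    assert meta.get("conformsTo") == "http://mlcommons.org/croissant/1.0"
    assert meta.get("@type") == "sc:Dataset"
    assert meta.get("license", "").startswith("https://spdx.org/licenses/")
    return {f["@id"]
            for rs in meta.get("recordSet", [])
            for f in rs.get("field", [])}

# Usage against the file itself (path assumed relative to the repo root):
# with open("croissant.json") as fh:
#     fields = check_croissant(json.load(fh))
```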
data/test.jsonl ADDED
@@ -0,0 +1,30 @@
+ {"id": "01_add-1d", "difficulty": "easy", "nl": "Write a function that adds two 1-D f32 tensors of 16 elements using stablehlo.add.", "mlir": "module {\n func.func @a(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "canonical stablehlo.add", "dialect": "stablehlo+func"}
+ {"id": "02_add-2d-dynamic", "difficulty": "easy", "nl": "Write a function that adds two 2-D f32 tensors with dynamic shapes and returns the result.", "mlir": "module {\n func.func @add2d(%a: tensor<?x?xf32>, %b: tensor<?x?xf32>) -> tensor<?x?xf32> {\n %0 = stablehlo.add %a, %b : tensor<?x?xf32>\n return %0 : tensor<?x?xf32>\n }\n}", "notes": "dynamic-shape addition", "dialect": "stablehlo+func"}
+ {"id": "03_subtract-1d-i32", "difficulty": "easy", "nl": "Write a function that subtracts two 1-D i32 tensors elementwise.", "mlir": "module {\n func.func @sub(%a: tensor<8xi32>, %b: tensor<8xi32>) -> tensor<8xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<8xi32>\n return %0 : tensor<8xi32>\n }\n}", "notes": "integer subtraction", "dialect": "stablehlo+func"}
+ {"id": "04_multiply-2d", "difficulty": "easy", "nl": "Write a function that multiplies two 4x4 f32 tensors elementwise using stablehlo.multiply.", "mlir": "module {\n func.func @mul(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xf32>\n return %0 : tensor<4x4xf32>\n }\n}", "notes": "static 4x4 multiply", "dialect": "stablehlo+func"}
+ {"id": "05_divide-f64", "difficulty": "easy", "nl": "Write a function that divides two 1-D f64 tensors of 32 elements using stablehlo.divide.", "mlir": "module {\n func.func @div(%a: tensor<32xf64>, %b: tensor<32xf64>) -> tensor<32xf64> {\n %0 = stablehlo.divide %a, %b : tensor<32xf64>\n return %0 : tensor<32xf64>\n }\n}", "notes": "f64 division", "dialect": "stablehlo+func"}
+ {"id": "06_abs-f32", "difficulty": "easy", "nl": "Write a function that computes the elementwise absolute value of a 1-D f32 tensor.", "mlir": "module {\n func.func @ab(%a: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.abs %a : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "abs", "dialect": "stablehlo+func"}
+ {"id": "07_exp-1d", "difficulty": "easy", "nl": "Write a function that computes the elementwise exponential of a 1-D f32 tensor of 10 elements.", "mlir": "module {\n func.func @ex(%a: tensor<10xf32>) -> tensor<10xf32> {\n %0 = stablehlo.exponential %a : tensor<10xf32>\n return %0 : tensor<10xf32>\n }\n}", "notes": "exp", "dialect": "stablehlo+func"}
+ {"id": "08_abs-dynamic", "difficulty": "medium", "nl": "Write a function that computes the elementwise absolute value of a dynamic-shape 2-D f32 tensor.", "mlir": "module {\n func.func @abd(%a: tensor<?x?xf32>) -> tensor<?x?xf32> {\n %0 = stablehlo.abs %a : tensor<?x?xf32>\n return %0 : tensor<?x?xf32>\n }\n}", "notes": "dynamic abs", "dialect": "stablehlo+func"}
+ {"id": "09_transpose-2d", "difficulty": "medium", "nl": "Write a function that transposes a 4x8 f32 tensor producing an 8x4 tensor.", "mlir": "module {\n func.func @t(%a: tensor<4x8xf32>) -> tensor<8x4xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<4x8xf32>) -> tensor<8x4xf32>\n return %0 : tensor<8x4xf32>\n }\n}", "notes": "transpose 2D", "dialect": "stablehlo+func"}
+ {"id": "10_transpose-3d", "difficulty": "medium", "nl": "Write a function that transposes a 2x3x4 f32 tensor with permutation [2, 0, 1] producing a 4x2x3 tensor.", "mlir": "module {\n func.func @t3(%a: tensor<2x3x4xf32>) -> tensor<4x2x3xf32> {\n %0 = stablehlo.transpose %a, dims = [2, 0, 1] : (tensor<2x3x4xf32>) -> tensor<4x2x3xf32>\n return %0 : tensor<4x2x3xf32>\n }\n}", "notes": "3D transpose", "dialect": "stablehlo+func"}
+ {"id": "11_transpose-square", "difficulty": "easy", "nl": "Write a function that transposes a 3x3 f32 tensor.", "mlir": "module {\n func.func @t(%a: tensor<3x3xf32>) -> tensor<3x3xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<3x3xf32>) -> tensor<3x3xf32>\n return %0 : tensor<3x3xf32>\n }\n}", "notes": "square transpose", "dialect": "stablehlo+func"}
+ {"id": "12_broadcast-1d-to-2d", "difficulty": "medium", "nl": "Write a function that broadcasts a 1-D f32 tensor of 8 elements to a 4x8 2-D tensor along dimension 1.", "mlir": "module {\n func.func @b(%a: tensor<8xf32>) -> tensor<4x8xf32> {\n %0 = stablehlo.broadcast_in_dim %a, dims = [1] : (tensor<8xf32>) -> tensor<4x8xf32>\n return %0 : tensor<4x8xf32>\n }\n}", "notes": "broadcast 1D to 2D", "dialect": "stablehlo+func"}
+ {"id": "13_broadcast-scalar-to-vector", "difficulty": "medium", "nl": "Write a function that broadcasts a scalar f32 (shape [1]) to a 1-D f32 tensor of 16 elements.", "mlir": "module {\n func.func @bs(%a: tensor<1xf32>) -> tensor<16xf32> {\n %0 = stablehlo.broadcast_in_dim %a, dims = [0] : (tensor<1xf32>) -> tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "scalar broadcast", "dialect": "stablehlo+func"}
+ {"id": "14_reshape-flatten", "difficulty": "medium", "nl": "Write a function that flattens a 4x8 f32 tensor into a 1-D tensor of 32 elements.", "mlir": "module {\n func.func @r(%a: tensor<4x8xf32>) -> tensor<32xf32> {\n %0 = stablehlo.reshape %a : (tensor<4x8xf32>) -> tensor<32xf32>\n return %0 : tensor<32xf32>\n }\n}", "notes": "flatten", "dialect": "stablehlo+func"}
+ {"id": "15_reshape-2d-to-3d", "difficulty": "medium", "nl": "Write a function that reshapes a 12x8 f32 tensor into a 3x4x8 3-D tensor.", "mlir": "module {\n func.func @r(%a: tensor<12x8xf32>) -> tensor<3x4x8xf32> {\n %0 = stablehlo.reshape %a : (tensor<12x8xf32>) -> tensor<3x4x8xf32>\n return %0 : tensor<3x4x8xf32>\n }\n}", "notes": "2D to 3D reshape", "dialect": "stablehlo+func"}
+ {"id": "16_reshape-transpose-chain", "difficulty": "hard", "nl": "Write a function that reshapes a 4x8 f32 tensor into an 8x4 tensor.", "mlir": "module {\n func.func @r(%a: tensor<4x8xf32>) -> tensor<8x4xf32> {\n %0 = stablehlo.reshape %a : (tensor<4x8xf32>) -> tensor<8x4xf32>\n return %0 : tensor<8x4xf32>\n }\n}", "notes": "reshape shape change", "dialect": "stablehlo+func"}
+ {"id": "17_dot_general-matmul", "difficulty": "medium", "nl": "Write a function that performs a matrix multiplication of a 4x8 f32 tensor and an 8x16 f32 tensor using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<4x8xf32>, %b: tensor<8x16xf32>) -> tensor<4x16xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<4x8xf32>, tensor<8x16xf32>) -> tensor<4x16xf32>\n return %0 : tensor<4x16xf32>\n }\n}", "notes": "canonical matmul", "dialect": "stablehlo+func"}
+ {"id": "18_dot_general-square", "difficulty": "medium", "nl": "Write a function that multiplies two 8x8 f32 tensors using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<8x8xf32>, tensor<8x8xf32>) -> tensor<8x8xf32>\n return %0 : tensor<8x8xf32>\n }\n}", "notes": "square matmul", "dialect": "stablehlo+func"}
+ {"id": "19_dot_general-tall-thin", "difficulty": "medium", "nl": "Multiply a 128x16 f32 tensor by a 16x4 f32 tensor using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<128x16xf32>, %b: tensor<16x4xf32>) -> tensor<128x4xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<128x16xf32>, tensor<16x4xf32>) -> tensor<128x4xf32>\n return %0 : tensor<128x4xf32>\n }\n}", "notes": "tall-thin matmul", "dialect": "stablehlo+func"}
+ {"id": "20_add-multiply-chain", "difficulty": "medium", "nl": "Write a function that adds two 1-D f32 tensors and then multiplies the sum by the first input.", "mlir": "module {\n func.func @c(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n %1 = stablehlo.multiply %0, %a : tensor<16xf32>\n return %1 : tensor<16xf32>\n }\n}", "notes": "add-then-multiply", "dialect": "stablehlo+func"}
+ {"id": "21_abs-exp-chain", "difficulty": "medium", "nl": "Write a function that computes the exponential of the absolute value of a 1-D f32 tensor.", "mlir": "module {\n func.func @c(%a: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.abs %a : tensor<16xf32>\n %1 = stablehlo.exponential %0 : tensor<16xf32>\n return %1 : tensor<16xf32>\n }\n}", "notes": "abs then exp", "dialect": "stablehlo+func"}
+ {"id": "22_matmul-add-bias", "difficulty": "hard", "nl": "Matrix-multiply a 4x8 f32 tensor by an 8x16 f32 tensor, then add a 4x16 bias tensor.", "mlir": "module {\n func.func @lin(%a: tensor<4x8xf32>, %b: tensor<8x16xf32>, %bias: tensor<4x16xf32>) -> tensor<4x16xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<4x8xf32>, tensor<8x16xf32>) -> tensor<4x16xf32>\n %1 = stablehlo.add %0, %bias : tensor<4x16xf32>\n return %1 : tensor<4x16xf32>\n }\n}", "notes": "linear layer", "dialect": "stablehlo+func"}
+ {"id": "23_transpose-matmul", "difficulty": "hard", "nl": "Transpose an 8x4 f32 tensor, then matrix-multiply the result with an 8x16 f32 tensor.", "mlir": "module {\n func.func @tm(%a: tensor<8x4xf32>, %b: tensor<8x16xf32>) -> tensor<4x16xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<8x4xf32>) -> tensor<4x8xf32>\n %1 = stablehlo.dot_general %0, %b, contracting_dims = [1] x [0] : (tensor<4x8xf32>, tensor<8x16xf32>) -> tensor<4x16xf32>\n return %1 : tensor<4x16xf32>\n }\n}", "notes": "transpose+matmul", "dialect": "stablehlo+func"}
+ {"id": "24_reshape-add", "difficulty": "medium", "nl": "Reshape a 4x4 f32 tensor into a 16-element 1-D tensor, then add to an existing 16-element tensor.", "mlir": "module {\n func.func @ra(%a: tensor<4x4xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.reshape %a : (tensor<4x4xf32>) -> tensor<16xf32>\n %1 = stablehlo.add %0, %b : tensor<16xf32>\n return %1 : tensor<16xf32>\n }\n}", "notes": "reshape+add", "dialect": "stablehlo+func"}
+ {"id": "25_broadcast-multiply", "difficulty": "hard", "nl": "Broadcast a length-8 1-D f32 tensor to a 4x8 tensor, then multiply with an existing 4x8 tensor.", "mlir": "module {\n func.func @bm(%a: tensor<8xf32>, %b: tensor<4x8xf32>) -> tensor<4x8xf32> {\n %0 = stablehlo.broadcast_in_dim %a, dims = [1] : (tensor<8xf32>) -> tensor<4x8xf32>\n %1 = stablehlo.multiply %0, %b : tensor<4x8xf32>\n return %1 : tensor<4x8xf32>\n }\n}", "notes": "broadcast+multiply", "dialect": "stablehlo+func"}
+ {"id": "26_add-3d", "difficulty": "easy", "nl": "Write a function that adds two 2x3x4 f32 tensors elementwise.", "mlir": "module {\n func.func @a3(%a: tensor<2x3x4xf32>, %b: tensor<2x3x4xf32>) -> tensor<2x3x4xf32> {\n %0 = stablehlo.add %a, %b : tensor<2x3x4xf32>\n return %0 : tensor<2x3x4xf32>\n }\n}", "notes": "3D add", "dialect": "stablehlo+func"}
+ {"id": "27_subtract-bf16", "difficulty": "easy", "nl": "Write a function that subtracts two 16-element bf16 tensors elementwise.", "mlir": "module {\n func.func @s(%a: tensor<16xbf16>, %b: tensor<16xbf16>) -> tensor<16xbf16> {\n %0 = stablehlo.subtract %a, %b : tensor<16xbf16>\n return %0 : tensor<16xbf16>\n }\n}", "notes": "bf16 arithmetic", "dialect": "stablehlo+func"}
+ {"id": "28_dot_general-f16", "difficulty": "medium", "nl": "Multiply two 16x16 f16 tensors using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<16x16xf16>, %b: tensor<16x16xf16>) -> tensor<16x16xf16> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<16x16xf16>, tensor<16x16xf16>) -> tensor<16x16xf16>\n return %0 : tensor<16x16xf16>\n }\n}", "notes": "f16 matmul", "dialect": "stablehlo+func"}
+ {"id": "29_add-multiply-abs-chain", "difficulty": "hard", "nl": "Write a function that computes the absolute value of (a + b) * a for two 1-D f32 tensors.", "mlir": "module {\n func.func @c(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n %1 = stablehlo.multiply %0, %a : tensor<16xf32>\n %2 = stablehlo.abs %1 : tensor<16xf32>\n return %2 : tensor<16xf32>\n }\n}", "notes": "3-op chain", "dialect": "stablehlo+func"}
+ {"id": "30_transpose-add", "difficulty": "medium", "nl": "Transpose a 4x4 f32 tensor then add it back to the original.", "mlir": "module {\n func.func @ta(%a: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<4x4xf32>) -> tensor<4x4xf32>\n %1 = stablehlo.add %0, %a : tensor<4x4xf32>\n return %1 : tensor<4x4xf32>\n }\n}", "notes": "symmetric sum", "dialect": "stablehlo+func"}
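The records above can also be read without the `datasets` library. A minimal sketch; `load_jsonl` is a hypothetical helper and the embedded sample is illustrative, with field names taken from the schema in the README:

```python
import json

# Fields documented in the README's Composition section.
EXPECTED_FIELDS = {"id", "nl", "mlir", "dialect", "difficulty", "notes"}

def load_jsonl(text: str) -> list:
    """Parse JSONL text, checking each record carries the documented fields."""
    records = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if not line.strip():
            continue  # tolerate blank lines
        rec = json.loads(line)
        missing = EXPECTED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"line {lineno}: missing fields {sorted(missing)}")
        records.append(rec)
    return records

# Illustrative record in the shape of data/test.jsonl:
sample = ('{"id": "01_add-1d", "difficulty": "easy", "nl": "Add two tensors.", '
          '"mlir": "module {}", "notes": "", "dialect": "stablehlo+func"}')
recs = load_jsonl(sample)
```

In practice one would pass `Path("data/test.jsonl").read_text()` instead of the sample string.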