Initial upload of NL→MLIR benchmark
- LICENSE +19 -0
- README.md +96 -0
- croissant.json +110 -0
- data/test.jsonl +200 -0
LICENSE
ADDED
@@ -0,0 +1,19 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

See the full Apache-2.0 text at https://www.apache.org/licenses/LICENSE-2.0.txt

Copyright (c) 2026 Anonymous (double-blind submission, NeurIPS 2026 E&D Track).
README.md
ADDED
@@ -0,0 +1,96 @@
---
license: apache-2.0
language:
- en
pretty_name: StableHLO-Held-Out-200
size_categories:
- n<1K
task_categories:
- text-generation
- text2text-generation
tags:
- mlir
- code-generation
- compiler
- constrained-decoding
- stablehlo
- held-out
- programmatic
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test.jsonl
---

# StableHLO-Held-Out-200

A programmatic parametric sweep over 7 StableHLO op families × 6 dtypes × 3
shape ranks (n=200, all verifier-clean).

This dataset is one of six NL→MLIR benchmarks released alongside the NeurIPS
2026 Evaluations & Datasets track paper *Cross-Dialect Generalization Without
Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding
for MLIR* (anonymous submission). The full suite (`MLIR-Spec-150`,
`Linalg-Spec-30`, `StableHLO-Spec-30`, `StableHLO-Held-Out-200`,
`StableHLO-OutOfGrammar-25`, and `MLIR-Functional-Reference-30`) totals 465
instances across three MLIR dialects.

## Composition

- **Instances**: 200
- **Format**: one JSON record per line in `data/test.jsonl`
- **Schema**: fields `dialect`, `difficulty`, `id`, `mlir`, `nl`, `notes`, `source`
- **Verifier**: `stablehlo-opt v1.4.0` (upstream ground truth), with `iree-compile --compile-to=input` as a substitute (50/50 concordant on a stratified n=50 sample)
- **License**: Apache-2.0 (SPDX: Apache-2.0). No third-party IP restrictions.

## Loading

```python
from datasets import load_dataset

ds = load_dataset("plawanrath/StableHLO-Held-Out-200", split="test")
print(ds[0])
```

Each record is a self-contained natural-language→MLIR pair; the verify-valid
pass rate under the dialect's verifier is the primary evaluation metric.

## Source format

For paper reproducibility, the individual per-record JSON files (the
`examples/*.json` layout used by the companion code repository) and the
MLCommons Croissant 1.0 metadata (`croissant.json`) ship together with the
release. The JSONL file at `data/test.jsonl` is the canonical Hugging Face
interface; it is generated one-to-one from the source records.

## Datasheet

A full Gebru-style datasheet covering motivation, collection, preprocessing,
uses, distribution, and maintenance is included in the companion
reproducibility archive (`docs/datasheets/datasheet.md`). Key points:

- All reference MLIR programs are verifier-clean at the time of release.
- Hand-authored by a single author (no crowdsourcing, no LLM-authored
  references).
- Test-only: fine-tuning on these benchmarks contaminates future evaluation
  and is explicitly out of scope.

## Companion artifacts

- Reproducibility archive (code + scripts): `submission_artifact.tar.gz`
  in the OpenReview attachment / Zenodo mirror.
- Companion code repository: <will be populated at camera-ready>.

## Citation

```
@inproceedings{anonymous2026crossdialect,
  title     = {Cross-Dialect Generalization Without Retraining: Benchmarks and Evaluation of Schema-Derived Constrained Decoding for MLIR},
  author    = {Anonymous},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
  year      = {2026},
  note      = {Anonymous submission under review.}
}
```

## License

Apache-2.0. See `LICENSE`.
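The verify-valid pass-rate metric can be computed with a small harness along the following lines. This is a sketch, not the paper's evaluation code: it assumes `stablehlo-opt` is on `PATH`, and the injectable `check` parameter is a convenience of this sketch.

```python
import json
import subprocess
import tempfile
from pathlib import Path


def verifies(mlir_text: str, cmd=("stablehlo-opt",)) -> bool:
    """Return True if the verifier accepts the module (exit code 0)."""
    with tempfile.NamedTemporaryFile("w", suffix=".mlir", delete=False) as f:
        f.write(mlir_text)
        path = f.name
    try:
        return subprocess.run([*cmd, path], capture_output=True).returncode == 0
    finally:
        Path(path).unlink(missing_ok=True)


def pass_rate(jsonl_path: str, check=verifies) -> float:
    """Fraction of records whose reference `mlir` passes the verifier."""
    with open(jsonl_path) as f:
        records = [json.loads(line) for line in f if line.strip()]
    return sum(check(r["mlir"]) for r in records) / len(records)
```

Running `pass_rate("data/test.jsonl")` against the release itself should return 1.0, since every reference program is verifier-clean at release time.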
croissant.json
ADDED
@@ -0,0 +1,110 @@
{
  "@context": {
    "@language": "en",
    "@vocab": "https://schema.org/",
    "sc": "https://schema.org/",
    "cr": "http://mlcommons.org/croissant/",
    "rai": "http://mlcommons.org/croissant/RAI/",
    "dct": "http://purl.org/dc/terms/",
    "data": {
      "@id": "cr:data",
      "@type": "@json"
    },
    "dataType": {
      "@id": "cr:dataType",
      "@type": "@vocab"
    },
    "examples": {
      "@id": "cr:examples",
      "@type": "@json"
    }
  },
  "@type": "sc:Dataset",
  "name": "StableHLO-Held-Out-200",
  "description": "Programmatically generated StableHLO programs via parametric sweep over 7 op families \u00d7 6 dtypes \u00d7 3 shape ranks \u00d7 multi-op compositions (585 candidates); kept only those that iree-compile accepts.",
  "conformsTo": "http://mlcommons.org/croissant/1.0",
  "license": "https://spdx.org/licenses/Apache-2.0.html",
  "version": "1.0.0",
  "datePublished": "2026-04-21",
  "citeAs": "(anonymous submission to NeurIPS 2026 E&D track)",
  "url": "<populated-at-camera-ready>",
  "distribution": [
    {
      "@type": "cr:FileObject",
      "@id": "StableHLO-Held-Out-200-archive",
      "name": "StableHLO-Held-Out-200.zip",
      "contentUrl": "<populated-at-camera-ready>",
      "encodingFormat": "application/zip",
      "sha256": "<populated-at-camera-ready>"
    }
  ],
  "recordSet": [
    {
      "@type": "cr:RecordSet",
      "@id": "records",
      "name": "records",
      "description": "One MLIR prompt/reference pair per record.",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "records/id",
          "name": "id",
          "dataType": "sc:Text",
          "description": "Unique record identifier."
        },
        {
          "@type": "cr:Field",
          "@id": "records/nl",
          "name": "nl",
          "dataType": "sc:Text",
          "description": "Natural-language description."
        },
        {
          "@type": "cr:Field",
          "@id": "records/mlir",
          "name": "mlir",
          "dataType": "sc:Text",
          "description": "Reference MLIR that verifies under mlir-opt/iree-compile."
        },
        {
          "@type": "cr:Field",
          "@id": "records/dialect",
          "name": "dialect",
          "dataType": "sc:Text",
          "description": "MLIR dialect of the reference program."
        },
        {
          "@type": "cr:Field",
          "@id": "records/difficulty",
          "name": "difficulty",
          "dataType": "sc:Text",
          "description": "Author-assigned difficulty or 'programmatic'."
        }
      ]
    }
  ],
  "rai:dataCollection": "Generated by parametric sweep over the target dialect's opset and filtered by verifier acceptance; no sampling from crawled corpora.",
  "rai:dataBiases": [
    "Parametric-sweep: coverage is uniform over the enumerated op \u00d7 dtype \u00d7 shape space; biases toward simple single-op programs after iree-compile filtering.",
    "No human-subject data; no PII; no demographic bias dimensions apply."
  ],
  "rai:dataLimitations": [
    "Verify-valid pass-rate measures structural validity under mlir-opt/iree-compile, not functional correctness. Programs that pass the gate may still compute the wrong function.",
    "English natural-language descriptions only.",
    "Small n (30-200 prompts per dataset) yields CI half-widths of ~3-10pp at p=0.5."
  ],
  "rai:annotationsPerExample": 0,
  "rai:annotationDemographics": "N/A \u2014 no human annotators.",
  "rai:personalSensitiveInformation": "None.",
  "rai:useCases": [
    "Evaluating NL\u2192MLIR generation systems (constrained or unconstrained) under a verifier-based pass-rate metric."
  ],
  "rai:excludedUseCases": [
    "Evaluating functional correctness without an additional lowering + execution harness.",
    "Training or fine-tuning production code-generation models without a separate held-out corpus."
  ],
  "extra": {
    "size": 200,
    "sampling": "Parametric sweep, verifier-filtered."
  }
}
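As a sanity check, the field names declared in the Croissant record set can be cross-checked against the JSONL keys. A sketch only; note that `croissant.json` intentionally declares just a subset of the seven JSONL fields.

```python
import json


def croissant_field_names(path: str) -> set:
    """Collect field names declared across every recordSet in a Croissant file."""
    with open(path) as f:
        meta = json.load(f)
    return {
        field["name"]
        for record_set in meta.get("recordSet", [])
        for field in record_set.get("field", [])
    }


def jsonl_keys(path: str) -> set:
    """Union of top-level keys across all JSONL records."""
    keys = set()
    with open(path) as f:
        for line in f:
            if line.strip():
                keys |= json.loads(line).keys()
    return keys


# Every declared Croissant field should exist in the data:
# croissant_field_names("croissant.json") <= jsonl_keys("data/test.jsonl")
```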
data/test.jsonl
ADDED
@@ -0,0 +1,200 @@
{"id": "001_ew_bin_001_add_f16_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf16>, %b: tensor<8xf16>) -> tensor<8xf16> {\n %0 = stablehlo.add %a, %b : tensor<8xf16>\n return %0 : tensor<8xf16>\n }\n}", "notes": "add f16 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "002_ew_bin_002_add_f16_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf16>, %b: tensor<16xf16>) -> tensor<16xf16> {\n %0 = stablehlo.add %a, %b : tensor<16xf16>\n return %0 : tensor<16xf16>\n }\n}", "notes": "add f16 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "003_ew_bin_003_add_f16_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf16>, %b: tensor<32xf16>) -> tensor<32xf16> {\n %0 = stablehlo.add %a, %b : tensor<32xf16>\n return %0 : tensor<32xf16>\n }\n}", "notes": "add f16 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "004_ew_bin_004_add_f16_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf16>, %b: tensor<64xf16>) -> tensor<64xf16> {\n %0 = stablehlo.add %a, %b : tensor<64xf16>\n return %0 : tensor<64xf16>\n }\n}", "notes": "add f16 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "005_ew_bin_005_add_f16_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf16>, %b: tensor<4x4xf16>) -> tensor<4x4xf16> {\n %0 = stablehlo.add %a, %b : tensor<4x4xf16>\n return %0 : tensor<4x4xf16>\n }\n}", "notes": "add f16 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "006_ew_bin_006_add_f16_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf16>, %b: tensor<8x8xf16>) -> tensor<8x8xf16> {\n %0 = stablehlo.add %a, %b : tensor<8x8xf16>\n return %0 : tensor<8x8xf16>\n }\n}", "notes": "add f16 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "007_ew_bin_007_add_f16_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf16>, %b: tensor<8x16xf16>) -> tensor<8x16xf16> {\n %0 = stablehlo.add %a, %b : tensor<8x16xf16>\n return %0 : tensor<8x16xf16>\n }\n}", "notes": "add f16 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "008_ew_bin_008_add_f16_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf16>, %b: tensor<4x4x4xf16>) -> tensor<4x4x4xf16> {\n %0 = stablehlo.add %a, %b : tensor<4x4x4xf16>\n return %0 : tensor<4x4x4xf16>\n }\n}", "notes": "add f16 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "009_ew_bin_009_add_f16_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f16 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf16>, %b: tensor<2x8x8xf16>) -> tensor<2x8x8xf16> {\n %0 = stablehlo.add %a, %b : tensor<2x8x8xf16>\n return %0 : tensor<2x8x8xf16>\n }\n}", "notes": "add f16 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "010_ew_bin_010_add_f32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf32>, %b: tensor<8xf32>) -> tensor<8xf32> {\n %0 = stablehlo.add %a, %b : tensor<8xf32>\n return %0 : tensor<8xf32>\n }\n}", "notes": "add f32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "011_ew_bin_011_add_f32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "add f32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "012_ew_bin_012_add_f32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf32>, %b: tensor<32xf32>) -> tensor<32xf32> {\n %0 = stablehlo.add %a, %b : tensor<32xf32>\n return %0 : tensor<32xf32>\n }\n}", "notes": "add f32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "013_ew_bin_013_add_f32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf32>, %b: tensor<64xf32>) -> tensor<64xf32> {\n %0 = stablehlo.add %a, %b : tensor<64xf32>\n return %0 : tensor<64xf32>\n }\n}", "notes": "add f32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "014_ew_bin_014_add_f32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.add %a, %b : tensor<4x4xf32>\n return %0 : tensor<4x4xf32>\n }\n}", "notes": "add f32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "015_ew_bin_015_add_f32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {\n %0 = stablehlo.add %a, %b : tensor<8x8xf32>\n return %0 : tensor<8x8xf32>\n }\n}", "notes": "add f32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "016_ew_bin_016_add_f32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf32>, %b: tensor<8x16xf32>) -> tensor<8x16xf32> {\n %0 = stablehlo.add %a, %b : tensor<8x16xf32>\n return %0 : tensor<8x16xf32>\n }\n}", "notes": "add f32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "017_ew_bin_017_add_f32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf32>, %b: tensor<4x4x4xf32>) -> tensor<4x4x4xf32> {\n %0 = stablehlo.add %a, %b : tensor<4x4x4xf32>\n return %0 : tensor<4x4x4xf32>\n }\n}", "notes": "add f32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "018_ew_bin_018_add_f32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf32>, %b: tensor<2x8x8xf32>) -> tensor<2x8x8xf32> {\n %0 = stablehlo.add %a, %b : tensor<2x8x8xf32>\n return %0 : tensor<2x8x8xf32>\n }\n}", "notes": "add f32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "019_ew_bin_019_add_f64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf64>, %b: tensor<8xf64>) -> tensor<8xf64> {\n %0 = stablehlo.add %a, %b : tensor<8xf64>\n return %0 : tensor<8xf64>\n }\n}", "notes": "add f64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "020_ew_bin_020_add_f64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf64>, %b: tensor<16xf64>) -> tensor<16xf64> {\n %0 = stablehlo.add %a, %b : tensor<16xf64>\n return %0 : tensor<16xf64>\n }\n}", "notes": "add f64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "021_ew_bin_021_add_f64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf64>, %b: tensor<32xf64>) -> tensor<32xf64> {\n %0 = stablehlo.add %a, %b : tensor<32xf64>\n return %0 : tensor<32xf64>\n }\n}", "notes": "add f64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "022_ew_bin_022_add_f64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf64>, %b: tensor<64xf64>) -> tensor<64xf64> {\n %0 = stablehlo.add %a, %b : tensor<64xf64>\n return %0 : tensor<64xf64>\n }\n}", "notes": "add f64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "023_ew_bin_023_add_f64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf64>, %b: tensor<4x4xf64>) -> tensor<4x4xf64> {\n %0 = stablehlo.add %a, %b : tensor<4x4xf64>\n return %0 : tensor<4x4xf64>\n }\n}", "notes": "add f64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "024_ew_bin_024_add_f64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf64>, %b: tensor<8x8xf64>) -> tensor<8x8xf64> {\n %0 = stablehlo.add %a, %b : tensor<8x8xf64>\n return %0 : tensor<8x8xf64>\n }\n}", "notes": "add f64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "025_ew_bin_025_add_f64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf64>, %b: tensor<8x16xf64>) -> tensor<8x16xf64> {\n %0 = stablehlo.add %a, %b : tensor<8x16xf64>\n return %0 : tensor<8x16xf64>\n }\n}", "notes": "add f64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "026_ew_bin_026_add_f64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf64>, %b: tensor<4x4x4xf64>) -> tensor<4x4x4xf64> {\n %0 = stablehlo.add %a, %b : tensor<4x4x4xf64>\n return %0 : tensor<4x4x4xf64>\n }\n}", "notes": "add f64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "027_ew_bin_027_add_f64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two f64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf64>, %b: tensor<2x8x8xf64>) -> tensor<2x8x8xf64> {\n %0 = stablehlo.add %a, %b : tensor<2x8x8xf64>\n return %0 : tensor<2x8x8xf64>\n }\n}", "notes": "add f64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "028_ew_bin_028_add_i8_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi8>, %b: tensor<8xi8>) -> tensor<8xi8> {\n %0 = stablehlo.add %a, %b : tensor<8xi8>\n return %0 : tensor<8xi8>\n }\n}", "notes": "add i8 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "029_ew_bin_029_add_i8_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi8>, %b: tensor<16xi8>) -> tensor<16xi8> {\n %0 = stablehlo.add %a, %b : tensor<16xi8>\n return %0 : tensor<16xi8>\n }\n}", "notes": "add i8 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "030_ew_bin_030_add_i8_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi8>, %b: tensor<32xi8>) -> tensor<32xi8> {\n %0 = stablehlo.add %a, %b : tensor<32xi8>\n return %0 : tensor<32xi8>\n }\n}", "notes": "add i8 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "031_ew_bin_031_add_i8_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi8>, %b: tensor<64xi8>) -> tensor<64xi8> {\n %0 = stablehlo.add %a, %b : tensor<64xi8>\n return %0 : tensor<64xi8>\n }\n}", "notes": "add i8 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "032_ew_bin_032_add_i8_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi8>, %b: tensor<4x4xi8>) -> tensor<4x4xi8> {\n %0 = stablehlo.add %a, %b : tensor<4x4xi8>\n return %0 : tensor<4x4xi8>\n }\n}", "notes": "add i8 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "033_ew_bin_033_add_i8_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi8>, %b: tensor<8x8xi8>) -> tensor<8x8xi8> {\n %0 = stablehlo.add %a, %b : tensor<8x8xi8>\n return %0 : tensor<8x8xi8>\n }\n}", "notes": "add i8 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "034_ew_bin_034_add_i8_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi8>, %b: tensor<8x16xi8>) -> tensor<8x16xi8> {\n %0 = stablehlo.add %a, %b : tensor<8x16xi8>\n return %0 : tensor<8x16xi8>\n }\n}", "notes": "add i8 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "035_ew_bin_035_add_i8_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi8>, %b: tensor<4x4x4xi8>) -> tensor<4x4x4xi8> {\n %0 = stablehlo.add %a, %b : tensor<4x4x4xi8>\n return %0 : tensor<4x4x4xi8>\n }\n}", "notes": "add i8 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "036_ew_bin_036_add_i8_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i8 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi8>, %b: tensor<2x8x8xi8>) -> tensor<2x8x8xi8> {\n %0 = stablehlo.add %a, %b : tensor<2x8x8xi8>\n return %0 : tensor<2x8x8xi8>\n }\n}", "notes": "add i8 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "037_ew_bin_037_add_i32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi32>, %b: tensor<8xi32>) -> tensor<8xi32> {\n %0 = stablehlo.add %a, %b : tensor<8xi32>\n return %0 : tensor<8xi32>\n }\n}", "notes": "add i32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "038_ew_bin_038_add_i32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi32>, %b: tensor<16xi32>) -> tensor<16xi32> {\n %0 = stablehlo.add %a, %b : tensor<16xi32>\n return %0 : tensor<16xi32>\n }\n}", "notes": "add i32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "039_ew_bin_039_add_i32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi32>, %b: tensor<32xi32>) -> tensor<32xi32> {\n %0 = stablehlo.add %a, %b : tensor<32xi32>\n return %0 : tensor<32xi32>\n }\n}", "notes": "add i32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "040_ew_bin_040_add_i32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi32>, %b: tensor<64xi32>) -> tensor<64xi32> {\n %0 = stablehlo.add %a, %b : tensor<64xi32>\n return %0 : tensor<64xi32>\n }\n}", "notes": "add i32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "041_ew_bin_041_add_i32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi32>, %b: tensor<4x4xi32>) -> tensor<4x4xi32> {\n %0 = stablehlo.add %a, %b : tensor<4x4xi32>\n return %0 : tensor<4x4xi32>\n }\n}", "notes": "add i32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "042_ew_bin_042_add_i32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi32>, %b: tensor<8x8xi32>) -> tensor<8x8xi32> {\n %0 = stablehlo.add %a, %b : tensor<8x8xi32>\n return %0 : tensor<8x8xi32>\n }\n}", "notes": "add i32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "043_ew_bin_043_add_i32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi32>, %b: tensor<8x16xi32>) -> tensor<8x16xi32> {\n %0 = stablehlo.add %a, %b : tensor<8x16xi32>\n return %0 : tensor<8x16xi32>\n }\n}", "notes": "add i32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "044_ew_bin_044_add_i32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi32>, %b: tensor<4x4x4xi32>) -> tensor<4x4x4xi32> {\n %0 = stablehlo.add %a, %b : tensor<4x4x4xi32>\n return %0 : tensor<4x4x4xi32>\n }\n}", "notes": "add i32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "045_ew_bin_045_add_i32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi32>, %b: tensor<2x8x8xi32>) -> tensor<2x8x8xi32> {\n %0 = stablehlo.add %a, %b : tensor<2x8x8xi32>\n return %0 : tensor<2x8x8xi32>\n }\n}", "notes": "add i32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "046_ew_bin_046_add_i64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi64>, %b: tensor<8xi64>) -> tensor<8xi64> {\n %0 = stablehlo.add %a, %b : tensor<8xi64>\n return %0 : tensor<8xi64>\n }\n}", "notes": "add i64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "047_ew_bin_047_add_i64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi64>, %b: tensor<16xi64>) -> tensor<16xi64> {\n %0 = stablehlo.add %a, %b : tensor<16xi64>\n return %0 : tensor<16xi64>\n }\n}", "notes": "add i64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "048_ew_bin_048_add_i64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi64>, %b: tensor<32xi64>) -> tensor<32xi64> {\n %0 = stablehlo.add %a, %b : tensor<32xi64>\n return %0 : tensor<32xi64>\n }\n}", "notes": "add i64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "049_ew_bin_049_add_i64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi64>, %b: tensor<64xi64>) -> tensor<64xi64> {\n %0 = stablehlo.add %a, %b : tensor<64xi64>\n return %0 : tensor<64xi64>\n }\n}", "notes": "add i64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "050_ew_bin_050_add_i64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi64>, %b: tensor<4x4xi64>) -> tensor<4x4xi64> {\n %0 = stablehlo.add %a, %b : tensor<4x4xi64>\n return %0 : tensor<4x4xi64>\n }\n}", "notes": "add i64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "051_ew_bin_051_add_i64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi64>, %b: tensor<8x8xi64>) -> tensor<8x8xi64> {\n %0 = stablehlo.add %a, %b : tensor<8x8xi64>\n return %0 : tensor<8x8xi64>\n }\n}", "notes": "add i64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "052_ew_bin_052_add_i64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi64>, %b: tensor<8x16xi64>) -> tensor<8x16xi64> {\n %0 = stablehlo.add %a, %b : tensor<8x16xi64>\n return %0 : tensor<8x16xi64>\n }\n}", "notes": "add i64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "053_ew_bin_053_add_i64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi64>, %b: tensor<4x4x4xi64>) -> tensor<4x4x4xi64> {\n %0 = stablehlo.add %a, %b : tensor<4x4x4xi64>\n return %0 : tensor<4x4x4xi64>\n }\n}", "notes": "add i64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "054_ew_bin_054_add_i64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.add elementwise to two i64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi64>, %b: tensor<2x8x8xi64>) -> tensor<2x8x8xi64> {\n %0 = stablehlo.add %a, %b : tensor<2x8x8xi64>\n return %0 : tensor<2x8x8xi64>\n }\n}", "notes": "add i64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "055_ew_bin_055_subtract_f16_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf16>, %b: tensor<8xf16>) -> tensor<8xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<8xf16>\n return %0 : tensor<8xf16>\n }\n}", "notes": "subtract f16 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "056_ew_bin_056_subtract_f16_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf16>, %b: tensor<16xf16>) -> tensor<16xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<16xf16>\n return %0 : tensor<16xf16>\n }\n}", "notes": "subtract f16 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "057_ew_bin_057_subtract_f16_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf16>, %b: tensor<32xf16>) -> tensor<32xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<32xf16>\n return %0 : tensor<32xf16>\n }\n}", "notes": "subtract f16 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "058_ew_bin_058_subtract_f16_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf16>, %b: tensor<64xf16>) -> tensor<64xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<64xf16>\n return %0 : tensor<64xf16>\n }\n}", "notes": "subtract f16 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "059_ew_bin_059_subtract_f16_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf16>, %b: tensor<4x4xf16>) -> tensor<4x4xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4xf16>\n return %0 : tensor<4x4xf16>\n }\n}", "notes": "subtract f16 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "060_ew_bin_060_subtract_f16_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf16>, %b: tensor<8x8xf16>) -> tensor<8x8xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<8x8xf16>\n return %0 : tensor<8x8xf16>\n }\n}", "notes": "subtract f16 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "061_ew_bin_061_subtract_f16_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf16>, %b: tensor<8x16xf16>) -> tensor<8x16xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<8x16xf16>\n return %0 : tensor<8x16xf16>\n }\n}", "notes": "subtract f16 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "062_ew_bin_062_subtract_f16_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf16>, %b: tensor<4x4x4xf16>) -> tensor<4x4x4xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4x4xf16>\n return %0 : tensor<4x4x4xf16>\n }\n}", "notes": "subtract f16 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "063_ew_bin_063_subtract_f16_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f16 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf16>, %b: tensor<2x8x8xf16>) -> tensor<2x8x8xf16> {\n %0 = stablehlo.subtract %a, %b : tensor<2x8x8xf16>\n return %0 : tensor<2x8x8xf16>\n }\n}", "notes": "subtract f16 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "064_ew_bin_064_subtract_f32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf32>, %b: tensor<8xf32>) -> tensor<8xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<8xf32>\n return %0 : tensor<8xf32>\n }\n}", "notes": "subtract f32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "065_ew_bin_065_subtract_f32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "subtract f32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "066_ew_bin_066_subtract_f32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf32>, %b: tensor<32xf32>) -> tensor<32xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<32xf32>\n return %0 : tensor<32xf32>\n }\n}", "notes": "subtract f32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "067_ew_bin_067_subtract_f32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf32>, %b: tensor<64xf32>) -> tensor<64xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<64xf32>\n return %0 : tensor<64xf32>\n }\n}", "notes": "subtract f32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "068_ew_bin_068_subtract_f32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4xf32>\n return %0 : tensor<4x4xf32>\n }\n}", "notes": "subtract f32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "069_ew_bin_069_subtract_f32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<8x8xf32>\n return %0 : tensor<8x8xf32>\n }\n}", "notes": "subtract f32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "070_ew_bin_070_subtract_f32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf32>, %b: tensor<8x16xf32>) -> tensor<8x16xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<8x16xf32>\n return %0 : tensor<8x16xf32>\n }\n}", "notes": "subtract f32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "071_ew_bin_071_subtract_f32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf32>, %b: tensor<4x4x4xf32>) -> tensor<4x4x4xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4x4xf32>\n return %0 : tensor<4x4x4xf32>\n }\n}", "notes": "subtract f32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "072_ew_bin_072_subtract_f32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf32>, %b: tensor<2x8x8xf32>) -> tensor<2x8x8xf32> {\n %0 = stablehlo.subtract %a, %b : tensor<2x8x8xf32>\n return %0 : tensor<2x8x8xf32>\n }\n}", "notes": "subtract f32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "073_ew_bin_073_subtract_f64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf64>, %b: tensor<8xf64>) -> tensor<8xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<8xf64>\n return %0 : tensor<8xf64>\n }\n}", "notes": "subtract f64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "074_ew_bin_074_subtract_f64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf64>, %b: tensor<16xf64>) -> tensor<16xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<16xf64>\n return %0 : tensor<16xf64>\n }\n}", "notes": "subtract f64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "075_ew_bin_075_subtract_f64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf64>, %b: tensor<32xf64>) -> tensor<32xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<32xf64>\n return %0 : tensor<32xf64>\n }\n}", "notes": "subtract f64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "076_ew_bin_076_subtract_f64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf64>, %b: tensor<64xf64>) -> tensor<64xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<64xf64>\n return %0 : tensor<64xf64>\n }\n}", "notes": "subtract f64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "077_ew_bin_077_subtract_f64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf64>, %b: tensor<4x4xf64>) -> tensor<4x4xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4xf64>\n return %0 : tensor<4x4xf64>\n }\n}", "notes": "subtract f64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "078_ew_bin_078_subtract_f64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf64>, %b: tensor<8x8xf64>) -> tensor<8x8xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<8x8xf64>\n return %0 : tensor<8x8xf64>\n }\n}", "notes": "subtract f64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "079_ew_bin_079_subtract_f64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf64>, %b: tensor<8x16xf64>) -> tensor<8x16xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<8x16xf64>\n return %0 : tensor<8x16xf64>\n }\n}", "notes": "subtract f64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "080_ew_bin_080_subtract_f64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf64>, %b: tensor<4x4x4xf64>) -> tensor<4x4x4xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4x4xf64>\n return %0 : tensor<4x4x4xf64>\n }\n}", "notes": "subtract f64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "081_ew_bin_081_subtract_f64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two f64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf64>, %b: tensor<2x8x8xf64>) -> tensor<2x8x8xf64> {\n %0 = stablehlo.subtract %a, %b : tensor<2x8x8xf64>\n return %0 : tensor<2x8x8xf64>\n }\n}", "notes": "subtract f64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "082_ew_bin_082_subtract_i8_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi8>, %b: tensor<8xi8>) -> tensor<8xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<8xi8>\n return %0 : tensor<8xi8>\n }\n}", "notes": "subtract i8 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "083_ew_bin_083_subtract_i8_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi8>, %b: tensor<16xi8>) -> tensor<16xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<16xi8>\n return %0 : tensor<16xi8>\n }\n}", "notes": "subtract i8 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "084_ew_bin_084_subtract_i8_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi8>, %b: tensor<32xi8>) -> tensor<32xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<32xi8>\n return %0 : tensor<32xi8>\n }\n}", "notes": "subtract i8 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "085_ew_bin_085_subtract_i8_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi8>, %b: tensor<64xi8>) -> tensor<64xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<64xi8>\n return %0 : tensor<64xi8>\n }\n}", "notes": "subtract i8 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "086_ew_bin_086_subtract_i8_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi8>, %b: tensor<4x4xi8>) -> tensor<4x4xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4xi8>\n return %0 : tensor<4x4xi8>\n }\n}", "notes": "subtract i8 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "087_ew_bin_087_subtract_i8_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi8>, %b: tensor<8x8xi8>) -> tensor<8x8xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<8x8xi8>\n return %0 : tensor<8x8xi8>\n }\n}", "notes": "subtract i8 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "088_ew_bin_088_subtract_i8_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi8>, %b: tensor<8x16xi8>) -> tensor<8x16xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<8x16xi8>\n return %0 : tensor<8x16xi8>\n }\n}", "notes": "subtract i8 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "089_ew_bin_089_subtract_i8_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi8>, %b: tensor<4x4x4xi8>) -> tensor<4x4x4xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4x4xi8>\n return %0 : tensor<4x4x4xi8>\n }\n}", "notes": "subtract i8 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "090_ew_bin_090_subtract_i8_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i8 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi8>, %b: tensor<2x8x8xi8>) -> tensor<2x8x8xi8> {\n %0 = stablehlo.subtract %a, %b : tensor<2x8x8xi8>\n return %0 : tensor<2x8x8xi8>\n }\n}", "notes": "subtract i8 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "091_ew_bin_091_subtract_i32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi32>, %b: tensor<8xi32>) -> tensor<8xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<8xi32>\n return %0 : tensor<8xi32>\n }\n}", "notes": "subtract i32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "092_ew_bin_092_subtract_i32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi32>, %b: tensor<16xi32>) -> tensor<16xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<16xi32>\n return %0 : tensor<16xi32>\n }\n}", "notes": "subtract i32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "093_ew_bin_093_subtract_i32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi32>, %b: tensor<32xi32>) -> tensor<32xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<32xi32>\n return %0 : tensor<32xi32>\n }\n}", "notes": "subtract i32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "094_ew_bin_094_subtract_i32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi32>, %b: tensor<64xi32>) -> tensor<64xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<64xi32>\n return %0 : tensor<64xi32>\n }\n}", "notes": "subtract i32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "095_ew_bin_095_subtract_i32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi32>, %b: tensor<4x4xi32>) -> tensor<4x4xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4xi32>\n return %0 : tensor<4x4xi32>\n }\n}", "notes": "subtract i32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "096_ew_bin_096_subtract_i32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi32>, %b: tensor<8x8xi32>) -> tensor<8x8xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<8x8xi32>\n return %0 : tensor<8x8xi32>\n }\n}", "notes": "subtract i32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "097_ew_bin_097_subtract_i32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi32>, %b: tensor<8x16xi32>) -> tensor<8x16xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<8x16xi32>\n return %0 : tensor<8x16xi32>\n }\n}", "notes": "subtract i32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "098_ew_bin_098_subtract_i32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi32>, %b: tensor<4x4x4xi32>) -> tensor<4x4x4xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4x4xi32>\n return %0 : tensor<4x4x4xi32>\n }\n}", "notes": "subtract i32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "099_ew_bin_099_subtract_i32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi32>, %b: tensor<2x8x8xi32>) -> tensor<2x8x8xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<2x8x8xi32>\n return %0 : tensor<2x8x8xi32>\n }\n}", "notes": "subtract i32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "100_ew_bin_100_subtract_i64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi64>, %b: tensor<8xi64>) -> tensor<8xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<8xi64>\n return %0 : tensor<8xi64>\n }\n}", "notes": "subtract i64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "101_ew_bin_101_subtract_i64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi64>, %b: tensor<16xi64>) -> tensor<16xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<16xi64>\n return %0 : tensor<16xi64>\n }\n}", "notes": "subtract i64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "102_ew_bin_102_subtract_i64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi64>, %b: tensor<32xi64>) -> tensor<32xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<32xi64>\n return %0 : tensor<32xi64>\n }\n}", "notes": "subtract i64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "103_ew_bin_103_subtract_i64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi64>, %b: tensor<64xi64>) -> tensor<64xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<64xi64>\n return %0 : tensor<64xi64>\n }\n}", "notes": "subtract i64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "104_ew_bin_104_subtract_i64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi64>, %b: tensor<4x4xi64>) -> tensor<4x4xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4xi64>\n return %0 : tensor<4x4xi64>\n }\n}", "notes": "subtract i64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "105_ew_bin_105_subtract_i64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi64>, %b: tensor<8x8xi64>) -> tensor<8x8xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<8x8xi64>\n return %0 : tensor<8x8xi64>\n }\n}", "notes": "subtract i64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "106_ew_bin_106_subtract_i64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi64>, %b: tensor<8x16xi64>) -> tensor<8x16xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<8x16xi64>\n return %0 : tensor<8x16xi64>\n }\n}", "notes": "subtract i64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "107_ew_bin_107_subtract_i64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi64>, %b: tensor<4x4x4xi64>) -> tensor<4x4x4xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<4x4x4xi64>\n return %0 : tensor<4x4x4xi64>\n }\n}", "notes": "subtract i64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "108_ew_bin_108_subtract_i64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.subtract elementwise to two i64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi64>, %b: tensor<2x8x8xi64>) -> tensor<2x8x8xi64> {\n %0 = stablehlo.subtract %a, %b : tensor<2x8x8xi64>\n return %0 : tensor<2x8x8xi64>\n }\n}", "notes": "subtract i64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "109_ew_bin_109_multiply_f16_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf16>, %b: tensor<8xf16>) -> tensor<8xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<8xf16>\n return %0 : tensor<8xf16>\n }\n}", "notes": "multiply f16 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "110_ew_bin_110_multiply_f16_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf16>, %b: tensor<16xf16>) -> tensor<16xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<16xf16>\n return %0 : tensor<16xf16>\n }\n}", "notes": "multiply f16 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "111_ew_bin_111_multiply_f16_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf16>, %b: tensor<32xf16>) -> tensor<32xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<32xf16>\n return %0 : tensor<32xf16>\n }\n}", "notes": "multiply f16 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "112_ew_bin_112_multiply_f16_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf16>, %b: tensor<64xf16>) -> tensor<64xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<64xf16>\n return %0 : tensor<64xf16>\n }\n}", "notes": "multiply f16 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "113_ew_bin_113_multiply_f16_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf16>, %b: tensor<4x4xf16>) -> tensor<4x4xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xf16>\n return %0 : tensor<4x4xf16>\n }\n}", "notes": "multiply f16 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "114_ew_bin_114_multiply_f16_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf16>, %b: tensor<8x8xf16>) -> tensor<8x8xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<8x8xf16>\n return %0 : tensor<8x8xf16>\n }\n}", "notes": "multiply f16 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "115_ew_bin_115_multiply_f16_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf16>, %b: tensor<8x16xf16>) -> tensor<8x16xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<8x16xf16>\n return %0 : tensor<8x16xf16>\n }\n}", "notes": "multiply f16 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "116_ew_bin_116_multiply_f16_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf16>, %b: tensor<4x4x4xf16>) -> tensor<4x4x4xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4x4xf16>\n return %0 : tensor<4x4x4xf16>\n }\n}", "notes": "multiply f16 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "117_ew_bin_117_multiply_f16_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f16 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf16>, %b: tensor<2x8x8xf16>) -> tensor<2x8x8xf16> {\n %0 = stablehlo.multiply %a, %b : tensor<2x8x8xf16>\n return %0 : tensor<2x8x8xf16>\n }\n}", "notes": "multiply f16 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "118_ew_bin_118_multiply_f32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf32>, %b: tensor<8xf32>) -> tensor<8xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<8xf32>\n return %0 : tensor<8xf32>\n }\n}", "notes": "multiply f32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "119_ew_bin_119_multiply_f32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "multiply f32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "120_ew_bin_120_multiply_f32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf32>, %b: tensor<32xf32>) -> tensor<32xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<32xf32>\n return %0 : tensor<32xf32>\n }\n}", "notes": "multiply f32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "121_ew_bin_121_multiply_f32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf32>, %b: tensor<64xf32>) -> tensor<64xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<64xf32>\n return %0 : tensor<64xf32>\n }\n}", "notes": "multiply f32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "122_ew_bin_122_multiply_f32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xf32>\n return %0 : tensor<4x4xf32>\n }\n}", "notes": "multiply f32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "123_ew_bin_123_multiply_f32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<8x8xf32>\n return %0 : tensor<8x8xf32>\n }\n}", "notes": "multiply f32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "124_ew_bin_124_multiply_f32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf32>, %b: tensor<8x16xf32>) -> tensor<8x16xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<8x16xf32>\n return %0 : tensor<8x16xf32>\n }\n}", "notes": "multiply f32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "125_ew_bin_125_multiply_f32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf32>, %b: tensor<4x4x4xf32>) -> tensor<4x4x4xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4x4xf32>\n return %0 : tensor<4x4x4xf32>\n }\n}", "notes": "multiply f32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "126_ew_bin_126_multiply_f32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf32>, %b: tensor<2x8x8xf32>) -> tensor<2x8x8xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<2x8x8xf32>\n return %0 : tensor<2x8x8xf32>\n }\n}", "notes": "multiply f32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "127_ew_bin_127_multiply_f64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf64>, %b: tensor<8xf64>) -> tensor<8xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<8xf64>\n return %0 : tensor<8xf64>\n }\n}", "notes": "multiply f64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "128_ew_bin_128_multiply_f64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf64>, %b: tensor<16xf64>) -> tensor<16xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<16xf64>\n return %0 : tensor<16xf64>\n }\n}", "notes": "multiply f64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "129_ew_bin_129_multiply_f64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf64>, %b: tensor<32xf64>) -> tensor<32xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<32xf64>\n return %0 : tensor<32xf64>\n }\n}", "notes": "multiply f64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "130_ew_bin_130_multiply_f64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf64>, %b: tensor<64xf64>) -> tensor<64xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<64xf64>\n return %0 : tensor<64xf64>\n }\n}", "notes": "multiply f64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "131_ew_bin_131_multiply_f64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf64>, %b: tensor<4x4xf64>) -> tensor<4x4xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xf64>\n return %0 : tensor<4x4xf64>\n }\n}", "notes": "multiply f64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "132_ew_bin_132_multiply_f64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf64>, %b: tensor<8x8xf64>) -> tensor<8x8xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<8x8xf64>\n return %0 : tensor<8x8xf64>\n }\n}", "notes": "multiply f64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "133_ew_bin_133_multiply_f64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf64>, %b: tensor<8x16xf64>) -> tensor<8x16xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<8x16xf64>\n return %0 : tensor<8x16xf64>\n }\n}", "notes": "multiply f64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "134_ew_bin_134_multiply_f64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf64>, %b: tensor<4x4x4xf64>) -> tensor<4x4x4xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4x4xf64>\n return %0 : tensor<4x4x4xf64>\n }\n}", "notes": "multiply f64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "135_ew_bin_135_multiply_f64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two f64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf64>, %b: tensor<2x8x8xf64>) -> tensor<2x8x8xf64> {\n %0 = stablehlo.multiply %a, %b : tensor<2x8x8xf64>\n return %0 : tensor<2x8x8xf64>\n }\n}", "notes": "multiply f64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "136_ew_bin_136_multiply_i8_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi8>, %b: tensor<8xi8>) -> tensor<8xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<8xi8>\n return %0 : tensor<8xi8>\n }\n}", "notes": "multiply i8 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "137_ew_bin_137_multiply_i8_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi8>, %b: tensor<16xi8>) -> tensor<16xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<16xi8>\n return %0 : tensor<16xi8>\n }\n}", "notes": "multiply i8 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "138_ew_bin_138_multiply_i8_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi8>, %b: tensor<32xi8>) -> tensor<32xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<32xi8>\n return %0 : tensor<32xi8>\n }\n}", "notes": "multiply i8 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "139_ew_bin_139_multiply_i8_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi8>, %b: tensor<64xi8>) -> tensor<64xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<64xi8>\n return %0 : tensor<64xi8>\n }\n}", "notes": "multiply i8 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "140_ew_bin_140_multiply_i8_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi8>, %b: tensor<4x4xi8>) -> tensor<4x4xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xi8>\n return %0 : tensor<4x4xi8>\n }\n}", "notes": "multiply i8 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "141_ew_bin_141_multiply_i8_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi8>, %b: tensor<8x8xi8>) -> tensor<8x8xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<8x8xi8>\n return %0 : tensor<8x8xi8>\n }\n}", "notes": "multiply i8 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "142_ew_bin_142_multiply_i8_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi8>, %b: tensor<8x16xi8>) -> tensor<8x16xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<8x16xi8>\n return %0 : tensor<8x16xi8>\n }\n}", "notes": "multiply i8 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "143_ew_bin_143_multiply_i8_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi8>, %b: tensor<4x4x4xi8>) -> tensor<4x4x4xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4x4xi8>\n return %0 : tensor<4x4x4xi8>\n }\n}", "notes": "multiply i8 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "144_ew_bin_144_multiply_i8_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i8 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi8>, %b: tensor<2x8x8xi8>) -> tensor<2x8x8xi8> {\n %0 = stablehlo.multiply %a, %b : tensor<2x8x8xi8>\n return %0 : tensor<2x8x8xi8>\n }\n}", "notes": "multiply i8 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "145_ew_bin_145_multiply_i32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi32>, %b: tensor<8xi32>) -> tensor<8xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<8xi32>\n return %0 : tensor<8xi32>\n }\n}", "notes": "multiply i32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "146_ew_bin_146_multiply_i32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi32>, %b: tensor<16xi32>) -> tensor<16xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<16xi32>\n return %0 : tensor<16xi32>\n }\n}", "notes": "multiply i32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "147_ew_bin_147_multiply_i32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi32>, %b: tensor<32xi32>) -> tensor<32xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<32xi32>\n return %0 : tensor<32xi32>\n }\n}", "notes": "multiply i32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "148_ew_bin_148_multiply_i32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi32>, %b: tensor<64xi32>) -> tensor<64xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<64xi32>\n return %0 : tensor<64xi32>\n }\n}", "notes": "multiply i32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "149_ew_bin_149_multiply_i32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi32>, %b: tensor<4x4xi32>) -> tensor<4x4xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xi32>\n return %0 : tensor<4x4xi32>\n }\n}", "notes": "multiply i32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "150_ew_bin_150_multiply_i32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi32>, %b: tensor<8x8xi32>) -> tensor<8x8xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<8x8xi32>\n return %0 : tensor<8x8xi32>\n }\n}", "notes": "multiply i32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "151_ew_bin_151_multiply_i32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi32>, %b: tensor<8x16xi32>) -> tensor<8x16xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<8x16xi32>\n return %0 : tensor<8x16xi32>\n }\n}", "notes": "multiply i32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "152_ew_bin_152_multiply_i32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi32>, %b: tensor<4x4x4xi32>) -> tensor<4x4x4xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4x4xi32>\n return %0 : tensor<4x4x4xi32>\n }\n}", "notes": "multiply i32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "153_ew_bin_153_multiply_i32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi32>, %b: tensor<2x8x8xi32>) -> tensor<2x8x8xi32> {\n %0 = stablehlo.multiply %a, %b : tensor<2x8x8xi32>\n return %0 : tensor<2x8x8xi32>\n }\n}", "notes": "multiply i32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "154_ew_bin_154_multiply_i64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xi64>, %b: tensor<8xi64>) -> tensor<8xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<8xi64>\n return %0 : tensor<8xi64>\n }\n}", "notes": "multiply i64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "155_ew_bin_155_multiply_i64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xi64>, %b: tensor<16xi64>) -> tensor<16xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<16xi64>\n return %0 : tensor<16xi64>\n }\n}", "notes": "multiply i64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "156_ew_bin_156_multiply_i64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xi64>, %b: tensor<32xi64>) -> tensor<32xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<32xi64>\n return %0 : tensor<32xi64>\n }\n}", "notes": "multiply i64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "157_ew_bin_157_multiply_i64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xi64>, %b: tensor<64xi64>) -> tensor<64xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<64xi64>\n return %0 : tensor<64xi64>\n }\n}", "notes": "multiply i64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "158_ew_bin_158_multiply_i64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xi64>, %b: tensor<4x4xi64>) -> tensor<4x4xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xi64>\n return %0 : tensor<4x4xi64>\n }\n}", "notes": "multiply i64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "159_ew_bin_159_multiply_i64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xi64>, %b: tensor<8x8xi64>) -> tensor<8x8xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<8x8xi64>\n return %0 : tensor<8x8xi64>\n }\n}", "notes": "multiply i64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "160_ew_bin_160_multiply_i64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xi64>, %b: tensor<8x16xi64>) -> tensor<8x16xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<8x16xi64>\n return %0 : tensor<8x16xi64>\n }\n}", "notes": "multiply i64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "161_ew_bin_161_multiply_i64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xi64>, %b: tensor<4x4x4xi64>) -> tensor<4x4x4xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4x4xi64>\n return %0 : tensor<4x4x4xi64>\n }\n}", "notes": "multiply i64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "162_ew_bin_162_multiply_i64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.multiply elementwise to two i64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xi64>, %b: tensor<2x8x8xi64>) -> tensor<2x8x8xi64> {\n %0 = stablehlo.multiply %a, %b : tensor<2x8x8xi64>\n return %0 : tensor<2x8x8xi64>\n }\n}", "notes": "multiply i64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "163_ew_bin_163_divide_f32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf32>, %b: tensor<8xf32>) -> tensor<8xf32> {\n %0 = stablehlo.divide %a, %b : tensor<8xf32>\n return %0 : tensor<8xf32>\n }\n}", "notes": "divide f32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "164_ew_bin_164_divide_f32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.divide %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "divide f32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "165_ew_bin_165_divide_f32_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf32>, %b: tensor<32xf32>) -> tensor<32xf32> {\n %0 = stablehlo.divide %a, %b : tensor<32xf32>\n return %0 : tensor<32xf32>\n }\n}", "notes": "divide f32 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "166_ew_bin_166_divide_f32_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf32>, %b: tensor<64xf32>) -> tensor<64xf32> {\n %0 = stablehlo.divide %a, %b : tensor<64xf32>\n return %0 : tensor<64xf32>\n }\n}", "notes": "divide f32 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "167_ew_bin_167_divide_f32_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.divide %a, %b : tensor<4x4xf32>\n return %0 : tensor<4x4xf32>\n }\n}", "notes": "divide f32 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "168_ew_bin_168_divide_f32_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {\n %0 = stablehlo.divide %a, %b : tensor<8x8xf32>\n return %0 : tensor<8x8xf32>\n }\n}", "notes": "divide f32 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "169_ew_bin_169_divide_f32_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf32>, %b: tensor<8x16xf32>) -> tensor<8x16xf32> {\n %0 = stablehlo.divide %a, %b : tensor<8x16xf32>\n return %0 : tensor<8x16xf32>\n }\n}", "notes": "divide f32 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "170_ew_bin_170_divide_f32_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf32>, %b: tensor<4x4x4xf32>) -> tensor<4x4x4xf32> {\n %0 = stablehlo.divide %a, %b : tensor<4x4x4xf32>\n return %0 : tensor<4x4x4xf32>\n }\n}", "notes": "divide f32 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "171_ew_bin_171_divide_f32_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f32 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf32>, %b: tensor<2x8x8xf32>) -> tensor<2x8x8xf32> {\n %0 = stablehlo.divide %a, %b : tensor<2x8x8xf32>\n return %0 : tensor<2x8x8xf32>\n }\n}", "notes": "divide f32 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "172_ew_bin_172_divide_f16_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf16>, %b: tensor<8xf16>) -> tensor<8xf16> {\n %0 = stablehlo.divide %a, %b : tensor<8xf16>\n return %0 : tensor<8xf16>\n }\n}", "notes": "divide f16 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "173_ew_bin_173_divide_f16_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf16>, %b: tensor<16xf16>) -> tensor<16xf16> {\n %0 = stablehlo.divide %a, %b : tensor<16xf16>\n return %0 : tensor<16xf16>\n }\n}", "notes": "divide f16 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "174_ew_bin_174_divide_f16_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf16>, %b: tensor<32xf16>) -> tensor<32xf16> {\n %0 = stablehlo.divide %a, %b : tensor<32xf16>\n return %0 : tensor<32xf16>\n }\n}", "notes": "divide f16 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "175_ew_bin_175_divide_f16_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf16>, %b: tensor<64xf16>) -> tensor<64xf16> {\n %0 = stablehlo.divide %a, %b : tensor<64xf16>\n return %0 : tensor<64xf16>\n }\n}", "notes": "divide f16 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "176_ew_bin_176_divide_f16_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf16>, %b: tensor<4x4xf16>) -> tensor<4x4xf16> {\n %0 = stablehlo.divide %a, %b : tensor<4x4xf16>\n return %0 : tensor<4x4xf16>\n }\n}", "notes": "divide f16 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "177_ew_bin_177_divide_f16_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf16>, %b: tensor<8x8xf16>) -> tensor<8x8xf16> {\n %0 = stablehlo.divide %a, %b : tensor<8x8xf16>\n return %0 : tensor<8x8xf16>\n }\n}", "notes": "divide f16 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "178_ew_bin_178_divide_f16_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf16>, %b: tensor<8x16xf16>) -> tensor<8x16xf16> {\n %0 = stablehlo.divide %a, %b : tensor<8x16xf16>\n return %0 : tensor<8x16xf16>\n }\n}", "notes": "divide f16 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "179_ew_bin_179_divide_f16_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf16>, %b: tensor<4x4x4xf16>) -> tensor<4x4x4xf16> {\n %0 = stablehlo.divide %a, %b : tensor<4x4x4xf16>\n return %0 : tensor<4x4x4xf16>\n }\n}", "notes": "divide f16 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "180_ew_bin_180_divide_f16_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f16 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf16>, %b: tensor<2x8x8xf16>) -> tensor<2x8x8xf16> {\n %0 = stablehlo.divide %a, %b : tensor<2x8x8xf16>\n return %0 : tensor<2x8x8xf16>\n }\n}", "notes": "divide f16 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "181_ew_bin_181_divide_f64_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf64>, %b: tensor<8xf64>) -> tensor<8xf64> {\n %0 = stablehlo.divide %a, %b : tensor<8xf64>\n return %0 : tensor<8xf64>\n }\n}", "notes": "divide f64 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "182_ew_bin_182_divide_f64_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf64>, %b: tensor<16xf64>) -> tensor<16xf64> {\n %0 = stablehlo.divide %a, %b : tensor<16xf64>\n return %0 : tensor<16xf64>\n }\n}", "notes": "divide f64 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "183_ew_bin_183_divide_f64_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf64>, %b: tensor<32xf64>) -> tensor<32xf64> {\n %0 = stablehlo.divide %a, %b : tensor<32xf64>\n return %0 : tensor<32xf64>\n }\n}", "notes": "divide f64 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "184_ew_bin_184_divide_f64_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf64>, %b: tensor<64xf64>) -> tensor<64xf64> {\n %0 = stablehlo.divide %a, %b : tensor<64xf64>\n return %0 : tensor<64xf64>\n }\n}", "notes": "divide f64 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "185_ew_bin_185_divide_f64_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf64>, %b: tensor<4x4xf64>) -> tensor<4x4xf64> {\n %0 = stablehlo.divide %a, %b : tensor<4x4xf64>\n return %0 : tensor<4x4xf64>\n }\n}", "notes": "divide f64 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "186_ew_bin_186_divide_f64_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf64>, %b: tensor<8x8xf64>) -> tensor<8x8xf64> {\n %0 = stablehlo.divide %a, %b : tensor<8x8xf64>\n return %0 : tensor<8x8xf64>\n }\n}", "notes": "divide f64 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "187_ew_bin_187_divide_f64_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf64>, %b: tensor<8x16xf64>) -> tensor<8x16xf64> {\n %0 = stablehlo.divide %a, %b : tensor<8x16xf64>\n return %0 : tensor<8x16xf64>\n }\n}", "notes": "divide f64 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "188_ew_bin_188_divide_f64_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf64>, %b: tensor<4x4x4xf64>) -> tensor<4x4x4xf64> {\n %0 = stablehlo.divide %a, %b : tensor<4x4x4xf64>\n return %0 : tensor<4x4x4xf64>\n }\n}", "notes": "divide f64 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "189_ew_bin_189_divide_f64_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.divide elementwise to two f64 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf64>, %b: tensor<2x8x8xf64>) -> tensor<2x8x8xf64> {\n %0 = stablehlo.divide %a, %b : tensor<2x8x8xf64>\n return %0 : tensor<2x8x8xf64>\n }\n}", "notes": "divide f64 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "190_ew_bin_190_maximum_f16_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf16>, %b: tensor<8xf16>) -> tensor<8xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<8xf16>\n return %0 : tensor<8xf16>\n }\n}", "notes": "maximum f16 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "191_ew_bin_191_maximum_f16_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf16>, %b: tensor<16xf16>) -> tensor<16xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<16xf16>\n return %0 : tensor<16xf16>\n }\n}", "notes": "maximum f16 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "192_ew_bin_192_maximum_f16_32", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 32.", "mlir": "module {\n func.func @f(%a: tensor<32xf16>, %b: tensor<32xf16>) -> tensor<32xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<32xf16>\n return %0 : tensor<32xf16>\n }\n}", "notes": "maximum f16 32", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "193_ew_bin_193_maximum_f16_64", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 64.", "mlir": "module {\n func.func @f(%a: tensor<64xf16>, %b: tensor<64xf16>) -> tensor<64xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<64xf16>\n return %0 : tensor<64xf16>\n }\n}", "notes": "maximum f16 64", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "194_ew_bin_194_maximum_f16_4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4xf16>, %b: tensor<4x4xf16>) -> tensor<4x4xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<4x4xf16>\n return %0 : tensor<4x4xf16>\n }\n}", "notes": "maximum f16 4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "195_ew_bin_195_maximum_f16_8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 8x8.", "mlir": "module {\n func.func @f(%a: tensor<8x8xf16>, %b: tensor<8x8xf16>) -> tensor<8x8xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<8x8xf16>\n return %0 : tensor<8x8xf16>\n }\n}", "notes": "maximum f16 8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "196_ew_bin_196_maximum_f16_8x16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 8x16.", "mlir": "module {\n func.func @f(%a: tensor<8x16xf16>, %b: tensor<8x16xf16>) -> tensor<8x16xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<8x16xf16>\n return %0 : tensor<8x16xf16>\n }\n}", "notes": "maximum f16 8x16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "197_ew_bin_197_maximum_f16_4x4x4", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 4x4x4.", "mlir": "module {\n func.func @f(%a: tensor<4x4x4xf16>, %b: tensor<4x4x4xf16>) -> tensor<4x4x4xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<4x4x4xf16>\n return %0 : tensor<4x4x4xf16>\n }\n}", "notes": "maximum f16 4x4x4", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "198_ew_bin_198_maximum_f16_2x8x8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f16 tensors of shape 2x8x8.", "mlir": "module {\n func.func @f(%a: tensor<2x8x8xf16>, %b: tensor<2x8x8xf16>) -> tensor<2x8x8xf16> {\n %0 = stablehlo.maximum %a, %b : tensor<2x8x8xf16>\n return %0 : tensor<2x8x8xf16>\n }\n}", "notes": "maximum f16 2x8x8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "199_ew_bin_199_maximum_f32_8", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f32 tensors of shape 8.", "mlir": "module {\n func.func @f(%a: tensor<8xf32>, %b: tensor<8xf32>) -> tensor<8xf32> {\n %0 = stablehlo.maximum %a, %b : tensor<8xf32>\n return %0 : tensor<8xf32>\n }\n}", "notes": "maximum f32 8", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}
{"id": "200_ew_bin_200_maximum_f32_16", "difficulty": "programmatic-wide", "nl": "Write a function that applies stablehlo.maximum elementwise to two f32 tensors of shape 16.", "mlir": "module {\n func.func @f(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.maximum %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "maximum f32 16", "dialect": "stablehlo+func", "source": "day_f8_wide_sweep"}