{"id": "01_add-1d", "difficulty": "easy", "nl": "Write a function that adds two 1-D f32 tensors of 16 elements using stablehlo.add.", "mlir": "module {\n func.func @a(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "canonical stablehlo.add", "dialect": "stablehlo+func"}
{"id": "02_add-2d-dynamic", "difficulty": "easy", "nl": "Write a function that adds two 2-D f32 tensors with dynamic shapes and returns the result.", "mlir": "module {\n func.func @add2d(%a: tensor<?x?xf32>, %b: tensor<?x?xf32>) -> tensor<?x?xf32> {\n %0 = stablehlo.add %a, %b : tensor<?x?xf32>\n return %0 : tensor<?x?xf32>\n }\n}", "notes": "dynamic-shape addition", "dialect": "stablehlo+func"}
{"id": "03_subtract-1d-i32", "difficulty": "easy", "nl": "Write a function that subtracts two 1-D i32 tensors elementwise.", "mlir": "module {\n func.func @sub(%a: tensor<8xi32>, %b: tensor<8xi32>) -> tensor<8xi32> {\n %0 = stablehlo.subtract %a, %b : tensor<8xi32>\n return %0 : tensor<8xi32>\n }\n}", "notes": "integer subtraction", "dialect": "stablehlo+func"}
{"id": "04_multiply-2d", "difficulty": "easy", "nl": "Write a function that multiplies two 4x4 f32 tensors elementwise using stablehlo.multiply.", "mlir": "module {\n func.func @mul(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.multiply %a, %b : tensor<4x4xf32>\n return %0 : tensor<4x4xf32>\n }\n}", "notes": "static 4x4 multiply", "dialect": "stablehlo+func"}
{"id": "05_divide-f64", "difficulty": "easy", "nl": "Write a function that divides two 1-D f64 tensors of 32 elements using stablehlo.divide.", "mlir": "module {\n func.func @div(%a: tensor<32xf64>, %b: tensor<32xf64>) -> tensor<32xf64> {\n %0 = stablehlo.divide %a, %b : tensor<32xf64>\n return %0 : tensor<32xf64>\n }\n}", "notes": "f64 division", "dialect": "stablehlo+func"}
{"id": "06_abs-f32", "difficulty": "easy", "nl": "Write a function that computes the elementwise absolute value of a 1-D f32 tensor.", "mlir": "module {\n func.func @ab(%a: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.abs %a : tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "abs", "dialect": "stablehlo+func"}
{"id": "07_exp-1d", "difficulty": "easy", "nl": "Write a function that computes the elementwise exponential of a 1-D f32 tensor of 10 elements.", "mlir": "module {\n func.func @ex(%a: tensor<10xf32>) -> tensor<10xf32> {\n %0 = stablehlo.exponential %a : tensor<10xf32>\n return %0 : tensor<10xf32>\n }\n}", "notes": "exp", "dialect": "stablehlo+func"}
{"id": "08_abs-dynamic", "difficulty": "medium", "nl": "Write a function that computes the elementwise absolute value of a dynamic-shape 2-D f32 tensor.", "mlir": "module {\n func.func @abd(%a: tensor<?x?xf32>) -> tensor<?x?xf32> {\n %0 = stablehlo.abs %a : tensor<?x?xf32>\n return %0 : tensor<?x?xf32>\n }\n}", "notes": "dynamic abs", "dialect": "stablehlo+func"}
{"id": "09_transpose-2d", "difficulty": "medium", "nl": "Write a function that transposes a 4x8 f32 tensor producing an 8x4 tensor.", "mlir": "module {\n func.func @t(%a: tensor<4x8xf32>) -> tensor<8x4xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<4x8xf32>) -> tensor<8x4xf32>\n return %0 : tensor<8x4xf32>\n }\n}", "notes": "transpose 2D", "dialect": "stablehlo+func"}
{"id": "10_transpose-3d", "difficulty": "medium", "nl": "Write a function that transposes a 2x3x4 f32 tensor with permutation [2, 0, 1] producing a 4x2x3 tensor.", "mlir": "module {\n func.func @t3(%a: tensor<2x3x4xf32>) -> tensor<4x2x3xf32> {\n %0 = stablehlo.transpose %a, dims = [2, 0, 1] : (tensor<2x3x4xf32>) -> tensor<4x2x3xf32>\n return %0 : tensor<4x2x3xf32>\n }\n}", "notes": "3D transpose", "dialect": "stablehlo+func"}
{"id": "11_transpose-square", "difficulty": "easy", "nl": "Write a function that transposes a 3x3 f32 tensor.", "mlir": "module {\n func.func @t(%a: tensor<3x3xf32>) -> tensor<3x3xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<3x3xf32>) -> tensor<3x3xf32>\n return %0 : tensor<3x3xf32>\n }\n}", "notes": "square transpose", "dialect": "stablehlo+func"}
{"id": "12_broadcast-1d-to-2d", "difficulty": "medium", "nl": "Write a function that broadcasts a 1-D f32 tensor of 8 elements to a 4x8 2-D tensor along dimension 1.", "mlir": "module {\n func.func @b(%a: tensor<8xf32>) -> tensor<4x8xf32> {\n %0 = stablehlo.broadcast_in_dim %a, dims = [1] : (tensor<8xf32>) -> tensor<4x8xf32>\n return %0 : tensor<4x8xf32>\n }\n}", "notes": "broadcast 1D to 2D", "dialect": "stablehlo+func"}
{"id": "13_broadcast-scalar-to-vector", "difficulty": "medium", "nl": "Write a function that broadcasts a scalar f32 (shape [1]) to a 1-D f32 tensor of 16 elements.", "mlir": "module {\n func.func @bs(%a: tensor<1xf32>) -> tensor<16xf32> {\n %0 = stablehlo.broadcast_in_dim %a, dims = [0] : (tensor<1xf32>) -> tensor<16xf32>\n return %0 : tensor<16xf32>\n }\n}", "notes": "scalar broadcast", "dialect": "stablehlo+func"}
{"id": "14_reshape-flatten", "difficulty": "medium", "nl": "Write a function that flattens a 4x8 f32 tensor into a 1-D tensor of 32 elements.", "mlir": "module {\n func.func @r(%a: tensor<4x8xf32>) -> tensor<32xf32> {\n %0 = stablehlo.reshape %a : (tensor<4x8xf32>) -> tensor<32xf32>\n return %0 : tensor<32xf32>\n }\n}", "notes": "flatten", "dialect": "stablehlo+func"}
{"id": "15_reshape-2d-to-3d", "difficulty": "medium", "nl": "Write a function that reshapes a 12x8 f32 tensor into a 3x4x8 3-D tensor.", "mlir": "module {\n func.func @r(%a: tensor<12x8xf32>) -> tensor<3x4x8xf32> {\n %0 = stablehlo.reshape %a : (tensor<12x8xf32>) -> tensor<3x4x8xf32>\n return %0 : tensor<3x4x8xf32>\n }\n}", "notes": "2D to 3D reshape", "dialect": "stablehlo+func"}
{"id": "16_reshape-4x8-to-8x4", "difficulty": "easy", "nl": "Write a function that reshapes a 4x8 f32 tensor into an 8x4 tensor.", "mlir": "module {\n func.func @r(%a: tensor<4x8xf32>) -> tensor<8x4xf32> {\n %0 = stablehlo.reshape %a : (tensor<4x8xf32>) -> tensor<8x4xf32>\n return %0 : tensor<8x4xf32>\n }\n}", "notes": "reshape shape change", "dialect": "stablehlo+func"}
{"id": "17_dot_general-matmul", "difficulty": "medium", "nl": "Write a function that performs a matrix multiplication of a 4x8 f32 tensor and an 8x16 f32 tensor using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<4x8xf32>, %b: tensor<8x16xf32>) -> tensor<4x16xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<4x8xf32>, tensor<8x16xf32>) -> tensor<4x16xf32>\n return %0 : tensor<4x16xf32>\n }\n}", "notes": "canonical matmul", "dialect": "stablehlo+func"}
{"id": "18_dot_general-square", "difficulty": "medium", "nl": "Write a function that multiplies two 8x8 f32 tensors using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<8x8xf32>, tensor<8x8xf32>) -> tensor<8x8xf32>\n return %0 : tensor<8x8xf32>\n }\n}", "notes": "square matmul", "dialect": "stablehlo+func"}
{"id": "19_dot_general-tall-thin", "difficulty": "medium", "nl": "Multiply a 128x16 f32 tensor by a 16x4 f32 tensor using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<128x16xf32>, %b: tensor<16x4xf32>) -> tensor<128x4xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<128x16xf32>, tensor<16x4xf32>) -> tensor<128x4xf32>\n return %0 : tensor<128x4xf32>\n }\n}", "notes": "tall-thin matmul", "dialect": "stablehlo+func"}
{"id": "20_add-multiply-chain", "difficulty": "medium", "nl": "Write a function that adds two 1-D f32 tensors and then multiplies the sum by the first input.", "mlir": "module {\n func.func @c(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n %1 = stablehlo.multiply %0, %a : tensor<16xf32>\n return %1 : tensor<16xf32>\n }\n}", "notes": "add-then-multiply", "dialect": "stablehlo+func"}
{"id": "21_abs-exp-chain", "difficulty": "medium", "nl": "Write a function that computes the exponential of the absolute value of a 1-D f32 tensor.", "mlir": "module {\n func.func @c(%a: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.abs %a : tensor<16xf32>\n %1 = stablehlo.exponential %0 : tensor<16xf32>\n return %1 : tensor<16xf32>\n }\n}", "notes": "abs then exp", "dialect": "stablehlo+func"}
{"id": "22_matmul-add-bias", "difficulty": "hard", "nl": "Matrix-multiply a 4x8 f32 tensor by an 8x16 f32 tensor, then add a 4x16 bias tensor.", "mlir": "module {\n func.func @lin(%a: tensor<4x8xf32>, %b: tensor<8x16xf32>, %bias: tensor<4x16xf32>) -> tensor<4x16xf32> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<4x8xf32>, tensor<8x16xf32>) -> tensor<4x16xf32>\n %1 = stablehlo.add %0, %bias : tensor<4x16xf32>\n return %1 : tensor<4x16xf32>\n }\n}", "notes": "linear layer", "dialect": "stablehlo+func"}
{"id": "23_transpose-matmul", "difficulty": "hard", "nl": "Transpose an 8x4 f32 tensor, then matrix-multiply the result with an 8x16 f32 tensor.", "mlir": "module {\n func.func @tm(%a: tensor<8x4xf32>, %b: tensor<8x16xf32>) -> tensor<4x16xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<8x4xf32>) -> tensor<4x8xf32>\n %1 = stablehlo.dot_general %0, %b, contracting_dims = [1] x [0] : (tensor<4x8xf32>, tensor<8x16xf32>) -> tensor<4x16xf32>\n return %1 : tensor<4x16xf32>\n }\n}", "notes": "transpose+matmul", "dialect": "stablehlo+func"}
{"id": "24_reshape-add", "difficulty": "medium", "nl": "Reshape a 4x4 f32 tensor into a 16-element 1-D tensor, then add to an existing 16-element tensor.", "mlir": "module {\n func.func @ra(%a: tensor<4x4xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.reshape %a : (tensor<4x4xf32>) -> tensor<16xf32>\n %1 = stablehlo.add %0, %b : tensor<16xf32>\n return %1 : tensor<16xf32>\n }\n}", "notes": "reshape+add", "dialect": "stablehlo+func"}
{"id": "25_broadcast-multiply", "difficulty": "hard", "nl": "Broadcast a length-8 1-D f32 tensor to a 4x8 tensor, then multiply with an existing 4x8 tensor.", "mlir": "module {\n func.func @bm(%a: tensor<8xf32>, %b: tensor<4x8xf32>) -> tensor<4x8xf32> {\n %0 = stablehlo.broadcast_in_dim %a, dims = [1] : (tensor<8xf32>) -> tensor<4x8xf32>\n %1 = stablehlo.multiply %0, %b : tensor<4x8xf32>\n return %1 : tensor<4x8xf32>\n }\n}", "notes": "broadcast+multiply", "dialect": "stablehlo+func"}
{"id": "26_add-3d", "difficulty": "easy", "nl": "Write a function that adds two 2x3x4 f32 tensors elementwise.", "mlir": "module {\n func.func @a3(%a: tensor<2x3x4xf32>, %b: tensor<2x3x4xf32>) -> tensor<2x3x4xf32> {\n %0 = stablehlo.add %a, %b : tensor<2x3x4xf32>\n return %0 : tensor<2x3x4xf32>\n }\n}", "notes": "3D add", "dialect": "stablehlo+func"}
{"id": "27_subtract-bf16", "difficulty": "easy", "nl": "Write a function that subtracts two 16-element bf16 tensors elementwise.", "mlir": "module {\n func.func @s(%a: tensor<16xbf16>, %b: tensor<16xbf16>) -> tensor<16xbf16> {\n %0 = stablehlo.subtract %a, %b : tensor<16xbf16>\n return %0 : tensor<16xbf16>\n }\n}", "notes": "bf16 arithmetic", "dialect": "stablehlo+func"}
{"id": "28_dot_general-f16", "difficulty": "medium", "nl": "Multiply two 16x16 f16 tensors using stablehlo.dot_general.", "mlir": "module {\n func.func @m(%a: tensor<16x16xf16>, %b: tensor<16x16xf16>) -> tensor<16x16xf16> {\n %0 = stablehlo.dot_general %a, %b, contracting_dims = [1] x [0] : (tensor<16x16xf16>, tensor<16x16xf16>) -> tensor<16x16xf16>\n return %0 : tensor<16x16xf16>\n }\n}", "notes": "f16 matmul", "dialect": "stablehlo+func"}
{"id": "29_add-multiply-abs-chain", "difficulty": "hard", "nl": "Write a function that computes the absolute value of (a + b) * a for two 1-D f32 tensors.", "mlir": "module {\n func.func @c(%a: tensor<16xf32>, %b: tensor<16xf32>) -> tensor<16xf32> {\n %0 = stablehlo.add %a, %b : tensor<16xf32>\n %1 = stablehlo.multiply %0, %a : tensor<16xf32>\n %2 = stablehlo.abs %1 : tensor<16xf32>\n return %2 : tensor<16xf32>\n }\n}", "notes": "3-op chain", "dialect": "stablehlo+func"}
{"id": "30_transpose-add", "difficulty": "medium", "nl": "Transpose a 4x4 f32 tensor then add it back to the original.", "mlir": "module {\n func.func @ta(%a: tensor<4x4xf32>) -> tensor<4x4xf32> {\n %0 = stablehlo.transpose %a, dims = [1, 0] : (tensor<4x4xf32>) -> tensor<4x4xf32>\n %1 = stablehlo.add %0, %a : tensor<4x4xf32>\n return %1 : tensor<4x4xf32>\n }\n}", "notes": "symmetric sum", "dialect": "stablehlo+func"}