anonymous-submission042 committed on
Commit 457e886 · verified · 1 Parent(s): 6d96a18

Initial upload: RFSchemBench v1.0.0 (permissive + nc_allowed)

README.md ADDED
---
language:
- en
- zh
license:
- cc-by-4.0
- cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
- multiple-choice
pretty_name: RFSchemBench
size_categories:
- 1K<n<10K
configs:
- config_name: permissive
  default: true
  data_files:
  - split: test
    path: data/permissive/test-*.parquet
- config_name: nc_allowed
  data_files:
  - split: test
    path: data/nc_allowed/test-*.parquet
tags:
- rf
- circuit
- schematic
- multimodal
- electronic-engineering
- benchmark
- vqa
---

# RFSchemBench

A multimodal LLM evaluation benchmark for **radio-frequency circuit schematic understanding**, organized into a four-level semantic hierarchy:

1. **Component Understanding** — recognition of visible components, parameters, labels, and supply rails.
2. **Structural Understanding** — net membership, pin-to-net mapping, boundary connectivity, and pair-via-net topological reasoning.
3. **Functional Understanding** — circuit functional role, signal-form classification, supply strategy, and sub-type identification.
4. **Dynamic Reasoning** — counterfactual plot choice and schematic-modification ↔ simulation-result matching, grounded in `ngspice` simulation.

The benchmark contains **2,348 questions across 590 rendered schematic pages** drawn from publicly available RF schematic sources.

## Quick start

```python
from datasets import load_dataset

# Permissive subset (CC-BY-4.0; recommended for most users)
ds = load_dataset("anonymous-submission042/RFSchemBench", "permissive", split="test")

# Full benchmark, including a NonCommercial-ShareAlike subset
ds_nc = load_dataset("anonymous-submission042/RFSchemBench", "nc_allowed", split="test")

print(ds[0]["question"], "→ answer:", ds[0]["answer"])
ds[0]["image"].show()  # PIL.Image of the schematic
```

## Configurations

| Config | Rows | License | Notes |
|---|---:|---|---|
| `permissive` (default) | 2,258 | `CC-BY-4.0` | Excludes the NC-licensed source class. Suitable for commercial / industrial reviewers. |
| `nc_allowed` | 2,348 | mixed `CC-BY-4.0` + `CC-BY-NC-SA-4.0` | Full benchmark. A per-row `license` field marks which items are NC-licensed. NonCommercial use only. |

## Schema

Each row has the following fields:

| Field | Type | Description |
|---|---|---|
| `question_id` | string | Unique identifier (stable across releases) |
| `item_id` | string | Source schematic identifier |
| `source` | string | Source class (`qucs` / `kicad` / `myriadrf` / `m17` / `oresat`) |
| `level` | string | One of `Component Understanding` / `Structural Understanding` / `Functional Understanding` / `Dynamic Reasoning` |
| `task_id` | string | Internal task type (e.g. `existence`, `pin_to_net`, `counterfactual_plot_choice`); useful for fine-grained analysis |
| `category` | string | Coarse-grained tag |
| `question` | string | English prompt (the text models are evaluated on) |
| `question_zh` | string | Chinese parallel translation (not used for evaluation) |
| `image` | PIL.Image | Primary schematic rendering (PNG) |
| `context_images` | list of `{caption, image}` | Auxiliary context images (Dynamic Reasoning only — schematic plus baseline / variant simulation plots) |
| `options` | list of `{label, text, image}` | Multiple-choice options (Dynamic Reasoning only). Some options have only `text`; others have both `text` and `image`. |
| `answer_type` | string | `enum_label` / `comma_separated_list` / `integer` / `short_text` |
| `answer_allowed` | list of string | Permitted enum values (empty for non-enum types) |
| `answer` | string | Gold answer; list-type answers are comma-separated |
| `source_schematic` | string | Provenance: original `.kicad_sch` / `.sch` path |
| `license` | string | Per-row license tag (`CC-BY-4.0` or `CC-BY-NC-SA-4.0`) |
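
A minimal sketch of consuming these fields. The field names match the schema above, but the normalization policy (order-insensitive lists, case-insensitive text) is our assumption, not an official scorer:

```python
def normalize_answer(row):
    """Normalize a gold answer according to its declared answer_type.

    Hypothetical helper: field names follow the dataset schema, but the
    normalization choices are illustrative assumptions.
    """
    ans = row["answer"].strip()
    kind = row["answer_type"]
    if kind == "comma_separated_list":
        # List answers are comma-separated; compare order-insensitively.
        return sorted(part.strip() for part in ans.split(","))
    if kind == "integer":
        return int(ans)
    # enum_label / short_text: case-insensitive string comparison is a
    # reasonable default; adjust if an official scorer is released.
    return ans.lower()

row = {"answer": "C3, R1, L2", "answer_type": "comma_separated_list"}
print(normalize_answer(row))  # → ['C3', 'L2', 'R1']
```

Apply the same normalization to model predictions before comparing against the gold answer.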

## Construction

The benchmark is constructed via **expert-rule-guided programmatic generation from authoritative sources**:

- Domain experts encode question-generation rules and gold-answer semantics in Python programs.
- Gold answers are extracted deterministically from authoritative source artifacts (KiCad CLI output, the native Qucs schematic graph, `ngspice` simulation outputs).
- LLMs are deliberately **excluded from the gold-answer path**; they serve only as an auxiliary page-level RF-relevance gate.
- An iterative rule-refinement loop catches edge cases during construction; the released gold answers reflect the latest revisions.

This avoids the gold-answer noise floor of LLM-as-generator benchmarks while scaling beyond purely human-curated efforts.
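
As a toy illustration of this construction style (hypothetical netlist and rule, not the released pipeline): given a structured netlist extracted from an authoritative source, a deterministic rule emits pin-to-net questions and their gold answers with no LLM involvement.

```python
# Hypothetical netlist extract: net name -> list of (component, pin).
netlist = {
    "GND":  [("C1", "2"), ("R1", "2")],
    "RFIN": [("C1", "1"), ("L1", "1")],
}

def pin_to_net_items(netlist):
    """Deterministic rule: one question per (component, pin) attachment."""
    for net, pins in sorted(netlist.items()):
        for comp, pin in pins:
            yield {
                "question": f"Which net is pin {pin} of {comp} connected to?",
                "answer": net,
            }

for item in pin_to_net_items(netlist):
    print(item["question"], "→", item["answer"])
```

Because the gold answer is read directly from the netlist, re-running the rule always reproduces the same answers.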

## License

This dataset is released under a **two-tier license model** because the upstream sources carry heterogeneous licenses:

- **`permissive` config** (recommended default): all rows under `CC-BY-4.0`. Compatible with commercial use, redistribution, and derivative works, subject to attribution.
- **`nc_allowed` config**: adds one source class (`m17` digital-radio community hardware, 90 questions) that is upstream-licensed under `CC-BY-NC-SA-4.0` (NonCommercial-ShareAlike). The per-row `license` field marks the affected items; users must respect NC + ShareAlike terms for those rows.

Per-source licensing summary:

| Source class | Upstream license profile | Tier inclusion |
|---|---|---|
| `qucs` | GPL-2.0 example schematics (renderings treated as derivative-work CC-BY-4.0) | both |
| `kicad` | mostly MIT / Apache-2.0 / GPL-3.0 mix | both |
| `myriadrf` | mostly Apache-2.0 / CC-BY-4.0 | both |
| `oresat` | CERN-OHL-S-2.0 (renderings treated as share-alike-compatible CC-BY-4.0) | both |
| `m17` | **CC-BY-NC-SA-4.0** ⚠ NC | `nc_allowed` only |

For redistribution that requires fully permissive licensing, use only the `permissive` config.
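
When working from the `nc_allowed` config, the per-row `license` field is what separates commercially usable rows from the `m17` NC rows. A minimal sketch over plain dicts (with `datasets`, the equivalent is `ds.filter(lambda r: r["license"] == "CC-BY-4.0")`):

```python
def split_by_license(rows):
    """Partition rows on the per-row `license` field.

    Illustrative helper: the two license strings are exactly the values
    documented in the schema above.
    """
    by, nc = [], []
    for row in rows:
        (by if row["license"] == "CC-BY-4.0" else nc).append(row)
    return by, nc

rows = [
    {"source": "m17", "license": "CC-BY-NC-SA-4.0"},
    {"source": "qucs", "license": "CC-BY-4.0"},
]
by, nc = split_by_license(rows)
```

Report results on the NC partition separately if your use case is non-commercial only.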
119
+
120
+ ## Limitations
121
+
122
+ 1. **Source-class size imbalance**: question counts per source class span 40–974; per-source claims should be reported with N.
123
+ 2. **Dynamic Reasoning scope**: only one source class has the simulation-grounded subset (55 questions). This dimension is reported as a small stress test, not the main result.
124
+ 3. **Language**: questions are evaluated in English; Chinese parallel translations are provided for reference only.
125
+ 4. **Single-image protocol**: each question is paired with one primary schematic image (Dynamic Reasoning rows additionally provide context plots / option plots).
126
+ 5. **Anonymized release**: this submission account is for double-blind peer review. The dataset will be transferred to the official maintainer account upon acceptance.
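
Given limitation 1, reporting a source-balanced macro accuracy alongside pooled micro accuracy keeps the largest source class from dominating the headline number. A minimal sketch, assuming per-question records with hypothetical `source` and `correct` fields:

```python
from collections import defaultdict

def micro_and_macro(records):
    """records: iterable of {"source": str, "correct": bool} dicts
    (hypothetical evaluation-log shape).

    Micro accuracy pools all questions; macro averages the per-source
    accuracies with equal weight per source class.
    """
    per_source = defaultdict(lambda: [0, 0])  # source -> [correct, total]
    for r in records:
        per_source[r["source"]][0] += int(r["correct"])
        per_source[r["source"]][1] += 1
    micro = sum(c for c, _ in per_source.values()) / sum(t for _, t in per_source.values())
    macro = sum(c / t for c, t in per_source.values()) / len(per_source)
    return micro, macro
```

For example, three correct answers on a large source plus one wrong answer on a small one gives micro 0.75 but macro 0.5, making the small-source failure visible.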

## Citation

```bibtex
@misc{rfschembench2026,
  title  = {RFSchemBench: A Multi-Source, Hierarchically-Structured Multimodal Benchmark for RF Circuit Schematic Understanding},
  author = {Anonymous},
  year   = {2026},
  note   = {Submitted to NeurIPS 2026 Evaluations \& Datasets Track}
}
```

## Contact

For benchmark integrity issues (gold-answer corrections, RF-gate disputes, parser / scorer concerns), please open a Discussion on this dataset's Hugging Face page. During the double-blind review window, identifying contact details are intentionally withheld.
croissant.json ADDED
{
  "@context": {
    "@language": "en",
    "@vocab": "https://schema.org/",
    "citeAs": "cr:citeAs",
    "column": "cr:column",
    "conformsTo": "dct:conformsTo",
    "cr": "http://mlcommons.org/croissant/",
    "rai": "http://mlcommons.org/croissant/RAI/",
    "data": {
      "@id": "cr:data",
      "@type": "@json"
    },
    "dataType": {
      "@id": "cr:dataType",
      "@type": "@vocab"
    },
    "dct": "http://purl.org/dc/terms/",
    "examples": {
      "@id": "cr:examples",
      "@type": "@json"
    },
    "extract": "cr:extract",
    "field": "cr:field",
    "fileProperty": "cr:fileProperty",
    "fileObject": "cr:fileObject",
    "fileSet": "cr:fileSet",
    "format": "cr:format",
    "includes": "cr:includes",
    "isLiveDataset": "cr:isLiveDataset",
    "jsonPath": "cr:jsonPath",
    "key": "cr:key",
    "md5": "cr:md5",
    "parentField": "cr:parentField",
    "path": "cr:path",
    "recordSet": "cr:recordSet",
    "references": "cr:references",
    "regex": "cr:regex",
    "repeated": "cr:repeated",
    "replace": "cr:replace",
    "sc": "https://schema.org/",
    "separator": "cr:separator",
    "source": "cr:source",
    "subField": "cr:subField",
    "transform": "cr:transform"
  },
  "@type": "sc:Dataset",
  "conformsTo": "http://mlcommons.org/croissant/1.0",
  "name": "RFSchemBench",
  "description": "A multimodal LLM evaluation benchmark for RF circuit schematic understanding. 2,348 questions across 590 schematic pages from 5 publicly available RF data sources, organized in a four-level semantic hierarchy: Component Understanding, Structural Understanding, Functional Understanding, and Dynamic Reasoning (simulation-grounded counterfactual plot reasoning). Constructed via expert-rule-guided programmatic generation; LLMs are deliberately excluded from the gold-answer path.",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "url": "https://huggingface.co/datasets/anonymous-submission042/RFSchemBench",
  "version": "1.0.0",
  "citeAs": "@misc{rfschembench2026, title={RFSchemBench: A Multi-Source, Hierarchically-Structured Multimodal Benchmark for RF Circuit Schematic Understanding}, author={Anonymous}, year={2026}, note={Submitted to NeurIPS 2026 Evaluations \\& Datasets Track}}",
  "creator": {
    "@type": "sc:Organization",
    "name": "Anonymous (double-blind submission)"
  },
  "keywords": [
    "RF",
    "circuit",
    "schematic",
    "multimodal",
    "benchmark",
    "VQA",
    "engineering"
  ],
  "datePublished": "2026-05-01",
  "isLiveDataset": false,
  "distribution": [
    {
      "@type": "cr:FileObject",
      "@id": "permissive-parquet",
      "name": "permissive-parquet",
      "description": "Parquet shard for the `permissive` configuration.",
      "contentUrl": "data/permissive/test-00000-of-00001.parquet",
      "encodingFormat": "application/vnd.apache.parquet",
      "license": "https://creativecommons.org/licenses/by/4.0/",
      "sha256": "4dae49e1fae73019ec4e31b990f5c8622fa48f8ababd66f6a0aa04af9f44ca62"
    },
    {
      "@type": "cr:FileObject",
      "@id": "nc_allowed-parquet",
      "name": "nc_allowed-parquet",
      "description": "Parquet shard for the `nc_allowed` configuration.",
      "contentUrl": "data/nc_allowed/test-00000-of-00001.parquet",
      "encodingFormat": "application/vnd.apache.parquet",
      "license": "https://creativecommons.org/licenses/by-nc-sa/4.0/",
      "sha256": "42bbecbc878815e7f7c5cdda2bbeeceee3ecf9791302b424df8f5997e45e869f"
    }
  ],
  "recordSet": [
    {
      "@type": "cr:RecordSet",
      "@id": "permissive",
      "name": "RFSchemBench - permissive subset",
      "description": "RFSchemBench rows from sources licensed under CC-BY-4.0-compatible terms (excludes the m17 NC-licensed source class). N=2,258.",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "permissive/question_id",
          "name": "question_id",
          "description": "Stable unique identifier",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "question_id" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/item_id",
          "name": "item_id",
          "description": "Source schematic identifier",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "item_id" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/source",
          "name": "source",
          "description": "Source class: qucs / kicad / myriadrf / m17 / oresat",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "source" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/level",
          "name": "level",
          "description": "One of: Component Understanding / Structural Understanding / Functional Understanding / Dynamic Reasoning",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "level" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/task_id",
          "name": "task_id",
          "description": "Internal task type (e.g. existence, pin_to_net, counterfactual_plot_choice)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "task_id" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/category",
          "name": "category",
          "description": "Coarse-grained category tag",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "category" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/question",
          "name": "question",
          "description": "English question text (what models are evaluated on)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "question" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/question_zh",
          "name": "question_zh",
          "description": "Chinese parallel translation",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "question_zh" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/image",
          "name": "image",
          "description": "Primary schematic rendering (PNG bytes embedded)",
          "dataType": "sc:ImageObject",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "image" },
            "transform": { "jsonPath": "$.bytes" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/context_images",
          "name": "context_images",
          "description": "Auxiliary context images (Dynamic Reasoning rows only). List of {caption, image-bytes}.",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "context_images" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/options",
          "name": "options",
          "description": "Multi-choice options for Dynamic Reasoning rows. List of {label, text, image-bytes}.",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "options" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/answer_type",
          "name": "answer_type",
          "description": "enum_label / comma_separated_list / integer / short_text",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "answer_type" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/answer_allowed",
          "name": "answer_allowed",
          "description": "Permitted enum values (empty for non-enum types)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "answer_allowed" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/answer",
          "name": "answer",
          "description": "Gold answer; for list-type, comma-separated",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "answer" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/source_schematic",
          "name": "source_schematic",
          "description": "Provenance: original .kicad_sch / .sch path",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "source_schematic" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "permissive/license",
          "name": "license",
          "description": "Per-row license tag (CC-BY-4.0 or CC-BY-NC-SA-4.0)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "permissive-parquet" },
            "extract": { "column": "license" }
          }
        }
      ]
    },
    {
      "@type": "cr:RecordSet",
      "@id": "nc_allowed",
      "name": "RFSchemBench - NC-allowed full subset",
      "description": "Full RFSchemBench (all 5 source classes, N=2,348). Includes 90 rows under CC-BY-NC-SA-4.0. NonCommercial usage only.",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/question_id",
          "name": "question_id",
          "description": "Stable unique identifier",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "question_id" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/item_id",
          "name": "item_id",
          "description": "Source schematic identifier",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "item_id" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/source",
          "name": "source",
          "description": "Source class: qucs / kicad / myriadrf / m17 / oresat",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "source" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/level",
          "name": "level",
          "description": "One of: Component Understanding / Structural Understanding / Functional Understanding / Dynamic Reasoning",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "level" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/task_id",
          "name": "task_id",
          "description": "Internal task type (e.g. existence, pin_to_net, counterfactual_plot_choice)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "task_id" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/category",
          "name": "category",
          "description": "Coarse-grained category tag",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "category" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/question",
          "name": "question",
          "description": "English question text (what models are evaluated on)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "question" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/question_zh",
          "name": "question_zh",
          "description": "Chinese parallel translation",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "question_zh" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/image",
          "name": "image",
          "description": "Primary schematic rendering (PNG bytes embedded)",
          "dataType": "sc:ImageObject",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "image" },
            "transform": { "jsonPath": "$.bytes" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/context_images",
          "name": "context_images",
          "description": "Auxiliary context images (Dynamic Reasoning rows only). List of {caption, image-bytes}.",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "context_images" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/options",
          "name": "options",
          "description": "Multi-choice options for Dynamic Reasoning rows. List of {label, text, image-bytes}.",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "options" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/answer_type",
          "name": "answer_type",
          "description": "enum_label / comma_separated_list / integer / short_text",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "answer_type" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/answer_allowed",
          "name": "answer_allowed",
          "description": "Permitted enum values (empty for non-enum types)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "answer_allowed" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/answer",
          "name": "answer",
          "description": "Gold answer; for list-type, comma-separated",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "answer" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/source_schematic",
          "name": "source_schematic",
          "description": "Provenance: original .kicad_sch / .sch path",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "source_schematic" }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "nc_allowed/license",
          "name": "license",
          "description": "Per-row license tag (CC-BY-4.0 or CC-BY-NC-SA-4.0)",
          "dataType": "sc:Text",
          "source": {
            "fileObject": { "@id": "nc_allowed-parquet" },
            "extract": { "column": "license" }
          }
        }
      ]
    }
  ],
  "rai:dataCollection": "Schematic source files (.kicad_sch, .qucs) were collected from publicly available GitHub repositories for the qucs / kicad / myriadrf / m17 / oresat ecosystems. Pages were rendered to PNG via KiCad CLI and Qucs native renderers. A page-level Qwen3.6 RF-relevance gate filtered out non-RF pages.",
  "rai:dataCollectionType": "Crawled / programmatically rendered from authoritative open-source schematic repositories.",
  "rai:dataCollectionRawData": "Original `.kicad_sch` and `.qucs` files plus their rendered images.",
  "rai:dataPreprocessingProtocol": "Each schematic page is rendered to a primary PNG image. For Dynamic Reasoning, ngspice simulation outputs and counterfactual variant plots are pre-computed. RF-relevance gating applied at the page level.",
  "rai:dataAnnotationProtocol": "Gold answers are produced deterministically by Python rules authored by domain experts, extracting structured ground truth from KiCad CLI / Qucs / ngspice outputs. LLMs are NOT used in the gold-answer path. Iterative rule refinement: experts review generated questions and patch rules when systematic errors are found; the released version reflects the latest revisions.",
  "rai:dataAnnotationPlatform": "Custom Python pipelines under `qa_pipelines/<source>/` in the source repository.",
  "rai:dataAnnotationAnalysis": "An independent post-hoc gold audit on a stratified sample is planned as part of the validity protocol; results will be reported in the camera-ready paper.",
  "rai:annotationsPerItem": "1 gold answer per question, deterministic from rules.",
  "rai:annotatorDemographics": "Gold answers produced by deterministic rule code; no human annotators per item.",
  "rai:dataUseCases": [
    "Evaluation of multimodal LLMs on RF circuit schematic understanding.",
    "Diagnostic analysis of structural connectivity reasoning in vision-language models.",
    "Reasoning-mode (thinking on/off) ablation studies."
  ],
  "rai:dataLimitations": "Per-source-class question counts are imbalanced (40 to 974); per-source claims should be reported with N. The Dynamic Reasoning subset (N=55) exists only for the simulation-capable source class. Questions are evaluated in English; Chinese parallel translations are provided for reference. Single-image-per-question protocol.",
  "rai:dataBiases": "Source-class bias: the largest source class contributes ~41% of questions, which can dominate micro accuracy. We provide a `permissive` config that excludes the NonCommercial source and recommend reporting both micro and source-balanced macro accuracy. Domain bias: schematics are biased toward open-source SDR / amateur radio / satellite hardware; commercial proprietary RF designs are not represented.",
  "rai:dataReleaseMaintenancePlan": "Versioned releases under semver. A Discussion thread is open for gold-answer corrections and parser / scorer disputes. After acceptance, this anonymous repository will be transferred to a maintainer account.",
  "rai:personalSensitiveInformation": "None. The dataset contains schematic images from open-source hardware projects, with no personally identifiable information, biometric data, or sensitive content.",
  "rai:dataSocialImpact": "Positive: enables systematic evaluation of multimodal LLMs on RF engineering tasks, supporting safer deployment in EE / RF design workflows. Risk: LLMs that perform well on this benchmark are not certified for production EE design. Use as an evaluation tool, not deployment certification."
}
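
The `distribution` entries above declare a `sha256` digest per parquet shard. A minimal sketch for verifying local downloads against those digests (passing the parsed metadata and a `file_bytes_by_id` mapping is our convention, not part of the Croissant spec):

```python
import hashlib

def verify_distribution(meta, file_bytes_by_id):
    """Check downloaded shards against the sha256 digests declared in the
    croissant `distribution` list.

    meta: the parsed croissant JSON-LD, e.g. json.load(open("croissant.json")).
    file_bytes_by_id: maps a FileObject @id (e.g. "permissive-parquet")
    to that file's raw bytes.
    """
    results = {}
    for obj in meta.get("distribution", []):
        data = file_bytes_by_id.get(obj["@id"])
        if data is not None:
            # Compare the computed digest with the declared one.
            results[obj["@id"]] = hashlib.sha256(data).hexdigest() == obj["sha256"]
    return results
```

Note that the declared digests match the `oid sha256:` lines in the git-LFS pointer files below, so either can serve as the ground truth.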
data/nc_allowed/test-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:42bbecbc878815e7f7c5cdda2bbeeceee3ecf9791302b424df8f5997e45e869f
size 563110293
data/permissive/test-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4dae49e1fae73019ec4e31b990f5c8622fa48f8ababd66f6a0aa04af9f44ca62
size 541936367