---
language:
  - en
license:
  - cc-by-4.0
  - cc-by-nc-sa-4.0
task_categories:
  - visual-question-answering
  - multiple-choice
pretty_name: RFSchemBench
size_categories:
  - 1K<n<10K
configs:
  - config_name: permissive
    default: true
    data_files:
      - split: test
        path: data/permissive/test-*.parquet
  - config_name: nc_allowed
    data_files:
      - split: test
        path: data/nc_allowed/test-*.parquet
tags:
  - rf
  - circuit
  - schematic
  - multimodal
  - electronic-engineering
  - benchmark
  - vqa
---

# RFSchemBench

A multimodal LLM evaluation benchmark for radio-frequency circuit schematic understanding, organized by a four-level semantic hierarchy:

  1. Component Understanding — visible component, parameter, label, and supply-rail recognition.
  2. Structural Understanding — net membership, pin-to-net mapping, boundary connectivity, and pair-via-net topological reasoning.
  3. Functional Understanding — circuit functional role, signal-form classification, supply strategy, and sub-type identification.
  4. Dynamic Reasoning — counterfactual plot choice and schematic-modification ↔ simulation-result matching, grounded in ngspice simulation.

The benchmark contains 2,348 questions across 590 rendered schematic pages from publicly available RF schematic data.

## Quick start

```python
from datasets import load_dataset

# Permissive subset (CC-BY-4.0; recommended for most users)
ds = load_dataset("anonymous-submission042/RFSchemBench", "permissive", split="test")

# Full benchmark, including a NonCommercial-ShareAlike subset
ds_nc = load_dataset("anonymous-submission042/RFSchemBench", "nc_allowed", split="test")

print(ds[0]["question"], "→ answer:", ds[0]["answer"])
ds[0]["image"].show()  # PIL.Image of the schematic
```
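Once loaded, rows can be sliced by the four-level hierarchy via the `level` field. A minimal sketch, using hypothetical in-memory rows in place of the loaded dataset (the field names match the schema below):

```python
from collections import Counter

# Hypothetical rows standing in for entries of `ds`; the real dataset
# is loaded with `load_dataset` as shown above.
rows = [
    {"level": "Component Understanding", "source": "qucs"},
    {"level": "Component Understanding", "source": "kicad"},
    {"level": "Dynamic Reasoning", "source": "qucs"},
]

# Question counts per hierarchy level
by_level = Counter(r["level"] for r in rows)
print(by_level)
```

The same pattern works for any categorical field (`source`, `category`, `answer_type`).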

## Configurations

| Config | Rows | License | Notes |
|---|---|---|---|
| `permissive` (default) | 2,258 | CC-BY-4.0 | Excludes the NC-licensed source class. Suitable for commercial / industrial reviewers. |
| `nc_allowed` | 2,348 | mixed CC-BY-4.0 + CC-BY-NC-SA-4.0 | Full benchmark. The per-row `license` field marks which items are NC-licensed. NonCommercial usage only. |

## Schema

Each row has the following fields:

Each row has the following fields:

| Field | Type | Description |
|---|---|---|
| `question_id` | string | Unique identifier (stable across releases) |
| `item_id` | string | Source schematic identifier |
| `source` | string | Source class (`qucs` / `kicad` / `myriadrf` / `m17` / `oresat`) |
| `level` | string | One of Component Understanding / Structural Understanding / Functional Understanding / Dynamic Reasoning |
| `category` | string | Coarse-grained tag |
| `question` | string | English prompt (what models are evaluated on) |
| `image` | PIL.Image | Primary schematic rendering (`image.png`) |
| `context_images` | list of {caption, image} | Auxiliary context images (Dynamic Reasoning only: schematic plus baseline / variant simulation plots) |
| `options` | list of {label, text, image} | Multiple-choice options (Dynamic Reasoning only). Some options have only text; others have both text and image. |
| `answer_type` | string | `enum_label` / `comma_separated_list` / `integer` / `short_text` |
| `answer_allowed` | list of string | Permitted enum values (empty for non-enum types) |
| `answer` | string | Gold answer; comma-separated for list-type answers |
| `source_schematic` | string | Provenance: original `.kicad_sch` / `.sch` path |
| `license` | string | Per-row license tag (`CC-BY-4.0` or `CC-BY-NC-SA-4.0`) |
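The `answer_type` field determines how a model response should be compared against `answer`. The README does not ship an official scorer, so the following is an illustrative normalization sketch only (assumed conventions: order-insensitive, case-insensitive comparison for list answers; case-insensitive exact match otherwise):

```python
def normalize_answer(raw: str, answer_type: str):
    """Normalize a response for comparison; illustrative, not the official scorer."""
    text = raw.strip()
    if answer_type == "comma_separated_list":
        # Order-insensitive, case-insensitive set comparison for list answers.
        return frozenset(p.strip().lower() for p in text.split(",") if p.strip())
    if answer_type == "integer":
        return int(text)
    # enum_label / short_text: case-insensitive exact match.
    return text.lower()

def is_correct(pred: str, gold: str, answer_type: str) -> bool:
    try:
        return normalize_answer(pred, answer_type) == normalize_answer(gold, answer_type)
    except ValueError:  # e.g. non-numeric text for an integer answer
        return False
```

For enum-type answers, `answer_allowed` can additionally be used to reject out-of-vocabulary predictions before comparison.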

## Construction

The benchmark is constructed via expert-rule-guided programmatic generation from authoritative sources:

  • Domain experts encode question-generation rules and gold-answer semantics into Python programs.
  • Gold answers are extracted deterministically from authoritative source artifacts (KiCad CLI output, Qucs native schematic graph, ngspice simulation outputs).
  • LLMs are deliberately excluded from the gold-answer path; they are used only as an auxiliary RF-relevance gate at the page level.
  • An iterative rule-refinement loop catches edge cases during construction; the released gold answers reflect the latest revisions.

This avoids the gold-answer noise floor of LLM-as-Generator benchmarks while scaling beyond purely human-curated efforts.
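The "expert rule" pattern described above can be sketched as follows. This is a hypothetical toy rule, not one of the benchmark's actual generators: it takes a deterministic parse of a schematic (here a plain dict standing in for KiCad CLI / Qucs graph output) and emits a question together with a programmatically extracted gold answer.

```python
def rule_count_components(schematic: dict, prefix: str) -> dict:
    """Toy expert rule: count components whose reference starts with `prefix`."""
    refs = [c["ref"] for c in schematic["components"] if c["ref"].startswith(prefix)]
    return {
        "question": f"How many components with reference prefix '{prefix}' "
                    "are visible in the schematic?",
        "answer_type": "integer",
        "answer": str(len(refs)),  # gold answer derived deterministically
    }

# Hypothetical parsed schematic: two resistors and one capacitor.
schematic = {"components": [{"ref": "R1"}, {"ref": "R2"}, {"ref": "C1"}]}
q = rule_count_components(schematic, "R")
print(q["answer"])  # "2"
```

Because the gold answer is computed from the source artifact rather than generated by a model, answer noise reduces to parser and rule bugs, which the iterative refinement loop targets.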

## License

This dataset is released under a two-tier license model because the upstream sources have heterogeneous licenses:

  • permissive config (recommended default): all rows under CC-BY-4.0. Compatible with commercial use, redistribution, and derivative works subject to attribution.
  • nc_allowed config: includes one source class (m17 digital-radio community hardware, 90 questions) which is upstream-licensed under CC-BY-NC-SA-4.0 (NonCommercial-ShareAlike). Per-row license field marks affected items. Users must respect NC + ShareAlike for those rows.

Per-source licensing summary:

| Source class | Upstream license profile | Tier inclusion |
|---|---|---|
| `qucs` | GPL-2.0 example schematics (treated as derivative-work CC-BY-4.0 for image renderings) | both |
| `kicad` | mostly MIT / Apache-2.0 / GPL-3.0 mix | both |
| `myriadrf` | mostly Apache-2.0 / CC-BY-4.0 | both |
| `oresat` | CERN-OHL-S-2.0 (treated as share-alike-compatible CC-BY-4.0 for renderings) | both |
| `m17` | CC-BY-NC-SA-4.0 ⚠ NC | `nc_allowed` only |

For redistribution that requires fully permissive licensing, use only the permissive config.
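When working from the full `nc_allowed` config, the per-row `license` field is sufficient to recover the permissive subset locally. A minimal sketch over hypothetical rows:

```python
def permissive_only(rows):
    """Keep only rows whose per-row license permits commercial use."""
    return [r for r in rows if r["license"] == "CC-BY-4.0"]

# Hypothetical rows; in practice these come from the `nc_allowed` config.
rows = [
    {"question_id": "q1", "license": "CC-BY-4.0"},
    {"question_id": "q2", "license": "CC-BY-NC-SA-4.0"},  # m17 source class
]
kept = permissive_only(rows)
```

With real rows, the same predicate can be passed to `Dataset.filter`.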

## Limitations

  1. Source-class size imbalance: question counts per source class span 40–974; per-source claims should be reported with N.
  2. Dynamic Reasoning scope: only one source class has the simulation-grounded subset (55 questions). This dimension is reported as a small stress test, not the main result.
  3. Language: questions are evaluated in English. (A Chinese parallel set was used internally during construction for human review but is not part of the released schema.)
  4. Single-image protocol: each question is paired with one primary schematic image (Dynamic Reasoning rows additionally provide context plots / option plots).
  5. Anonymized release: this submission account is for double-blind peer review. The dataset will be transferred to the official maintainer account upon acceptance.
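Given limitation 1, per-source results should carry their N alongside accuracy. A small sketch of that aggregation, over hypothetical `(source, correct)` result pairs:

```python
from collections import defaultdict

def per_source_accuracy(results):
    """Aggregate (source, correct) pairs into per-source accuracy with N."""
    tally = defaultdict(lambda: [0, 0])  # source -> [correct, total]
    for source, correct in results:
        tally[source][0] += int(correct)
        tally[source][1] += 1
    return {s: {"acc": c / n, "N": n} for s, (c, n) in tally.items()}

# Toy results; real source classes span 40-974 questions each.
results = [("qucs", True), ("qucs", False), ("m17", True)]
print(per_source_accuracy(results))
```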

## Citation

```bibtex
@misc{rfschembench2026,
  title  = {RFSchemBench: A Multi-Source, Hierarchically-Structured Multimodal Benchmark for RF Circuit Schematic Understanding},
  author = {Anonymous},
  year   = {2026},
  note   = {Submitted to NeurIPS 2026 Evaluations \& Datasets Track}
}
```

## Contact

For benchmark integrity issues (gold-answer corrections, RF-gate disputes, parser / scorer concerns), please open a Discussion on this dataset's Hugging Face page. During the double-blind review window, identifying contact details are intentionally withheld.