# Text2CAD-Bench
Text2CAD-Bench is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.
## News
- [2026.02] v1.0 released with 30% of prompts as a public preview
- [Coming Soon] v1.1 will include additional evaluation scripts and expanded documentation
## Overview
Text2CAD-Bench comprises 600 human-curated examples organized into four benchmark levels:
| Level | Description | Examples | Key Features |
|---|---|---|---|
| L1 | Basic | 200 | Primitives, simple spatial relationships |
| L2 | Intermediate | 200 | Boolean operations, chamfer, fillet, patterns |
| L3 | Advanced | 100 | Sweep, loft, shell, complex surfaces |
| L4 | Real-world | 100 | Multi-domain applications |
Each example includes dual-style prompts:
- Geometric (Geo): Appearance-based descriptions mimicking non-expert users
- Sequence (Seq): Procedural descriptions aligned with expert-level CAD conventions
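As an illustration, one example's two prompt styles might look like the record below. All wording here is invented for illustration; it is not actual benchmark data.

```python
# Hypothetical record showing the dual-style prompts for one example.
# The "geo"/"seq" text is invented for illustration, not taken from the benchmark.
example = {
    "id": "L1_001",
    "geo": "A flat square plate with a round hole through its center.",
    "seq": "Sketch a 40x40 mm square, extrude it 5 mm, then cut a 10 mm "
           "diameter hole through the center of the top face.",
}
```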
## Dataset Structure
```
Text2CAD-Bench/
├── prompts/              # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/           # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/             # Example outputs
│   └── visualizations/
└── README.md
```
⚠️ Note: Ground-truth STEP files are not publicly released, to prevent benchmark contamination. The 30% prompt sample is provided to demonstrate the data distribution and format. For full benchmark access, please contact us.
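A minimal sketch for iterating over the preview prompts, assuming the layout and file naming shown in the tree above (the actual release naming may include file extensions or differ slightly):

```python
from pathlib import Path

def load_prompts(root: str, level: str = "L1", style: str = "geo"):
    """Collect prompt files for one benchmark level and prompt style.

    Assumes the layout prompts/<level>/<level>_<id>_<style>, as shown in
    the directory tree; the naming is inferred and may differ on release.
    """
    prompt_dir = Path(root) / "prompts" / level
    # Sort so L1_001 comes before L1_002, etc.
    return sorted(prompt_dir.glob(f"{level}_*_{style}*"))
```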
## Leaderboard

Interactive leaderboard: see the leaderboard page for results sortable by different metrics.

Final results are weighted by sample count: L1 (200 examples, 40%), L2 (200, 40%), L3 (100, 20%).
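The weighting above is a sample-count-weighted average over the scored levels; a minimal sketch:

```python
def weighted_final(scores: dict) -> float:
    """Combine per-level scores using the sample-count weights above.

    `scores` maps level name to that level's metric value, e.g.
    {"L1": 63.2, "L2": 70.1, "L3": 88.4}.
    """
    counts = {"L1": 200, "L2": 200, "L3": 100}  # samples per level
    total = sum(counts.values())                # 500
    # 200/500 = 40%, 200/500 = 40%, 100/500 = 20%
    return sum(scores[lvl] * n / total for lvl, n in counts.items())
```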
### General-purpose LLMs (sorted by CD ↓)
| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|---|---|---|---|---|
| 🥇 | GPT-5.2 | 63.97 | 30.6% | 0.45 |
| 🥈 | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43 |
| 🥉 | DeepSeek-V3.2 | 76.25 | 29.7% | 0.37 |
| 4 | MiniMax M2.11 | 83.16 | 42.7% | 0.37 |
| 5 | GLM-4.7 | 84.98 | 35.0% | 0.34 |
| 6 | Qwen3-max | 99.21 | 43.2% | 0.28 |
### Domain-specific Models (sorted by CD ↓)
| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|---|---|---|---|---|
| 🥇 | CADFusion | 224.35 | 60.5% | 0.03 |
| 🥈 | Text2CAD | 248.66 | 7.0% | 0.05 |
| 🥉 | Text2CADQuery | 250.27 | 51.0% | 0.04 |
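Assuming CD in the tables denotes a symmetric Chamfer Distance between point clouds sampled from the predicted and ground-truth models (the benchmark's exact definition, sampling density, and scaling live in `evaluation/metrics.py`), a minimal sketch:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two (N, 3) point clouds.

    For each point in one cloud, take the distance to its nearest
    neighbor in the other cloud; average each direction and sum.
    """
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```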
## Quick Start

### Installation

```shell
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```
### Evaluation

```python
from evaluation import evaluate

# Score your model outputs against the benchmark metrics
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)
print(results.summary())
```
### Submit to Leaderboard

To submit your results to the leaderboard:

1. Run evaluation on the full benchmark by uploading your model.
2. Generate a results file using our evaluation script.
3. Submit via Google Form or email.

```shell
python evaluation/generate_submission.py \
    --predictions_dir path/to/outputs \
    --output submission.json
```
## License

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

You are free to:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:
- Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made.
## Contact
- Email:
- Issues: Please use GitHub Issues for bug reports and feature requests
- Full benchmark access: Contact us with your affiliation and intended use
## Acknowledgements
We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.
Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation