# Text2CAD-Bench 🏭

[![Version](https://img.shields.io/badge/version-v1.0-blue.svg)](https://github.com/xxx/Text2CAD-Bench) [![License](https://img.shields.io/badge/license-CC%20BY%204.0-green.svg)](https://creativecommons.org/licenses/by/4.0/)

**Text2CAD-Bench** is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.

## 📢 News

- **[2026.02]** 🎉 v1.0 released with a 30% prompt preview
- **[Coming Soon]** v1.1 will include additional evaluation scripts and expanded documentation

## 📖 Overview

Text2CAD-Bench comprises **600 human-curated examples** organized into four benchmark levels:

| Level | Description | Examples | Key Features |
|-------|-------------|----------|--------------|
| **L1** | Basic | 200 | Primitives, simple spatial relationships |
| **L2** | Intermediate | 200 | Boolean operations, chamfer, fillet, patterns |
| **L3** | Advanced | 100 | Sweep, loft, shell, complex surfaces |
| **L4** | Real-world | 100 | Multi-domain applications |

Each example includes **dual-style prompts**:

- **Geometric (Geo)**: Appearance-based descriptions mimicking non-expert users
- **Sequence (Seq)**: Procedural descriptions aligned with expert-level CAD conventions

## 📁 Dataset Structure

```
Text2CAD-Bench/
├── prompts/              # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/           # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/             # Example outputs
│   └── visualizations/
└── README.md
```

> ⚠️ **Note**: Ground-truth STEP files are not publicly released to prevent benchmark contamination. The 30% prompt sample is provided to demonstrate the data distribution and format. For full benchmark access, please contact us.

## 🏆 Leaderboard

> 📊 **Interactive Leaderboard**: See the [leaderboard](leaderboard.html) for results sortable by metric.

Final results are **weighted by sample count**: L1 (200, 40%), L2 (200, 40%), L3 (100, 20%).
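The sample-count weighting above amounts to a weighted mean over per-level scores. A minimal sketch (the per-level scores here are hypothetical placeholders, not real results):

```python
def weighted_score(level_scores, level_counts):
    """Aggregate per-level metric scores, weighted by sample count."""
    total = sum(level_counts.values())
    return sum(
        level_scores[lvl] * level_counts[lvl] / total for lvl in level_scores
    )

counts = {"L1": 200, "L2": 200, "L3": 100}      # 40% / 40% / 20%
scores = {"L1": 50.0, "L2": 70.0, "L3": 90.0}   # hypothetical per-level CD values
print(weighted_score(scores, counts))            # 0.4*50 + 0.4*70 + 0.2*90 = 66.0
```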
### General-purpose LLMs (Sorted by CD ↓)

| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 🥇 | GPT-5.2 | **63.97** | 30.6% | **0.45** |
| 🥈 | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43 |
| 🥉 | DeepSeek-V3.2 | 76.25 | **29.7%** | 0.37 |
| 4 | MiniMax M2.11 | 83.16 | 42.7% | 0.37 |
| 5 | GLM-4.7 | 84.98 | 35.0% | 0.34 |
| 6 | Qwen3-max | 99.21 | 43.2% | 0.28 |

### Domain-specific Models (Sorted by CD ↓)

| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 🥇 | CADFusion | **224.35** | 60.5% | 0.03 |
| 🥈 | Text2CAD | 248.66 | **7.0%** | 0.05 |
| 🥉 | Text2CADQuery | 250.27 | 51.0% | 0.04 |

## 🚀 Quick Start

### Installation

```bash
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```

### Evaluation

```python
from evaluation import evaluate

# Evaluate your model outputs against the benchmark
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)
print(results.summary())
```

### Submit to Leaderboard

To submit your results to the leaderboard:

1. Run evaluation on the full benchmark by uploading your model outputs.
2. Generate a results file using our evaluation script.
3. Submit via [Google Form](https://forms.google.com/xxx) or email.

```bash
python evaluation/generate_submission.py \
    --predictions_dir path/to/outputs \
    --output submission.json
```

## 📜 License

This work is licensed under a [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

You are free to:

- **Share** — copy and redistribute the material in any medium or format
- **Adapt** — remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:

- **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
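For intuition about the Chamfer Distance (CD) metric used in the leaderboards, here is a generic point-cloud formulation; the benchmark's actual implementation lives in `evaluation/metrics.py` and may differ in sampling density, normalization, and scale:

```python
import math

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two point sets.

    For each point in one set, find the squared distance to its nearest
    neighbor in the other set; average per set, then sum both directions.
    """
    def one_way(src, dst):
        return sum(min(math.dist(p, q) ** 2 for q in dst) for p in src) / len(src)

    return one_way(a, b) + one_way(b, a)

# Two single-point clouds one unit apart: 1.0^2 in each direction.
print(chamfer_distance([(0, 0, 0)], [(1, 0, 0)]))  # → 2.0
```

In practice CD is computed over thousands of points sampled from the predicted and ground-truth surfaces, with nearest-neighbor search accelerated by a KD-tree rather than this O(n·m) scan.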
## 📧 Contact

- **Email**:
- **Issues**: Please use GitHub Issues for bug reports and feature requests
- **Full benchmark access**: Contact us with your affiliation and intended use

## 🙏 Acknowledgements

We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.

---

Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation