# Text2CAD-Bench
[Repository](https://github.com/xxx/Text2CAD-Bench)
[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
**Text2CAD-Bench** is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.
## News
- **[2026.02]** v1.0 released with a 30% preview of the prompts
- **[Coming Soon]** v1.1 will include additional evaluation scripts and expanded documentation
## Overview
Text2CAD-Bench comprises **600 human-curated examples** organized into four benchmark levels:
| Level | Description | Examples | Key Features |
|-------|-------------|----------|--------------|
| **L1** | Basic | 200 | Primitives, simple spatial relationships |
| **L2** | Intermediate | 200 | Boolean operations, chamfer, fillet, patterns |
| **L3** | Advanced | 100 | Sweep, loft, shell, complex surfaces |
| **L4** | Real-world | 100 | Multi-domain applications |
Each example includes **dual-style prompts**:
- **Geometric (Geo)**: Appearance-based descriptions mimicking non-expert users
- **Sequence (Seq)**: Procedural descriptions aligned with expert-level CAD conventions
## Dataset Structure
```
Text2CAD-Bench/
├── prompts/              # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/           # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/             # Example outputs
│   └── visualizations/
└── README.md
```
> ⚠️ **Note**: Ground truth STEP files are not publicly released to prevent benchmark contamination. The 30% prompt samples are provided to demonstrate data distribution and format. For full benchmark access, please contact us.
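Given that layout, the released prompt files can be enumerated programmatically. A minimal sketch, assuming the `L{level}_{id}_{style}` file naming shown in the tree above (`load_prompts` is an illustrative helper, not part of the released scripts):

```python
from pathlib import Path


def load_prompts(root: str, level: str) -> dict:
    """Map (example_id, style) -> prompt text for one benchmark level.

    Assumes file names like "L1_001_geo" / "L1_001_seq", as shown in the
    dataset tree; this helper is illustrative, not a released script.
    """
    prompts = {}
    for path in sorted(Path(root, level).glob(f"{level}_*")):
        # e.g. "L1_001_geo" -> ("L1", "001", "geo")
        _, example_id, style = path.name.split("_")
        prompts[(example_id, style)] = path.read_text().strip()
    return prompts
```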
## Leaderboard
> **Interactive Leaderboard**: See [leaderboard](leaderboard.html) for results sortable by metric.
Final results are **weighted by sample count**: L1 (200, 40%), L2 (200, 40%), L3 (100, 20%).
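That weighting is a plain sample-count average, which with the counts above reproduces the stated 40/40/20 split. A small sketch (the function name is illustrative):

```python
# Sample counts per leaderboard level, as stated above.
LEVEL_COUNTS = {"L1": 200, "L2": 200, "L3": 100}


def weighted_score(per_level: dict, counts: dict = LEVEL_COUNTS) -> float:
    """Sample-count-weighted average of per-level metric scores.

    With LEVEL_COUNTS this is 0.4 * L1 + 0.4 * L2 + 0.2 * L3.
    """
    total = sum(counts.values())
    return sum(per_level[level] * n / total for level, n in counts.items())
```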
### General-purpose LLMs (sorted by CD ↓)
| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 1 | GPT-5.2 | **63.97** | 30.6% | **0.45** |
| 2 | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43 |
| 3 | DeepSeek-V3.2 | 76.25 | **29.7%** | 0.37 |
| 4 | MiniMax M2.11 | 83.16 | 42.7% | 0.37 |
| 5 | GLM-4.7 | 84.98 | 35.0% | 0.34 |
| 6 | Qwen3-max | 99.21 | 43.2% | 0.28 |
### Domain-specific Models (sorted by CD ↓)
| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 1 | CADFusion | **224.35** | 60.5% | 0.03 |
| 2 | Text2CAD | 248.66 | **7.0%** | 0.05 |
| 3 | Text2CADQuery | 250.27 | 51.0% | 0.04 |
## Quick Start
### Installation
```bash
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```
### Evaluation
```python
from evaluation import evaluate

# Load your model outputs
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)
print(results.summary())
```
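Of the three metrics, CD is presumably Chamfer distance between point sets sampled from the predicted and ground-truth models (lower is better). A minimal pure-Python sketch of symmetric Chamfer distance; the exact sampling, scaling, and normalization used by `evaluation/metrics.py` may differ:

```python
def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two 3-D point lists (lower is better).

    Illustrative only; the benchmark's own metric implementation may
    normalize or sample differently.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Mean nearest-neighbour squared distance, computed in both directions.
    fwd = sum(min(sq_dist(p, q) for q in gt) for p in pred) / len(pred)
    bwd = sum(min(sq_dist(q, p) for p in pred) for q in gt) / len(gt)
    return fwd + bwd
```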
### Submit to Leaderboard
To submit your results to the leaderboard:
1. Run evaluation on the full benchmark by uploading your model.
2. Generate a results file using our evaluation script.
3. Submit it via [Google Form](https://forms.google.com/xxx) or email.
```bash
python evaluation/generate_submission.py \
--predictions_dir path/to/outputs \
--output submission.json
```
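The submission artifact is a JSON file; a hypothetical sketch of assembling one by hand is below. The `model`/`results` field names are assumptions for illustration, not the official schema emitted by `generate_submission.py`:

```python
import json


def write_submission(model_name, per_example_metrics, path="submission.json"):
    """Serialize per-example metrics to a JSON submission file.

    Field names here are illustrative assumptions, not the official schema.
    """
    payload = {"model": model_name, "results": per_example_metrics}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=2)
    return payload
```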
## License
This work is licensed under a [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
You are free to:
- **Share** β copy and redistribute the material in any medium or format
- **Adapt** β remix, transform, and build upon the material for any purpose, even commercially
Under the following terms:
- **Attribution** β You must give appropriate credit, provide a link to the license, and indicate if changes were made.
## Contact
- **Email**:
- **Issues**: Please use GitHub Issues for bug reports and feature requests
- **Full benchmark access**: Contact us with your affiliation and intended use
## Acknowledgements
We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.
---
<p align="center">
<i>Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation</i>
</p>