# Text2CAD-Bench

[](https://github.com/xxx/Text2CAD-Bench)
[](https://creativecommons.org/licenses/by/4.0/)

**Text2CAD-Bench** is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.

## News

- **[2026.02]** v1.0 released with a 30% preview of the prompts
- **[Coming Soon]** v1.1 will include additional evaluation scripts and expanded documentation

## Overview

Text2CAD-Bench comprises **600 human-curated examples** organized into four benchmark levels:

| Level | Description | Examples | Key Features |
|-------|-------------|----------|--------------|
| **L1** | Basic | 200 | Primitives, simple spatial relationships |
| **L2** | Intermediate | 200 | Boolean operations, chamfer, fillet, patterns |
| **L3** | Advanced | 100 | Sweep, loft, shell, complex surfaces |
| **L4** | Real-world | 100 | Multi-domain applications |

Each example includes **dual-style prompts**:

- **Geometric (Geo)**: appearance-based descriptions mimicking non-expert users
- **Sequence (Seq)**: procedural descriptions aligned with expert-level CAD conventions

## Dataset Structure

```
Text2CAD-Bench/
├── prompts/          # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/       # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/         # Example outputs
│   └── visualizations/
└── README.md
```

> ⚠️ **Note**: Ground-truth STEP files are not publicly released, to prevent benchmark contamination. The 30% prompt samples are provided to demonstrate the data distribution and format. For full benchmark access, please contact us.
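Given the layout above, the preview prompts can be loaded with a few lines of Python. This is a sketch, not part of the benchmark's tooling: it assumes the prompt files are plain text and named `<level>_<id>_<style>` as shown in the tree.

```python
from pathlib import Path

# Prompt files are named <level>_<id>_<style>, e.g. L1_001_geo (per the tree above).
def parse_prompt_name(name: str) -> dict[str, str]:
    level, idx, style = name.split("_")
    return {"level": level, "id": idx, "style": style}

# Load the preview prompts grouped by level (only runs if the repo is checked out).
root = Path("Text2CAD-Bench/prompts")
if root.exists():
    by_level: dict[str, list[str]] = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            meta = parse_prompt_name(path.name)
            by_level.setdefault(meta["level"], []).append(path.read_text(encoding="utf-8"))
```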

## Leaderboard

> **Interactive Leaderboard**: see the [leaderboard](leaderboard.html) for results sortable by metric.

Final results are **weighted by sample count**: L1 (200, 40%), L2 (200, 40%), L3 (100, 20%).
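That weighting is just a sample-weighted average of the per-level scores. A minimal sketch (the function name and call shape are illustrative, not the benchmark's API):

```python
# Sample-weighted average of per-level scores (illustrative, not the official script).
def weighted_score(per_level: dict[str, float]) -> float:
    counts = {"L1": 200, "L2": 200, "L3": 100}
    total = sum(counts.values())  # 500
    return sum(per_level[lvl] * n for lvl, n in counts.items()) / total

# Example with hypothetical per-level Chamfer Distance values:
cd = {"L1": 50.0, "L2": 70.0, "L3": 90.0}
print(weighted_score(cd))  # 0.4*50 + 0.4*70 + 0.2*90 = 66.0
```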

### General-purpose LLMs (Sorted by CD ↓)

| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 🥇 | GPT-5.2 | **63.97** | 30.6% | **0.45** |
| 🥈 | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43 |
| 🥉 | DeepSeek-V3.2 | 76.25 | **29.7%** | 0.37 |
| 4 | MiniMax M2.11 | 83.16 | 42.7% | 0.37 |
| 5 | GLM-4.7 | 84.98 | 35.0% | 0.34 |
| 6 | Qwen3-max | 99.21 | 43.2% | 0.28 |

### Domain-specific Models (Sorted by CD ↓)

| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 🥇 | CADFusion | **224.35** | 60.5% | 0.03 |
| 🥈 | Text2CAD | 248.66 | **7.0%** | 0.05 |
| 🥉 | Text2CADQuery | 250.27 | 51.0% | 0.04 |
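For reference, the Chamfer Distance (CD) reported above is, in its generic form, a symmetric nearest-neighbor distance between sampled point sets. A plain-Python sketch of that generic definition follows; the benchmark's `evaluation/metrics.py` may sample, normalize, or scale differently.

```python
# Symmetric Chamfer Distance between two point sets (generic definition;
# the official metrics script may normalize or scale differently).
def chamfer_distance(a, b):
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return a_to_b + b_to_a

print(chamfer_distance([(0.0, 0.0)], [(3.0, 4.0)]))  # 25 + 25 = 50.0
```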

## Quick Start

### Installation

```bash
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```

### Evaluation

```python
from evaluation import evaluate

# Load your model outputs and compute all three metrics
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)

print(results.summary())
```

### Submit to Leaderboard

To submit your results to the leaderboard:

1. Run evaluation on the full benchmark by uploading your model.
2. Generate a results file using our evaluation script.
3. Submit it via the [Google Form](https://forms.google.com/xxx) or by email.

```bash
python evaluation/generate_submission.py \
    --predictions_dir path/to/outputs \
    --output submission.json
```

## License

This work is licensed under a [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

You are free to:

- **Share** — copy and redistribute the material in any medium or format
- **Adapt** — remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:

- **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made.

## Contact

- **Email**:
- **Issues**: Please use GitHub Issues for bug reports and feature requests.
- **Full benchmark access**: Contact us with your affiliation and intended use.

## Acknowledgements

We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.

---

<p align="center">
  <i>Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation</i>
</p>