# Text2CAD-Bench 🏭

[![Version](https://img.shields.io/badge/version-v1.0-blue.svg)](https://github.com/xxx/Text2CAD-Bench)
[![License](https://img.shields.io/badge/license-CC%20BY%204.0-green.svg)](https://creativecommons.org/licenses/by/4.0/)

**Text2CAD-Bench** is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.
## 📢 News

- **[2026.02]** 🎉 v1.0 released with a 30% sample of prompts for preview
- **[Coming Soon]** v1.1 will include additional evaluation scripts and expanded documentation
## 📖 Overview

Text2CAD-Bench comprises **600 human-curated examples** organized into four benchmark levels:

| Level | Description | Examples | Key Features |
|-------|-------------|----------|--------------|
| **L1** | Basic | 200 | Primitives, simple spatial relationships |
| **L2** | Intermediate | 200 | Boolean operations, chamfer, fillet, patterns |
| **L3** | Advanced | 100 | Sweep, loft, shell, complex surfaces |
| **L4** | Real-world | 100 | Multi-domain applications |

Each example includes **dual-style prompts**:
- **Geometric (Geo)**: appearance-based descriptions mimicking non-expert users
- **Sequence (Seq)**: procedural descriptions aligned with expert-level CAD conventions
## 📁 Dataset Structure

```
Text2CAD-Bench/
├── prompts/                  # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/               # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/                 # Example outputs
│   └── visualizations/
└── README.md
```

> ⚠️ **Note**: Ground-truth STEP files are not publicly released, to prevent benchmark contamination. The 30% prompt sample is provided to demonstrate the data distribution and format. For full benchmark access, please contact us.
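Given the layout above, the preview prompts can be collected with a few lines of Python. This is a minimal sketch, assuming the prompt files are plain text named `<level>_<index>_<style>` as shown in the tree; `load_prompts` is an illustrative helper, not part of the released evaluation scripts.

```python
from pathlib import Path

def load_prompts(root="Text2CAD-Bench/prompts"):
    """Map (level, example_id, style) -> prompt text, walking the tree above."""
    prompts = {}
    for path in sorted(Path(root).glob("L*/[!.]*")):
        # File names follow <level>_<index>_<style>, e.g. L1_001_geo
        level, index, style = path.name.split("_")
        prompts[(level, index, style)] = path.read_text(encoding="utf-8")
    return prompts
```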
## 🏆 Leaderboard

> 📊 **Interactive leaderboard**: see [leaderboard](leaderboard.html) for results sortable by each metric.

Final leaderboard scores are **weighted by sample count** across the scored levels: L1 (200 examples, 40%), L2 (200, 40%), L3 (100, 20%).
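The weighting above is simply a sample-count-weighted mean over the three scored levels. A minimal sketch (the `weighted_score` helper is illustrative, not part of the benchmark code):

```python
LEVEL_COUNTS = {"L1": 200, "L2": 200, "L3": 100}  # scored leaderboard levels

def weighted_score(per_level, counts=LEVEL_COUNTS):
    """Sample-count-weighted mean: L1 and L2 each contribute 40%, L3 20%."""
    total = sum(counts.values())
    return sum(per_level[level] * n for level, n in counts.items()) / total
```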
### General-purpose LLMs (sorted by CD ↓)

| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 🥇 | GPT-5.2 | **63.97** | 30.6% | **0.45** |
| 🥈 | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43 |
| 🥉 | DeepSeek-V3.2 | 76.25 | **29.7%** | 0.37 |
| 4 | MiniMax M2.11 | 83.16 | 42.7% | 0.37 |
| 5 | GLM-4.7 | 84.98 | 35.0% | 0.34 |
| 6 | Qwen3-max | 99.21 | 43.2% | 0.28 |

### Domain-specific Models (sorted by CD ↓)

| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|:----:|-------|-----:|-----:|------:|
| 🥇 | CADFusion | **224.35** | 60.5% | 0.03 |
| 🥈 | Text2CAD | 248.66 | **7.0%** | 0.05 |
| 🥉 | Text2CADQuery | 250.27 | 51.0% | 0.04 |
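For intuition about the headline metric: CD is assumed here to be the symmetric Chamfer Distance between point sets sampled from the predicted and ground-truth models (the authoritative definitions live in `evaluation/metrics.py`). A naive O(n·m) sketch, where the function name and exact normalization are assumptions:

```python
import math

def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance between two point sets (sequences of
    coordinate tuples): mean nearest-neighbour distance from pred to gt,
    plus the same from gt to pred. Lower is better."""
    def one_way(a, b):
        return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    return one_way(pred, gt) + one_way(gt, pred)
```

Real implementations typically sample thousands of surface points and use a KD-tree for the nearest-neighbour queries; some variants use squared distances, which changes the absolute scale of the scores.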
## 🚀 Quick Start

### Installation

```bash
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```

### Evaluation

```python
from evaluation import evaluate

# Score your model outputs against the benchmark
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)

print(results.summary())
```

### Submit to the Leaderboard

To submit your results to the leaderboard:

1. Run evaluation on the full benchmark by uploading your model
2. Generate a results file with our evaluation script
3. Submit it via the [Google Form](https://forms.google.com/xxx) or by email

```bash
python evaluation/generate_submission.py \
    --predictions_dir path/to/outputs \
    --output submission.json
```
## 📜 License

This work is licensed under a [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

You are free to:
- **Share** — copy and redistribute the material in any medium or format
- **Adapt** — remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:
- **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
## 📧 Contact

- **Email**:
- **Issues**: Please use GitHub Issues for bug reports and feature requests
- **Full benchmark access**: Contact us with your affiliation and intended use

## 🙏 Acknowledgements

We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.

---

<p align="center">
  <i>Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation</i>
</p>