
# Text2CAD-Bench 🏭


Text2CAD-Bench is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.

## 📢 News

- **[2026.02]** 🎉 v1.0 released with a 30% preview of the prompts
- **[Coming Soon]** v1.1 will include additional evaluation scripts and expanded documentation

## 📖 Overview

Text2CAD-Bench comprises 600 human-curated examples organized into four benchmark levels:

| Level | Description  | Examples | Key Features                                  |
|-------|--------------|----------|-----------------------------------------------|
| L1    | Basic        | 200      | Primitives, simple spatial relationships      |
| L2    | Intermediate | 200      | Boolean operations, chamfer, fillet, patterns |
| L3    | Advanced     | 100      | Sweep, loft, shell, complex surfaces          |
| L4    | Real-world   | 100      | Multi-domain applications                     |

Each example includes dual-style prompts:

- **Geometric (Geo):** appearance-based descriptions mimicking non-expert users
- **Sequence (Seq):** procedural descriptions aligned with expert-level CAD conventions
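
For illustration only, a pair of prompts describing the same simple part might look like the following. These strings are invented for this README and are not drawn from the benchmark:

```python
# Hypothetical prompt pair for one part (invented for illustration,
# not an actual benchmark example).
geo_prompt = "A flat rectangular plate with a single round hole through its center."
seq_prompt = (
    "Sketch a 40 x 20 mm rectangle on the XY plane, extrude it 5 mm, "
    "then cut a 6 mm diameter through-hole at the center of the top face."
)
```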

πŸ“ Dataset Structure

```
Text2CAD-Bench/
├── prompts/                    # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/                 # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/                   # Example outputs
│   └── visualizations/
└── README.md
```

> ⚠️ **Note:** Ground truth STEP files are not publicly released to prevent benchmark contamination. The 30% prompt samples are provided to demonstrate data distribution and format. For full benchmark access, please contact us.
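
A minimal sketch for loading the preview prompts, assuming each prompt is a plain-text file named `<level>_<id>_<style>` with no extension, as the tree above suggests:

```python
from pathlib import Path

# Collect preview prompts grouped by (level, style), e.g. ("L1", "geo").
# Assumes plain-text files named like L1_001_geo under prompts/<level>/.
prompts = {}
for path in sorted(Path("prompts").rglob("L*_*_*")):
    level, _idx, style = path.name.split("_")
    prompts.setdefault((level, style), []).append(path.read_text())

print(len(prompts.get(("L1", "geo"), [])), "L1 geometric prompts loaded")
```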

πŸ† Leaderboard

📊 **Interactive Leaderboard:** see the leaderboard page for results sortable by each metric.

Final scores are sample-count-weighted averages: L1 (200 examples, 40%), L2 (200, 40%), L3 (100, 20%).
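
Concretely, each overall score is the weighted mean of the per-level scores. A quick sketch for CD, with made-up per-level values:

```python
# Weights follow the stated sample counts: 200/500, 200/500, 100/500.
weights = {"L1": 0.4, "L2": 0.4, "L3": 0.2}
per_level_cd = {"L1": 50.0, "L2": 70.0, "L3": 90.0}  # hypothetical values

overall_cd = sum(weights[lvl] * per_level_cd[lvl] for lvl in weights)
print(overall_cd)  # 0.4*50 + 0.4*70 + 0.2*90 = 66.0
```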

### General-purpose LLMs (Sorted by CD ↓)

| Rank | Model             | CD ↓  | IR ↓  | IoU ↑ |
|------|-------------------|-------|-------|-------|
| 🥇   | GPT-5.2           | 63.97 | 30.6% | 0.45  |
| 🥈   | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43  |
| 🥉   | DeepSeek-V3.2     | 76.25 | 29.7% | 0.37  |
| 4    | MiniMax M2.11     | 83.16 | 42.7% | 0.37  |
| 5    | GLM-4.7           | 84.98 | 35.0% | 0.34  |
| 6    | Qwen3-max         | 99.21 | 43.2% | 0.28  |

### Domain-specific Models (Sorted by CD ↓)

| Rank | Model         | CD ↓   | IR ↓  | IoU ↑ |
|------|---------------|--------|-------|-------|
| 🥇   | CADFusion     | 224.35 | 60.5% | 0.03  |
| 🥈   | Text2CAD      | 248.66 | 7.0%  | 0.05  |
| 🥉   | Text2CADQuery | 250.27 | 51.0% | 0.04  |

## 🚀 Quick Start

### Installation

```bash
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```

### Evaluation

```python
from evaluation import evaluate

# Load your model outputs and score them against the benchmark
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)

print(results.summary())
```
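
The authoritative metric definitions live in `evaluation/metrics.py`. Purely for orientation, a common way to compute a symmetric Chamfer Distance (CD) between point clouds sampled from the predicted and ground-truth models looks like this; the benchmark's exact sampling, normalization, and scaling may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two (N, 3) point clouds."""
    d_pred, _ = cKDTree(gt_pts).query(pred_pts)   # nearest GT point per predicted point
    d_gt, _ = cKDTree(pred_pts).query(gt_pts)     # nearest predicted point per GT point
    return float(np.square(d_pred).mean() + np.square(d_gt).mean())
```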

### Submit to Leaderboard

To submit your results to the leaderboard:

1. Run evaluation on the full benchmark by uploading your model.
2. Generate a results file using our evaluation script:

   ```bash
   python evaluation/generate_submission.py \
       --predictions_dir path/to/outputs \
       --output submission.json
   ```

3. Submit via Google Form or email.
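
The schema of `submission.json` is defined by `generate_submission.py`. As a purely hypothetical illustration of the kind of content such a file might carry:

```python
import json

# Hypothetical structure only -- defer to generate_submission.py for the real schema.
submission = {
    "model_name": "your-model",
    "scores": {"CD": 0.0, "IR": 0.0, "IoU": 0.0},
}
with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```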

## 📜 License

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

You are free to:

- **Share** – copy and redistribute the material in any medium or format
- **Adapt** – remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:

- **Attribution** – You must give appropriate credit, provide a link to the license, and indicate if changes were made.

## 📧 Contact

- **Email:**
- **Issues:** Please use GitHub Issues for bug reports and feature requests
- **Full benchmark access:** Contact us with your affiliation and intended use

πŸ™ Acknowledgements

We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.


*Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation*