---
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: source_file
    dtype: string
  - name: question
    dtype: string
  - name: question_type
    dtype: string
  - name: question_id
    dtype: int32
  - name: answer
    dtype: string
  - name: answer_choices
    list: string
  - name: correct_choice_idx
    dtype: int32
  - name: image
    dtype: image
  - name: video
    dtype: video
  - name: media_type
    dtype: string
  splits:
  - name: test
    num_bytes: 187015546578
    num_examples: 102678
  download_size: 175022245655
  dataset_size: 187015546578
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: mit
task_categories:
- visual-question-answering
language:
- en
size_categories:
- 100K<n<1M
---

# OpenSeeSimE-Structural: Engineering Simulation Visual Question Answering Benchmark

## Dataset Summary

OpenSeeSimE-Structural is a large-scale benchmark for evaluating vision-language models on the interpretation of structural analysis simulations. It contains over 100,000 question-answer pairs drawn from parametrically varied structural simulations, covering stress analysis and deformation patterns.

## Purpose

While vision-language models (VLMs) have shown promise in general visual reasoning, their effectiveness on specialized engineering simulation interpretation remains largely unexplored. This benchmark enables:

- Statistically robust evaluation of VLM performance on engineering visualizations
- Assessment across multiple reasoning capabilities (captioning, reasoning, grounding, relationship understanding)
- Evaluation across different question types (binary classification, multiple-choice, spatial grounding)

## Dataset Composition

### Statistics
- **Total instances**: 102,678 question-answer pairs
- **Simulation types**: 5 structural models (Dog Bone, Hip Implant, Pressure Vessel, Thermal Beam, Wall Bracket)
- **Parametric variations**: 1,024 unique instances per base model (4^5 parameter combinations)
- **Question categories**: Captioning, Reasoning, Grounding, Relationship Understanding
- **Question types**: Binary, Multiple-choice, Spatial grounding
- **Media formats**: Both static images (1920×1440 PNG) and videos (originally extracted at 200 frames, 29 fps, 7 seconds)

### Simulation Parameters

Each base model varies across 5 parameters with 4 values each:

- **Dog Bone**: Length, Thickness, Diameter, Axial Load, Bending Load
- **Hip Implant**: Beam Length, Beam Diameter, Ball Diameter, Axial Load, Bending Load
- **Pressure Vessel**: Length, Thickness, Diameter, Material, Pressure
- **Thermal Beam**: Thickness, Bending Load, Young's Modulus, Tensile Yield Strength, Cross Section Shape
- **Wall Bracket**: Length, Width, Height, Thickness, Bending Force

### Question Distribution

- **Binary Classification**: 40% (yes/no questions about symmetry, stress types, uniformity, etc.)
- **Multiple-Choice**: 30% (4-option questions about deformation direction, stress dominance, magnitude ranges, etc.)
- **Spatial Grounding**: 30% (location-based questions with labeled regions A/B/C/D)

## Data Collection Process

### Simulation Generation
1. Base models sourced from Ansys Mechanical tutorial files
2. Parametric automation via the PyMechanical and PyGeometry interfaces
3. Systematic variation across 5 parameters with 4 linearly spaced values each
4. All simulations solved using finite element analysis with validated convergence settings
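
To make the sweep concrete, here is a minimal sketch of how the 4^5 = 1,024 cases per base model can be enumerated. The parameter names follow the Dog Bone model above, but the value ranges and the `run_case()` helper (standing in for a PyMechanical session) are hypothetical, not the actual generation script.

```python
import itertools
import numpy as np

# Four linearly spaced values per parameter; the ranges are illustrative only.
parameters = {
    "length":       np.linspace(60.0, 120.0, 4),    # mm
    "thickness":    np.linspace(2.0, 8.0, 4),       # mm
    "diameter":     np.linspace(6.0, 12.0, 4),      # mm
    "axial_load":   np.linspace(500.0, 2000.0, 4),  # N
    "bending_load": np.linspace(100.0, 400.0, 4),   # N
}

# Full factorial grid: 4^5 = 1,024 combinations per base model.
cases = [dict(zip(parameters, combo))
         for combo in itertools.product(*parameters.values())]
assert len(cases) == 4 ** 5

for case in cases:
    # run_case() is a hypothetical wrapper that would update the CAD
    # parameters via PyMechanical/PyGeometry, solve, and export results.
    # run_case(case)
    pass
```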

### Ground Truth Extraction
Automated extraction eliminates human annotation costs and ensures consistency:

- **Statistical Analysis**: Direct queries on result arrays (max, min, mean, std)
- **Distribution Analysis**: Threshold-based classification using the coefficient of variation
- **Physics-Based Classification**: Stress tensor analysis and mechanics principles
- **Spatial Localization**: Color-based region generation with computer vision algorithms

All ground truth is derived from numerical simulation results rather than from visual interpretation.
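
To illustrate the color-based localization step, the snippet below finds the centroid of the red (maximum-stress) band in a rendered contour image. It is a minimal OpenCV sketch of the idea, not the dataset's actual extraction code, and the HSV bounds are illustrative assumptions.

```python
import cv2
import numpy as np

def locate_max_stress_region(png_path: str) -> tuple[int, int]:
    """Return the (x, y) pixel centroid of the red (maximum-stress) band."""
    bgr = cv2.imread(png_path)
    if bgr is None:
        raise FileNotFoundError(png_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red occupies the low end of the hue wheel; these bounds are illustrative.
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("no red (maximum-stress) pixels found")
    return int(xs.mean()), int(ys.mean())
```

A ground-truth label would then follow by matching this centroid against the labeled regions (A/B/C/D) placed on the visualization.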

## Preprocessing and Data Format

### Image Processing
- Resolution: 1920×1440 pixels
- Format: PNG with lossless compression
- Standardized viewing orientations: front, back, left, right, top, bottom, isometric
- Consistent color mapping: rainbow gradients (red = maximum, blue = minimum)
- Automatic deformation scaling (1.5× relative to maximum dimension)

### Video Processing
- 200 frames at 29 fps (7 seconds duration)
- Maximum deformation at frame 100 (temporal midpoint)
- H.264 compression at 1920×1440 resolution
- Uniform frame sampling for model input (32 frames)
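
The 32-frame sampling reduces to evenly spaced index selection, as in the sketch below (NumPy-based; any video reader that yields frames by index would work).

```python
import numpy as np

def uniform_frame_indices(num_frames: int, num_samples: int = 32) -> np.ndarray:
    """Evenly spaced frame indices spanning the whole clip."""
    return np.linspace(0, num_frames - 1, num_samples).astype(int)

# For the 200-frame clips in this dataset: 32 indices from 0 through 199,
# which also straddle the peak-deformation frame at the temporal midpoint.
indices = uniform_frame_indices(200)
```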

### Data Fields
```python
{
    'file_name': str,            # Unique identifier
    'source_file': str,          # Base simulation model
    'question': str,             # Question text
    'question_type': str,        # 'Binary', 'Multiple Choice', or 'Spatial'
    'question_id': int,          # Question identifier (1-20)
    'answer': str,               # Ground truth answer
    'answer_choices': List[str], # Available options
    'correct_choice_idx': int,   # Index of correct answer
    'image': Image,              # PIL Image object (1920×1440)
    'video': Video,              # Video frames
    'media_type': str            # 'image' or 'video'
}
```
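
Assuming the standard Hugging Face `datasets` layout declared in the YAML header, records can be loaded and inspected as below. The repository ID is a placeholder, and streaming avoids downloading the full ~175 GB archive up front.

```python
from datasets import load_dataset

# "<org>/OpenSeeSimE-Structural" is a placeholder for the actual Hub repo ID.
ds = load_dataset("<org>/OpenSeeSimE-Structural", split="test", streaming=True)

for example in ds.take(3):
    print(example["media_type"], "|", example["question_type"])
    print("Q:", example["question"])
    print("A:", example["answer"])
```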

## Labels

All labels are automatically generated from the simulations' numerical results:

- **Binary questions**: "Yes" or "No"
- **Multiple-choice**: Single letter (A/B/C/D) or descriptive option
- **Spatial grounding**: Region label (A/B/C/D) corresponding to labeled visualization locations

Label generation employs domain-specific thresholds:
- Uniformity: CV ≤ 0.2 (20%)
- Symmetry: at least 60% of node pairs within 10% tolerance (structural)
- Spatial matching: 50-pixel separation for region placement
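
As an example of how these thresholds act, the uniformity rule is a single coefficient-of-variation test over the nodal result array; the sketch below applies the CV ≤ 0.2 cutoff to illustrative values.

```python
import numpy as np

def is_uniform(field: np.ndarray, cv_threshold: float = 0.2) -> bool:
    """'Yes' when the coefficient of variation (std/mean) is at most 20%."""
    return field.std() / field.mean() <= cv_threshold

nodal_stress = np.array([101.0, 98.5, 102.3, 99.8])  # illustrative values, MPa
print("Yes" if is_uniform(nodal_stress) else "No")   # -> Yes
```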

## Dataset Splits

- **Test split only**: 102,678 instances
- No train/validation splits provided (evaluation benchmark, not for model training)
- Representative sampling across all simulation types and question categories

## Intended Use

### Primary Use Cases
1. **Benchmark evaluation** of vision-language models on engineering simulation interpretation
2. **Capability assessment** across visual reasoning dimensions (captioning, spatial grounding, relationship understanding)
3. **Transfer learning analysis** from general-domain to specialized technical visual reasoning
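
A benchmark run then reduces to exact-match scoring of model outputs against the stored answers. The loop below is a minimal sketch; `predict` stands in for any VLM inference callable and is hypothetical, not part of the dataset.

```python
def evaluate(dataset, predict) -> float:
    """Exact-match accuracy over the benchmark.

    `predict(example)` is a hypothetical callable returning the model's
    answer string given the question, choices, and image/video media.
    """
    correct = total = 0
    for example in dataset:
        prediction = predict(example)
        correct += prediction.strip().lower() == example["answer"].strip().lower()
        total += 1
    return correct / total
```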

### Out-of-Scope Use
- Real-time engineering decision-making without expert validation
- Safety-critical applications without human oversight
- Generalization to simulation types beyond structural mechanics

## Limitations

### Technical Limitations
- **Objective tasks only**: Excludes subjective engineering judgments requiring domain expertise
- **Single physics domain**: Structural mechanics only (see OpenSeeSimE-Fluid for fluid dynamics)
- **Ansys-specific**: Visualizations generated using Ansys Mechanical rendering conventions
- **Static parameters**: Fixed material properties and boundary conditions per instance
- **2D visualizations**: All inputs are 2D projections of 3D simulations

### Known Biases
- **Color scheme dependency**: Questions exploit the default rainbow gradient conventions
- **Geometry bias**: The selected simulation types may not represent the full diversity of structural analysis applications
- **View orientation bias**: Standardized camera positions may not capture all critical simulation features

## Ethical Considerations

### Responsible Use
- Models evaluated on this benchmark should NOT be deployed for safety-critical engineering decisions without expert validation
- Automated interpretation should augment, not replace, human engineering expertise
- Users should verify that benchmark performance translates to their specific simulation contexts

### Data Privacy
- The simulations contain no proprietary or confidential engineering data
- No personal information was collected
- Publicly available tutorial files were used as base models

### Environmental Impact
- Dataset generation required significant CPU compute resources
- Consider the environmental cost of large-scale model evaluation on this benchmark

## License

MIT License: free for academic and commercial use with attribution

## Citation

If you use this dataset, please cite:

```bibtex
@article{ezemba2025openseesime,
  title={OpenSeeSimE: A Large-Scale Benchmark to Assess Vision-Language Model Question Answering Capabilities in Engineering Simulations},
  author={Ezemba, Jessica and Pohl, Jason and Tucker, Conrad and McComb, Christopher},
  year={2025}
}
```

## AI Usage Disclosure

### Dataset Generation
- **Simulation automation**: Python scripts with the Ansys PyMechanical interface
- **Ground truth extraction**: Automated computational protocols (no AI involvement)
- **Quality validation**: Expert oversight of automated extraction procedures
- **No generative AI** used in dataset creation, labeling, or curation

### Visualization Generation
- Ansys Mechanical rendering engine (deterministic, physics-based)
- Standardized color mapping and camera controls
- No AI-based image generation or enhancement

## Contact

**Authors**: Jessica Ezemba (jezemba@andrew.cmu.edu), Jason Pohl, Conrad Tucker, Christopher McComb
**Institution**: Department of Mechanical Engineering, Carnegie Mellon University

## Acknowledgments

- Ansys for providing simulation software and tutorial files
- Carnegie Mellon University for computational resources
- Reviewers and domain experts who validated the automated extraction protocols

---

**Version**: 1.0
**Last Updated**: December 2025
**Status**: Complete and stable