---
pretty_name: SciVisAgentBench
task_categories:
- text-to-3d
- other
---

# SciVisAgentBench Tasks

This repository is a secondary repository of [SciVisAgentBench](https://github.com/KuangshiAi/SciVisAgentBench); it contains the scientific data analysis and visualization datasets and tasks used for benchmarking scientific visualization agents.

To learn more or to contribute to SciVisAgentBench, please visit our [project page](https://scivisagentbench.github.io/).

## Data Organization

All the volume datasets from http://klacansky.com/open-scivis-datasets/ have been organized into a consistent structure.

### Directory Structure

The datasets and tasks for ParaView-based visualizations are organized into the `main`, `sci_volume_data`, and `chatvis_bench` folders. The `bioimage_data` folder holds tasks for bioimage visualization, and the `molecular_vis` folder holds tasks for molecular visualization. The `chatvis_bench` folder contains 20 test cases from the official [ChatVis](https://github.com/tpeterka/ChatVis) benchmark.

Each dataset in the `main`, `sci_volume_data`, and `chatvis_bench` folders follows this structure:
```
dataset_name/
├── data/
│   ├── dataset_file.raw     # The actual data file
│   └── dataset_name.txt     # Metadata about the dataset
├── GS/                      # Ground truth folder (ParaView state + pvpython code)
├── task_description.txt     # ParaView visualization task
└── visualization_goals.txt  # Evaluation criteria for the visualization
```
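Given this layout, a raw volume can be read once its dimensions and sample type are known from the accompanying metadata file. Below is a minimal sketch, assuming the metadata supplies `(x, y, z)` dimensions and a NumPy dtype (the exact format of the `.txt` file varies per dataset); the demo writes a tiny synthetic `.raw` file in place of a real `data/dataset_file.raw`:

```python
import os
import tempfile

import numpy as np

def load_raw_volume(path, dims, dtype=np.uint8):
    """Read a flat .raw volume into a (z, y, x) NumPy array.

    `dims` is (x, y, z) as typically listed in the metadata;
    .raw files store x as the fastest-varying axis.
    """
    data = np.fromfile(path, dtype=dtype)
    return data.reshape(dims[::-1])

# Demo with a tiny synthetic volume standing in for data/dataset_file.raw
dims = (4, 3, 2)  # (x, y, z)
volume = np.arange(np.prod(dims), dtype=np.uint8)
with tempfile.TemporaryDirectory() as tmp:
    raw_path = os.path.join(tmp, "dataset_file.raw")
    volume.tofile(raw_path)
    loaded = load_raw_volume(raw_path, dims)

print(loaded.shape)  # (2, 3, 4)
```

The same array can then be handed to pvpython or any other tool that expects an in-memory volume.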

### Available Volume Datasets

- **37 datasets under 512MB** are recommended for download
- **18 datasets over 512MB** are listed but not downloaded

See `datasets_list.md` for a complete list with specifications; `datasets_info.json` holds the full dataset metadata in JSON form.
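For programmatic filtering, the metadata file can be loaded with the standard `json` module. A hedged sketch follows, using a hypothetical schema with `name` and `size_mb` fields — the actual field names in `datasets_info.json` may differ; in practice you would call `json.load()` on the real file instead of the inline sample:

```python
import json

# Hypothetical records mirroring datasets_info.json; the field names
# ("name", "size_mb") are assumptions for this sketch.
records = json.loads("""
[
  {"name": "bonsai",    "size_mb": 16},
  {"name": "large_sim", "size_mb": 900}
]
""")

# Select datasets small enough for the recommended download set
under_512 = [d["name"] for d in records if d["size_mb"] < 512]
print(under_512)  # ['bonsai']
```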

### Task Descriptions

Each dataset has:
1. **Task descriptions** - Based on dataset type (medical, simulation, molecular, etc.)
2. **Visualization goals** - Evaluation criteria tailored to the dataset characteristics
3. **Ground truth** - Ground truth pvpython code, ParaView state files, and screenshots

## Acknowledgement

SciVisAgentBench was mainly created by Kuangshi Ai (kai@nd.edu), Shusen Liu (liu42@llnl.gov), and Haichao Miao (miao1@llnl.gov). Some of the test cases were provided by Kaiyuan Tang (ktang2@nd.edu) and Jianxin Sun (sunjianxin66@gmail.com). We sincerely thank the open-source community for their invaluable contributions. This project is made possible thanks to the following outstanding projects:

- [ParaView-MCP](https://github.com/LLNL/paraview_mcp)
- [Bioimage-agent](https://github.com/LLNL/bioimage-agent)
- [ChatVis](https://github.com/tpeterka/ChatVis)
- [GMX-VMD-MCP](https://github.com/egtai/gmx-vmd-mcp)

## License

© 2025 University of Notre Dame.
Released under the terms of the [LICENSE](./LICENSE) file.