---
pretty_name: SciVisAgentBench
task_categories:
  - text-to-3d
  - other
---

SciVisAgentBench Tasks

This repository is a secondary repository of SciVisAgentBench; it contains scientific data analysis and visualization datasets and tasks for benchmarking scientific visualization agents.

Data Organization

All the volume datasets from http://klacansky.com/open-scivis-datasets/ have been organized into a consistent structure.

Directory Structure

The datasets and tasks for ParaView-based visualizations are organized into the main, sci_volume_data, and chatvis_bench folders. The bioimage_data folder holds tasks for bioimage visualization, and the molecular_vis folder holds tasks for molecular visualization. The chatvis_bench folder contains 20 test cases from the official ChatVis benchmark.

Each dataset in the main, the sci_volume_data, and the chatvis_bench folders follows this structure:

dataset_name/
├── data/
│   ├── dataset_file.raw         # The actual data file
│   └── dataset_name.txt         # Metadata about the dataset
├── GS/                          # Ground truth folder (ParaView state + pvpython code)
├── task_description.txt         # ParaView visualization task
└── visualization_goals.txt      # Evaluation criteria for the visualization
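Given this layout, loading a volume is straightforward: open-scivis `.raw` files are headerless binary arrays, and the dimensions and scalar type come from the accompanying metadata file. The following is a minimal, self-contained sketch; the 64³ `uint8` volume and its file name are made up for illustration, and in practice these values would be read from `dataset_name.txt`.

```python
import os
import tempfile

import numpy as np

# Assumed metadata (in practice, parsed from data/dataset_name.txt).
dims = (64, 64, 64)   # (z, y, x) order, an assumption for this sketch
dtype = np.uint8

# Write a synthetic volume so the example is self-contained.
tmpdir = tempfile.mkdtemp()
raw_path = os.path.join(tmpdir, "example_64x64x64_uint8.raw")
rng = np.random.default_rng(0)
rng.integers(0, 256, size=dims, dtype=dtype).tofile(raw_path)

# A headerless .raw file is just a flat array: read it back and
# reshape using the dimensions from the metadata.
volume = np.fromfile(raw_path, dtype=dtype).reshape(dims)
print(volume.shape, volume.dtype)
```

The same two pieces of information (shape and scalar type) are what ParaView's raw reader needs when opening these files interactively.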

Available Volume Datasets

  • 37 datasets under 512 MB are recommended for download
  • 18 datasets over 512 MB are listed but not included

See datasets_list.md for a complete list with specifications; datasets_info.json contains the complete metadata for all datasets in JSON format.
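A consumer of datasets_info.json might filter the catalog by the 512 MB download cutoff mentioned above. The real schema of that file is not shown here, so the `size_bytes` field name and the inline sample records are assumptions for illustration only; only the 512 MB threshold comes from this README.

```python
import json

# Hypothetical sample standing in for datasets_info.json; the
# "size_bytes" field name is an assumption, not the real schema.
datasets_info = json.loads("""
{
  "tooth":   {"size_bytes": 10000000},
  "big_sim": {"size_bytes": 900000000}
}
""")

LIMIT = 512 * 1024 * 1024  # 512 MB cutoff from this README

# Names of datasets small enough to be recommended for download.
small = [name for name, info in datasets_info.items()
         if info["size_bytes"] < LIMIT]
print(small)
```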

Task Descriptions

Each dataset has:

  1. Task description - based on the dataset type (medical, simulation, molecular, etc.)
  2. Visualization goals - evaluation criteria tailored to the dataset's characteristics
  3. Ground truth - pvpython code, ParaView state files, and screenshots

Acknowledgement

SciVisAgentBench was mainly created by Kuangshi Ai (kai@nd.edu), Shusen Liu (liu42@llnl.gov), and Haichao Miao (miao1@llnl.gov). Some of the test cases were provided by Kaiyuan Tang (ktang2@nd.edu) and Jianxin Sun (sunjianxin66@gmail.com). We sincerely thank the open-source community for their invaluable contributions; this project builds on many outstanding open-source projects.

License

© 2025 University of Notre Dame.
Released under the License.