diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000000000000000000000000000000000000..0278e9a86d736a6de1274faaa8b627260b7fbdc9
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,11 @@
+*.raw filter=lfs diff=lfs merge=lfs -text
+*.tiff filter=lfs diff=lfs merge=lfs -text
+*.tif filter=lfs diff=lfs merge=lfs -text
+*.pvsm filter=lfs diff=lfs merge=lfs -text
+*.vtk filter=lfs diff=lfs merge=lfs -text
+*.ex2 filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.avi filter=lfs diff=lfs merge=lfs -text
+*.glb filter=lfs diff=lfs merge=lfs -text
+*.vtr filter=lfs diff=lfs merge=lfs -text
+*.nc filter=lfs diff=lfs merge=lfs -text
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..4411f984e1d7bdf70c24b91deb8bc6b3520320da
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,8 @@
+Copyright (c) 2025 University of Notre Dame
+All rights reserved.
+
+Permission is hereby granted, free of charge, to designated collaborators of the SciVisAgentBench project, to use this software and associated documentation files (the "Software") solely for the purposes of research and collaboration agreed upon with the University of Notre Dame.
+
+Any redistribution, modification, sublicensing, or commercial use of this Software is strictly prohibited without prior written permission from the copyright holders.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..365b5ab469a5577c842dde8c0c2cd2aab2b99878
--- /dev/null
+++ b/README.md
@@ -0,0 +1,62 @@
+# SciVisAgentBench Tasks
+
+This repository is a companion to [SciVisAgentBench](https://github.com/KuangshiAi/SciVisAgentBench); it contains the scientific visualization datasets and tasks used to benchmark scientific visualization agents.
+
+## Download Volume Datasets
+`download_and_organize.py` downloads and organizes the datasets under 512 MB. Run it before evaluating your agents through [SciVisAgentBench](https://github.com/KuangshiAi/SciVisAgentBench):
+```shell
+python download_and_organize.py
+```
+
+## Data Organization
+
+All volume datasets from http://klacansky.com/open-scivis-datasets/ have been organized into a consistent structure.
+
+### Directory Structure
+
+The datasets and tasks for ParaView-MCP and ChatVis live in the `main` and `sci_volume_data` folders, while `napari_mcp_evals` holds the tasks and datasets for napari-MCP.
+
+Each dataset in the `main` and `sci_volume_data` folders follows this structure:
+```
+dataset_name/
+├── data/
+│   ├── dataset_file.raw     # The actual data file
+│   └── dataset_name.txt     # Metadata about the dataset
+├── GS/                      # Ground truth folder (ParaView state + pvpython code)
+├── task_description.txt     # ParaView visualization task
+└── visualization_goals.txt  # Evaluation criteria for the visualization
+```
+
+### Available Volume Datasets
+
+- **37 datasets under 512 MB** are recommended for download
+- **18 datasets over 512 MB** are listed but not downloaded
+
+See `datasets_list.md` for a complete list with specifications.
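The 512 MB cutoff above can be applied directly to `datasets_info.json`. A minimal sketch of the selection step (the helper name `select_downloads` and the strict `<` comparison are illustrative assumptions, not the actual logic of `download_and_organize.py`; the two abridged entries are copied from `datasets_info.json`):

```python
import json

SIZE_LIMIT_MB = 512  # datasets at or above this size are listed but not downloaded

def select_downloads(entries):
    """Return the dataset entries small enough to download."""
    return [d for d in entries if d["size_mb"] < SIZE_LIMIT_MB]

# In the repository this would be read from disk:
#   entries = json.load(open("datasets_info.json"))
# Two abridged entries stand in for the full file here:
entries = json.loads("""[
  {"id": "fuel", "size_mb": 0, "filename": "fuel_64x64x64_uint8.raw"},
  {"id": "miranda", "size_mb": 4096.0, "filename": "miranda_1024x1024x1024_float32.raw"}
]""")

print([d["id"] for d in select_downloads(entries)])  # prints ['fuel']
```

Each selected entry's `download_url` field then points at the corresponding `.raw` file on klacansky.com.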
`datasets_info.json` contains the complete metadata for every dataset.
+
+### Task Descriptions
+
+Each dataset has:
+1. **Task description** - a ParaView visualization task based on the dataset type (medical, simulation, molecular, etc.)
+2. **Visualization goals** - evaluation criteria tailored to the dataset's characteristics
+3. **Ground truth** - reference pvpython code, a ParaView state file, and screenshots
+
+## Acknowledgement
+
+SciVisAgentBench was mainly created by Kuangshi Ai (kai@nd.edu), Shusen Liu (liu42@llnl.gov), and Haichao Miao (miao1@llnl.gov). Some of the test cases were provided by Kaiyuan Tang (ktang2@nd.edu). We sincerely thank the open-source community for their invaluable contributions. This project is made possible by the following outstanding projects:
+
+- [ParaView-MCP](https://github.com/LLNL/paraview_mcp)
+- [Napari-MCP](https://github.com/LLNL/napari-mcp)
+
+## License
+
+Copyright (c) 2025 University of Notre Dame
+Released under the [License](./LICENSE).
+All rights reserved.
+
+Permission is hereby granted, free of charge, to designated collaborators of the SciVisAgentBench project, to use this software and associated documentation files (the "Software") solely for the purposes of research and collaboration agreed upon with the University of Notre Dame.
+
+Any redistribution, modification, sublicensing, or commercial use of this Software is strictly prohibited without prior written permission from the copyright holders.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
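Each dataset's `data/` folder holds a headerless `.raw` volume, so the width, height, depth, and `data_type` recorded in the metadata are needed to interpret it. A minimal NumPy sketch using the `fuel` dataset's recorded metadata (64x64x64, uint8); the synthetic file written here stands in for a downloaded volume, and the z-slowest axis order (x varies fastest on disk) is the usual convention for these files rather than something this repository states:

```python
import numpy as np

# Metadata as recorded for the "fuel" dataset: 64x64x64, uint8.
width, height, depth, dtype = 64, 64, 64, np.uint8

# Write a synthetic stand-in for a downloaded fuel_64x64x64_uint8.raw.
rng = np.random.default_rng(0)
voxels = rng.integers(0, 256, size=(depth, height, width), dtype=dtype)
voxels.tofile("fuel_64x64x64_uint8.raw")

# Raw files are headerless: shape and dtype come entirely from the metadata.
# Assuming x varies fastest on disk, a C-ordered array is indexed volume[z, y, x].
volume = np.fromfile("fuel_64x64x64_uint8.raw", dtype=dtype)
assert volume.size == width * height * depth
volume = volume.reshape(depth, height, width)
print(volume.shape, volume.dtype)  # (64, 64, 64) uint8
```

For the uint16/int16/float32 datasets, substitute the `data_type` from the metadata; byte order may also need checking on big-endian machines.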
\ No newline at end of file diff --git a/datasets_info.json b/datasets_info.json new file mode 100644 index 0000000000000000000000000000000000000000..c68803da70d3a90356d5d7dccc2dac7b4d49bec5 --- /dev/null +++ b/datasets_info.json @@ -0,0 +1,827 @@ +[ + { + "id": "fuel", + "name": "Fuel", + "description": "Simulation of fuel injection into a combustion chamber. The higher the density value, the less presence of air.", + "dimensions": "64x64x64", + "width": 64, + "height": 64, + "depth": 64, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 0, + "size_str": "64x64x64 (256.0 kB)", + "download_url": "http://klacansky.com/open-scivis-datasets/fuel/fuel_64x64x64_uint8.raw", + "filename": "fuel_64x64x64_uint8.raw" + }, + { + "id": "marschner_lobb", + "name": "Marschner-Lobb", + "description": "High frequencies where 99% of the sinusoids are right below the Nyquist frequency.", + "dimensions": "41x41x41", + "width": 41, + "height": 41, + "depth": 41, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 0, + "size_str": "41x41x41 (67.3 kB)", + "download_url": "http://klacansky.com/open-scivis-datasets/marschner_lobb/marschner_lobb_41x41x41_uint8.raw", + "filename": "marschner_lobb_41x41x41_uint8.raw" + }, + { + "id": "neghip", + "name": "Neghip", + "description": "Simulation of the spatial probability distribution of the electrons in a high potential protein molecule.", + "dimensions": "64x64x64", + "width": 64, + "height": 64, + "depth": 64, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 0, + "size_str": "64x64x64 (256.0 kB)", + "download_url": "http://klacansky.com/open-scivis-datasets/neghip/neghip_64x64x64_uint8.raw", + "filename": "neghip_64x64x64_uint8.raw" + }, + { + "id": "nucleon", + "name": "Nucleon", + "description": "Simulation of the two-body distribution probability of a nucleon in the atomic nucleus 16O if a second nucleon is known to be positioned at r'=(2 fm,0,0).", + "dimensions": "41x41x41", + "width": 41, + "height": 41, + 
"depth": 41, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 0, + "size_str": "41x41x41 (67.3 kB)", + "download_url": "http://klacansky.com/open-scivis-datasets/nucleon/nucleon_41x41x41_uint8.raw", + "filename": "nucleon_41x41x41_uint8.raw" + }, + { + "id": "silicium", + "name": "Silicium", + "description": "Simulation of a silicium grid.", + "dimensions": "98x34x34", + "width": 98, + "height": 34, + "depth": 34, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 0, + "size_str": "98x34x34 (110.6 kB)", + "download_url": "http://klacansky.com/open-scivis-datasets/silicium/silicium_98x34x34_uint8.raw", + "filename": "silicium_98x34x34_uint8.raw" + }, + { + "id": "tooth", + "name": "Tooth", + "description": "", + "dimensions": "103x94x161", + "width": 103, + "height": 94, + "depth": 161, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 1.5, + "size_str": "103x94x161 (1.5 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/tooth/tooth_103x94x161_uint8.raw", + "filename": "tooth_103x94x161_uint8.raw" + }, + { + "id": "blunt_fin", + "name": "Blunt Fin", + "description": "", + "dimensions": "256x128x64", + "width": 256, + "height": 128, + "depth": 64, + "data_type": "uint8", + "spacing": "1x0.75x1", + "size_mb": 2.0, + "size_str": "256x128x64 (2.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/blunt_fin/blunt_fin_256x128x64_uint8.raw", + "filename": "blunt_fin_256x128x64_uint8.raw" + }, + { + "id": "hydrogen_atom", + "name": "Hydrogen Atom", + "description": "Simulation of the spatial probability distribution of the electron in an hydrogen atom, residing in a strong magnetic field.", + "dimensions": "128x128x128", + "width": 128, + "height": 128, + "depth": 128, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 2.0, + "size_str": "128x128x128 (2.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/hydrogen_atom/hydrogen_atom_128x128x128_uint8.raw", + "filename": 
"hydrogen_atom_128x128x128_uint8.raw" + }, + { + "id": "shockwave", + "name": "Shockwave", + "description": "Simulation of an unsteady interaction of a planar shockwave with a randomly-perturbed contact discontinuity.", + "dimensions": "64x64x512", + "width": 64, + "height": 64, + "depth": 512, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 2.0, + "size_str": "64x64x512 (2.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/shockwave/shockwave_64x64x512_uint8.raw", + "filename": "shockwave_64x64x512_uint8.raw" + }, + { + "id": "frog", + "name": "Frog", + "description": "MRI scan of a frog as part of the Whole Frog Project.", + "dimensions": "256x256x44", + "width": 256, + "height": 256, + "depth": 44, + "data_type": "uint8", + "spacing": "0.5x0.5x1", + "size_mb": 2.8, + "size_str": "256x256x44 (2.8 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/frog/frog_256x256x44_uint8.raw", + "filename": "frog_256x256x44_uint8.raw" + }, + { + "id": "lobster", + "name": "Lobster", + "description": "CT scan of a lobster contained in a block of resin.", + "dimensions": "301x324x56", + "width": 301, + "height": 324, + "depth": 56, + "data_type": "uint8", + "spacing": "1x1x1.4", + "size_mb": 5.2, + "size_str": "301x324x56 (5.2 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/lobster/lobster_301x324x56_uint8.raw", + "filename": "lobster_301x324x56_uint8.raw" + }, + { + "id": "mri_ventricles", + "name": "Head MRI CISS", + "description": "1.5T MRT 3D CISS dataset of a human head that highlights the CSF (Cerebro-Spinal-Fluid) filled cavities of the head.", + "dimensions": "256x256x124", + "width": 256, + "height": 256, + "depth": 124, + "data_type": "uint8", + "spacing": "0.9x0.9x0.9", + "size_mb": 7.8, + "size_str": "256x256x124 (7.8 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/mri_ventricles/mri_ventricles_256x256x124_uint8.raw", + "filename": "mri_ventricles_256x256x124_uint8.raw" + }, + { + "id": 
"engine", + "name": "Engine", + "description": "CT scan of two cylinders of an engine block.", + "dimensions": "256x256x128", + "width": 256, + "height": 256, + "depth": 128, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 8.0, + "size_str": "256x256x128 (8.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/engine/engine_256x256x128_uint8.raw", + "filename": "engine_256x256x128_uint8.raw" + }, + { + "id": "vis_male", + "name": "Head (Visible Male)", + "description": "Male head scan", + "dimensions": "128x256x256", + "width": 128, + "height": 256, + "depth": 256, + "data_type": "uint8", + "spacing": "1.57774x0.995861x1.00797", + "size_mb": 8.0, + "size_str": "128x256x256 (8.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/vis_male/vis_male_128x256x256_uint8.raw", + "filename": "vis_male_128x256x256_uint8.raw" + }, + { + "id": "statue_leg", + "name": "Leg of Statue", + "description": "CT scan of a leg of a bronze statue.", + "dimensions": "341x341x93", + "width": 341, + "height": 341, + "depth": 93, + "data_type": "uint8", + "spacing": "1x1x4", + "size_mb": 10.3, + "size_str": "341x341x93 (10.3 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/statue_leg/statue_leg_341x341x93_uint8.raw", + "filename": "statue_leg_341x341x93_uint8.raw" + }, + { + "id": "boston_teapot", + "name": "Boston Teapot", + "description": "CT scan of the SIGGRAPH 1989 teapot with a small version of the AVS lobster inside.", + "dimensions": "256x256x178", + "width": 256, + "height": 256, + "depth": 178, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 11.1, + "size_str": "256x256x178 (11.1 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/boston_teapot/boston_teapot_256x256x178_uint8.raw", + "filename": "boston_teapot_256x256x178_uint8.raw" + }, + { + "id": "mri_woman", + "name": "MRI Woman", + "description": "MRI scan of a woman's head", + "dimensions": "256x256x109", + "width": 256, + "height": 256, + 
"depth": 109, + "data_type": "uint16", + "spacing": "1x1x1.5", + "size_mb": 13.6, + "size_str": "256x256x109 (13.6 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/mri_woman/mri_woman_256x256x109_uint16.raw", + "filename": "mri_woman_256x256x109_uint16.raw" + }, + { + "id": "aneurism", + "name": "Aneurism", + "description": "Rotational C-arm x-ray scan of the arteries of the right half of a human head. A contrast agent was injected into the blood and an aneurism is present.", + "dimensions": "256x256x256", + "width": 256, + "height": 256, + "depth": 256, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 16.0, + "size_str": "256x256x256 (16.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/aneurism/aneurism_256x256x256_uint8.raw", + "filename": "aneurism_256x256x256_uint8.raw" + }, + { + "id": "bonsai", + "name": "Bonsai", + "description": "CT scan of a bonsai tree.", + "dimensions": "256x256x256", + "width": 256, + "height": 256, + "depth": 256, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 16.0, + "size_str": "256x256x256 (16.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/bonsai/bonsai_256x256x256_uint8.raw", + "filename": "bonsai_256x256x256_uint8.raw" + }, + { + "id": "foot", + "name": "Foot", + "description": "Rotational C-arm x-ray scan of a human foot. 
Tissue and bone are present in the dataset.", + "dimensions": "256x256x256", + "width": 256, + "height": 256, + "depth": 256, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 16.0, + "size_str": "256x256x256 (16.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/foot/foot_256x256x256_uint8.raw", + "filename": "foot_256x256x256_uint8.raw" + }, + { + "id": "skull", + "name": "Skull", + "description": "Rotational C-arm x-ray scan of phantom of a human skull.", + "dimensions": "256x256x256", + "width": 256, + "height": 256, + "depth": 256, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 16.0, + "size_str": "256x256x256 (16.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/skull/skull_256x256x256_uint8.raw", + "filename": "skull_256x256x256_uint8.raw" + }, + { + "id": "csafe_heptane", + "name": "CSAFE Heptane Gas", + "description": "A single time step from a computational simulation of a jet of heptane gas undergoing combustion.", + "dimensions": "302x302x302", + "width": 302, + "height": 302, + "depth": 302, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 26.3, + "size_str": "302x302x302 (26.3 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/csafe_heptane/csafe_heptane_302x302x302_uint8.raw", + "filename": "csafe_heptane_302x302x302_uint8.raw" + }, + { + "id": "mrt_angio", + "name": "Head MRT Angiography", + "description": "3T MRT Time-of-Flight Angiography dataset of a human head. 
The dataset has been resampled into an isotropic voxel grid (hence the peculiar slice size).", + "dimensions": "416x512x112", + "width": 416, + "height": 512, + "depth": 112, + "data_type": "uint16", + "spacing": "0.412x0.412x0.412", + "size_mb": 45.5, + "size_str": "416x512x112 (45.5 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/mrt_angio/mrt_angio_416x512x112_uint16.raw", + "filename": "mrt_angio_416x512x112_uint16.raw" + }, + { + "id": "carp", + "name": "Carp", + "description": "CT scan of a carp fish", + "dimensions": "256x256x512", + "width": 256, + "height": 256, + "depth": 512, + "data_type": "uint16", + "spacing": "0.78125x0.390625x1", + "size_mb": 64.0, + "size_str": "256x256x512 (64.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/carp/carp_256x256x512_uint16.raw", + "filename": "carp_256x256x512_uint16.raw" + }, + { + "id": "tacc_turbulence", + "name": "Isotropic Turbulence", + "description": "The dataset represents a time step from an isotropic turbulence simulation. A single variable, enstrophy, is represented on a Cartesian grid.", + "dimensions": "256x256x256", + "width": 256, + "height": 256, + "depth": 256, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 64.0, + "size_str": "256x256x256 (64.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/tacc_turbulence/tacc_turbulence_256x256x256_float32.raw", + "filename": "tacc_turbulence_256x256x256_float32.raw" + }, + { + "id": "stent", + "name": "Stented Abdominal Aorta", + "description": "CT Scan of the abdomen and pelvis. The dataset also contains a stent in the abdominal aorta. 
No contrast agent was used to enhance the blood vessels.", + "dimensions": "512x512x174", + "width": 512, + "height": 512, + "depth": 174, + "data_type": "uint16", + "spacing": "0.8398x0.8398x3.2", + "size_mb": 87.0, + "size_str": "512x512x174 (87.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/stent/stent_512x512x174_uint16.raw", + "filename": "stent_512x512x174_uint16.raw" + }, + { + "id": "neocortical_layer_1_axons", + "name": "Neocortical Layer 1 Axons", + "description": "Axons in layer 1 of the mouse barrel cortex imaged in vivo.", + "dimensions": "1464x1033x76", + "width": 1464, + "height": 1033, + "depth": 76, + "data_type": "uint8", + "spacing": "1x1x3.4", + "size_mb": 109.6, + "size_str": "1464x1033x76 (109.6 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/neocortical_layer_1_axons/neocortical_layer_1_axons_1464x1033x76_uint8.raw", + "filename": "neocortical_layer_1_axons_1464x1033x76_uint8.raw" + }, + { + "id": "pancreas", + "name": "Pancreas", + "description": "First scan. The National Institutes of Health Clinical Center performed 82 abdominal contrast enhanced 3D CT scans (~70 seconds after intravenous contrast injection in portal-venous) from 53 male and 27 female subjects. Seventeen of the subjects are healthy kidney donors scanned prior to nephrectomy. The remaining 65 patients were selected by a radiologist from patients who neither had major abdominal pathologies nor pancreatic cancer lesions. Subjects' ages range from 18 to 76 years with a mean age of 46.8 \u00b1 16.7. The CT scans have resolutions of 512x512 pixels with varying pixel sizes and slice thickness between 1.5 - 2.5 mm, acquired on Philips and Siemens MDCT scanners (120 kVp tube voltage). 
A medical student manually performed slice-by-slice segmentations of the pancreas as ground-truth and these were verified/modified by an experienced radiologist.", + "dimensions": "240x512x512", + "width": 240, + "height": 512, + "depth": 512, + "data_type": "int16", + "spacing": "1.16x1.0x1.0", + "size_mb": 120.0, + "size_str": "240x512x512 (120.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/pancreas/pancreas_240x512x512_int16.raw", + "filename": "pancreas_240x512x512_int16.raw" + }, + { + "id": "duct", + "name": "Duct Flow", + "description": "A wall-bounded flow in a duct.", + "dimensions": "193x194x1000", + "width": 193, + "height": 194, + "depth": 1000, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 142.8, + "size_str": "193x194x1000 (142.8 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/duct/duct_193x194x1000_float32.raw", + "filename": "duct_193x194x1000_float32.raw" + }, + { + "id": "bunny", + "name": "Bunny", + "description": "A CT scan of the Stanford Bunny. The greyscale units are Hounsfield units, denoting electron-density of the subject; the scale units are in millimeters. 
The scan was completed 28 January 2000.", + "dimensions": "512x512x361", + "width": 512, + "height": 512, + "depth": 361, + "data_type": "uint16", + "spacing": "0.337891x0.337891x0.5", + "size_mb": 180.5, + "size_str": "512x512x361 (180.5 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/bunny/bunny_512x512x361_uint16.raw", + "filename": "bunny_512x512x361_uint16.raw" + }, + { + "id": "backpack", + "name": "Backpack Scan", + "description": "CT scan of a backpack filled with items.", + "dimensions": "512x512x373", + "width": 512, + "height": 512, + "depth": 373, + "data_type": "uint16", + "spacing": "0.9766x0.9766x1.25", + "size_mb": 186.5, + "size_str": "512x512x373 (186.5 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/backpack/backpack_512x512x373_uint16.raw", + "filename": "backpack_512x512x373_uint16.raw" + }, + { + "id": "present", + "name": "Christmas Present", + "description": "An industrial CT scan of a Christmas present.", + "dimensions": "492x492x442", + "width": 492, + "height": 492, + "depth": 442, + "data_type": "uint16", + "spacing": "1x1x1", + "size_mb": 204.1, + "size_str": "492x492x442 (204.1 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/present/present_492x492x442_uint16.raw", + "filename": "present_492x492x442_uint16.raw" + }, + { + "id": "prone", + "name": "Colon Prone", + "description": "CT scan of abdomen in prone orientation (back faces ceiling, belly faces table).", + "dimensions": "512x512x463", + "width": 512, + "height": 512, + "depth": 463, + "data_type": "uint16", + "spacing": "0.625x0.625x1.0", + "size_mb": 231.5, + "size_str": "512x512x463 (231.5 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/prone/prone_512x512x463_uint16.raw", + "filename": "prone_512x512x463_uint16.raw" + }, + { + "id": "christmas_tree", + "name": "Christmas Tree", + "description": "The Christmas tree model was scanned with a Siemens Somatom Plus 4 Volume Zoom Multislice-CT scanner at the 
general hospital in Vienna.", + "dimensions": "512x499x512", + "width": 512, + "height": 499, + "depth": 512, + "data_type": "uint16", + "spacing": "1x1x1", + "size_mb": 249.5, + "size_str": "512x499x512 (249.5 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/christmas_tree/christmas_tree_512x499x512_uint16.raw", + "filename": "christmas_tree_512x499x512_uint16.raw" + }, + { + "id": "vertebra", + "name": "Head Aneurism", + "description": "Rotational angiography scan of a head with an aneurysm. Only contrasted blood vessels are visible.", + "dimensions": "512x512x512", + "width": 512, + "height": 512, + "depth": 512, + "data_type": "uint16", + "spacing": "0.1953x0.1953x0.1953", + "size_mb": 256.0, + "size_str": "512x512x512 (256.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/vertebra/vertebra_512x512x512_uint16.raw", + "filename": "vertebra_512x512x512_uint16.raw" + }, + { + "id": "zeiss", + "name": "Zeiss", + "description": "Car part reconstructed from projections.", + "dimensions": "680x680x680", + "width": 680, + "height": 680, + "depth": 680, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 299.9, + "size_str": "680x680x680 (299.9 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/zeiss/zeiss_680x680x680_uint8.raw", + "filename": "zeiss_680x680x680_uint8.raw" + }, + { + "id": "marmoset_neurons", + "name": "Neurons in Marmoset Visual Cortex", + "description": "Pyramidal neurons in the marmoset primary visual cortex (V1) labeled with green fluorescent protein (GFP) after injection of a pseudotyped G-deleted rabies virus in area V2. 
The tissue was cleared using the Sca/e technique and imaged on an Olympus 2-photon microscope at 20x magnification.", + "dimensions": "1024x1024x314", + "width": 1024, + "height": 1024, + "depth": 314, + "data_type": "uint8", + "spacing": "0.497x0.497x1.5", + "size_mb": 314.0, + "size_str": "1024x1024x314 (314.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/marmoset_neurons/marmoset_neurons_1024x1024x314_uint8.raw", + "filename": "marmoset_neurons_1024x1024x314_uint8.raw" + }, + { + "id": "magnetic_reconnection", + "name": "Magnetic Reconnection Simulation", + "description": "A single time step from a computational simulation of magnetic reconnection.", + "dimensions": "512x512x512", + "width": 512, + "height": 512, + "depth": 512, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 512.0, + "size_str": "512x512x512 (512.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/magnetic_reconnection/magnetic_reconnection_512x512x512_float32.raw", + "filename": "magnetic_reconnection_512x512x512_float32.raw" + }, + { + "id": "stag_beetle", + "name": "Stag Beetle", + "description": "The stag beetle from Georg Glaeser, Vienna University of Applied Arts, Austria, was scanned with an industrial CT by Johannes Kastner, Wels College of Engineering, Austria, and Meister Eduard Gr\u00f6ller, Vienna University of Technology, Austria.", + "dimensions": "832x832x494", + "width": 832, + "height": 832, + "depth": 494, + "data_type": "uint16", + "spacing": "1x1x1", + "size_mb": 652.2, + "size_str": "832x832x494 (652.2 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/stag_beetle/stag_beetle_832x832x494_uint16.raw", + "filename": "stag_beetle_832x832x494_uint16.raw" + }, + { + "id": "hcci_oh", + "name": "Homogeneous Charge Compression Ignition OH", + "description": "The first timestep of a direct numerical simulation of an autoignition phenomenon in stratified dimethyl-ether/air turbulent mixtures.", + "dimensions": 
"560x560x560", + "width": 560, + "height": 560, + "depth": 560, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 669.9, + "size_str": "560x560x560 (669.9 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/hcci_oh/hcci_oh_560x560x560_float32.raw", + "filename": "hcci_oh_560x560x560_float32.raw" + }, + { + "id": "kingsnake", + "name": "Kingsnake", + "description": "Scan of a Lampropeltis getula egg (captive bred by Travis LaDuc; laid on 7 July 2003, growth terminated on 29 August 2003, 54 days after oviposition) for Dr. Timothy Rowe of the Department of Geological Sciences, The University of Texas at Austin.", + "dimensions": "1024x1024x795", + "width": 1024, + "height": 1024, + "depth": 795, + "data_type": "uint8", + "spacing": "0.03174x0.03174x0.0688", + "size_mb": 795.0, + "size_str": "1024x1024x795 (795.0 MB)", + "download_url": "http://klacansky.com/open-scivis-datasets/kingsnake/kingsnake_1024x1024x795_uint8.raw", + "filename": "kingsnake_1024x1024x795_uint8.raw" + }, + { + "id": "pawpawsaurus", + "name": "Pawpawsaurus Campbelli", + "description": "This specimen, the holotype, was collected from the Paw Paw Formation, SMU Loc. No. 263, Tarrant County, Texas. The specimen was scanned along the coronal axis for a total of 1088 slices. Voxel size is 0.2275 mm.", + "dimensions": "958x646x1088", + "width": 958, + "height": 646, + "depth": 1088, + "data_type": "uint16", + "spacing": "0.2275x0.2275x0.2275", + "size_mb": 1331.2, + "size_str": "958x646x1088 (1.3 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/pawpawsaurus/pawpawsaurus_958x646x1088_uint16.raw", + "filename": "pawpawsaurus_958x646x1088_uint16.raw" + }, + { + "id": "spathorhynchus", + "name": "Spathorhynchus Fossorium", + "description": "This specimen, the holotype, was collected from the Middle Eocene Green River Formation of Sweetwater County, Wyoming on 27 July 1967 by Frank L. Pearce. 
The specimen was scanned along the coronal axis for a total of 750 slices. Each 1024x1024 pixel slice is 0.047 mm thick, with an interslice spacing of 0.047 mm and a field of reconstruction of 22 mm.", + "dimensions": "1024x1024x750", + "width": 1024, + "height": 1024, + "depth": 750, + "data_type": "uint16", + "spacing": "0.0215x0.0215x0.047", + "size_mb": 1536.0, + "size_str": "1024x1024x750 (1.5 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/spathorhynchus/spathorhynchus_1024x1024x750_uint16.raw", + "filename": "spathorhynchus_1024x1024x750_uint16.raw" + }, + { + "id": "chameleon", + "name": "Chameleon", + "description": "CT scan of a chameleon.", + "dimensions": "1024x1024x1080", + "width": 1024, + "height": 1024, + "depth": 1080, + "data_type": "uint16", + "spacing": "0.09228515625x0.09228515625x0.105", + "size_mb": 2150.4, + "size_str": "1024x1024x1080 (2.1 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/chameleon/chameleon_1024x1024x1080_uint16.raw", + "filename": "chameleon_1024x1024x1080_uint16.raw" + }, + { + "id": "beechnut", + "name": "Beechnut", + "description": "A microCT scan of a dried beechnut.", + "dimensions": "1024x1024x1546", + "width": 1024, + "height": 1024, + "depth": 1546, + "data_type": "uint16", + "spacing": "2e-05x2e-05x2e-05", + "size_mb": 3072.0, + "size_str": "1024x1024x1546 (3.0 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/beechnut/beechnut_1024x1024x1546_uint16.raw", + "filename": "beechnut_1024x1024x1546_uint16.raw" + }, + { + "id": "miranda", + "name": "Rayleigh-Taylor Instability", + "description": "A time step of a density field in a simulation of the mixing transition in Rayleigh-Taylor instability.", + "dimensions": "1024x1024x1024", + "width": 1024, + "height": 1024, + "depth": 1024, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 4096.0, + "size_str": "1024x1024x1024 (4.0 GB)", + "download_url": 
"http://klacansky.com/open-scivis-datasets/miranda/miranda_1024x1024x1024_float32.raw", + "filename": "miranda_1024x1024x1024_float32.raw" + }, + { + "id": "jicf_q", + "name": "Jet In Crossflow", + "description": "Q-criterion of a jet in crossflow created by a direct numerical simulation.", + "dimensions": "1408x1080x1100", + "width": 1408, + "height": 1080, + "depth": 1100, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 6348.8, + "size_str": "1408x1080x1100 (6.2 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/jicf_q/jicf_q_1408x1080x1100_float32.raw", + "filename": "jicf_q_1408x1080x1100_float32.raw" + }, + { + "id": "synthetic_truss_with_five_defects", + "name": "Synthetic Truss Scan", + "description": "A simulated CT scan of a 8x8x8 octet truss with five defects on the front side of the object. The defects are bent strut, broken strut, missing strut, dross, and thin strut.", + "dimensions": "1200x1200x1200", + "width": 1200, + "height": 1200, + "depth": 1200, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 6553.6, + "size_str": "1200x1200x1200 (6.4 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/synthetic_truss_with_five_defects/synthetic_truss_with_five_defects_1200x1200x1200_float32.raw", + "filename": "synthetic_truss_with_five_defects_1200x1200x1200_float32.raw" + }, + { + "id": "richtmyer_meshkov", + "name": "Richtmyer-Meshkov Instability", + "description": "Entropy field (timestep 160) of Richtmyer-Meshkov instability simulation.", + "dimensions": "2048x2048x1920", + "width": 2048, + "height": 2048, + "depth": 1920, + "data_type": "uint8", + "spacing": "1x1x1", + "size_mb": 7680.0, + "size_str": "2048x2048x1920 (7.5 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/richtmyer_meshkov/richtmyer_meshkov_2048x2048x1920_uint8.raw", + "filename": "richtmyer_meshkov_2048x2048x1920_uint8.raw" + }, + { + "id": "3d_neurons_15_sept_2016", + "name": "3DNeurons15Sept2016", + 
"description": "The neurons are macaque visual cortical neurons labeled with TdTomato fluorescent proteins.", + "dimensions": "2048x2048x1718", + "width": 2048, + "height": 2048, + "depth": 1718, + "data_type": "uint16", + "spacing": "0.267345x0.267345x0.5", + "size_mb": 13721.6, + "size_str": "2048x2048x1718 (13.4 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/3d_neurons_15_sept_2016/3d_neurons_15_sept_2016_2048x2048x1718_uint16.raw", + "filename": "3d_neurons_15_sept_2016_2048x2048x1718_uint16.raw" + }, + { + "id": "woodbranch", + "name": "Wood Branch", + "description": "A microCT scan of dried wood branch (hazelnut).", + "dimensions": "2048x2048x2048", + "width": 2048, + "height": 2048, + "depth": 2048, + "data_type": "uint16", + "spacing": "1.8e-05x1.8e-05x1.8e-05", + "size_mb": 16384.0, + "size_str": "2048x2048x2048 (16.0 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/woodbranch/woodbranch_2048x2048x2048_uint16.raw", + "filename": "woodbranch_2048x2048x2048_uint16.raw" + }, + { + "id": "pig_heart", + "name": "Cardiac Volume (Porcine)", + "description": "Volumes were obtained by way of computed tomography (CT) imaging on excised, postmortem porcine hearts. 
Alginate curing agents were injected into ventricles to provide rigidity and radiopaque agents were injected into the coronary arteries to distinguish microvasculature from the rest of the tissue.", + "dimensions": "2048x2048x2612", + "width": 2048, + "height": 2048, + "depth": 2612, + "data_type": "int16", + "spacing": "0.03557x0.03557x0.03557", + "size_mb": 20889.6, + "size_str": "2048x2048x2612 (20.4 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/pig_heart/pig_heart_2048x2048x2612_int16.raw", + "filename": "pig_heart_2048x2048x2612_int16.raw" + }, + { + "id": "isotropic_pressure", + "name": "Forced Isotropic Turbulence", + "description": "Pressure field of a direct numerical simulation of forced isotropic turbulence.", + "dimensions": "4096x4096x4096", + "width": 4096, + "height": 4096, + "depth": 4096, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 262144.0, + "size_str": "4096x4096x4096 (256.0 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/isotropic_pressure/isotropic_pressure_4096x4096x4096_float32.raw", + "filename": "isotropic_pressure_4096x4096x4096_float32.raw" + }, + { + "id": "rotstrat_temperature", + "name": "Rotating Stratified Turbulence", + "description": "Temperature field of a direct numerical simulation of rotating stratified turbulence.", + "dimensions": "4096x4096x4096", + "width": 4096, + "height": 4096, + "depth": 4096, + "data_type": "float32", + "spacing": "1x1x1", + "size_mb": 262144.0, + "size_str": "4096x4096x4096 (256.0 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/rotstrat_temperature/rotstrat_temperature_4096x4096x4096_float32.raw", + "filename": "rotstrat_temperature_4096x4096x4096_float32.raw" + }, + { + "id": "dns", + "name": "Turbulent Channel Flow", + "description": "A pressure field from a direct numerical simulation of fully developed flow at different Reynolds numbers in a plane channel, performed with the POONGBACK code, which uses the spectral 
numerical method of Kim, Moin and Moser (J. Fluid Mech. vol 177, page 133).", + "dimensions": "10240x7680x1536", + "width": 10240, + "height": 7680, + "depth": 1536, + "data_type": "float64", + "spacing": "1x1x1", + "size_mb": 921600.0, + "size_str": "10240x7680x1536 (900.0 GB)", + "download_url": "http://klacansky.com/open-scivis-datasets/dns/dns_10240x7680x1536_float64.raw", + "filename": "dns_10240x7680x1536_float64.raw" + } +] \ No newline at end of file diff --git a/datasets_list.md b/datasets_list.md new file mode 100644 index 0000000000000000000000000000000000000000..f8830f7c1365a75bb48f1c7f87f21511260d7179 --- /dev/null +++ b/datasets_list.md @@ -0,0 +1,66 @@ +# Open SciVis Datasets Information + +## Datasets under 512MB (will be downloaded) + +| Name | Description | Dimensions | Data Type | Size | Spacing | +|------|-------------|------------|-----------|------|----------| +| Fuel | Simulation of fuel injection into a combustion chamber. The higher the density value, the less air is present. | 64x64x64 | uint8 | 64x64x64 (256.0 kB) | 1x1x1 | +| Marschner-Lobb | High frequencies where 99% of the sinusoids are right below the Nyquist frequency. | 41x41x41 | uint8 | 41x41x41 (67.3 kB) | 1x1x1 | +| Neghip | Simulation of the spatial probability distribution of the electrons in a high potential protein molecule. | 64x64x64 | uint8 | 64x64x64 (256.0 kB) | 1x1x1 | +| Nucleon | Simulation of the two-body distribution probability of a nucleon in the atomic nucleus 16O if a second nucleon is known to be positioned at r'=(2 fm,0,0). | 41x41x41 | uint8 | 41x41x41 (67.3 kB) | 1x1x1 | +| Silicium | Simulation of a silicium grid. | 98x34x34 | uint8 | 98x34x34 (110.6 kB) | 1x1x1 | +| Tooth | | 103x94x161 | uint8 | 103x94x161 (1.5 MB) | 1x1x1 | +| Blunt Fin | | 256x128x64 | uint8 | 256x128x64 (2.0 MB) | 1x0.75x1 | +| Hydrogen Atom | Simulation of the spatial probability distribution of the electron in a hydrogen atom, residing in a strong magnetic field. 
| 128x128x128 | uint8 | 128x128x128 (2.0 MB) | 1x1x1 | +| Shockwave | Simulation of an unsteady interaction of a planar shockwave with a randomly-perturbed contact discontinuity. | 64x64x512 | uint8 | 64x64x512 (2.0 MB) | 1x1x1 | +| Frog | MRI scan of a frog as part of the Whole Frog Project. | 256x256x44 | uint8 | 256x256x44 (2.8 MB) | 0.5x0.5x1 | +| Lobster | CT scan of a lobster contained in a block of resin. | 301x324x56 | uint8 | 301x324x56 (5.2 MB) | 1x1x1.4 | +| Head MRI CISS | 1.5T MRT 3D CISS dataset of a human head that highlights the CSF (Cerebro-Spinal-Fluid) filled cavities of the head. | 256x256x124 | uint8 | 256x256x124 (7.8 MB) | 0.9x0.9x0.9 | +| Engine | CT scan of two cylinders of an engine block. | 256x256x128 | uint8 | 256x256x128 (8.0 MB) | 1x1x1 | +| Head (Visible Male) | Male head scan | 128x256x256 | uint8 | 128x256x256 (8.0 MB) | 1.57774x0.995861x1.00797 | +| Leg of Statue | CT scan of a leg of a bronze statue. | 341x341x93 | uint8 | 341x341x93 (10.3 MB) | 1x1x4 | +| Boston Teapot | CT scan of the SIGGRAPH 1989 teapot with a small version of the AVS lobster inside. | 256x256x178 | uint8 | 256x256x178 (11.1 MB) | 1x1x1 | +| MRI Woman | MRI scan of a woman's head | 256x256x109 | uint16 | 256x256x109 (13.6 MB) | 1x1x1.5 | +| Aneurism | Rotational C-arm x-ray scan of the arteries of the right half of a human head. A contrast agent was injected into the blood and an aneurism is present. | 256x256x256 | uint8 | 256x256x256 (16.0 MB) | 1x1x1 | +| Bonsai | CT scan of a bonsai tree. | 256x256x256 | uint8 | 256x256x256 (16.0 MB) | 1x1x1 | +| Foot | Rotational C-arm x-ray scan of a human foot. Tissue and bone are present in the dataset. | 256x256x256 | uint8 | 256x256x256 (16.0 MB) | 1x1x1 | +| Skull | Rotational C-arm x-ray scan of phantom of a human skull. | 256x256x256 | uint8 | 256x256x256 (16.0 MB) | 1x1x1 | +| CSAFE Heptane Gas | A single time step from a computational simulation of a jet of heptane gas undergoing combustion. 
| 302x302x302 | uint8 | 302x302x302 (26.3 MB) | 1x1x1 | +| Head MRT Angiography | 3T MRT Time-of-Flight Angiography dataset of a human head. The dataset has been resampled into an isotropic voxel grid (hence the peculiar slice size). | 416x512x112 | uint16 | 416x512x112 (45.5 MB) | 0.412x0.412x0.412 | +| Carp | CT scan of a carp fish | 256x256x512 | uint16 | 256x256x512 (64.0 MB) | 0.78125x0.390625x1 | +| Isotropic Turbulence | The dataset represents a time step from an isotropic turbulence simulation. A single variable, enstrophy, is represented on a Cartesian grid. | 256x256x256 | float32 | 256x256x256 (64.0 MB) | 1x1x1 | +| Stented Abdominal Aorta | CT Scan of the abdomen and pelvis. The dataset also contains a stent in the abdominal aorta. No contrast agent was used to enhance the blood vessels. | 512x512x174 | uint16 | 512x512x174 (87.0 MB) | 0.8398x0.8398x3.2 | +| Neocortical Layer 1 Axons | Axons in layer 1 of the mouse barrel cortex imaged in vivo. | 1464x1033x76 | uint8 | 1464x1033x76 (109.6 MB) | 1x1x3.4 | +| Pancreas | First scan. The National Institutes of Health Clinical Center performed 82 abdominal contrast enhanced 3D CT scans (~70 seconds after intravenous contrast injection in portal-venous) from 53 male and 27 female subjects. Seventeen of the subjects are healthy kidney donors scanned prior to nephrectomy. The remaining 65 patients were selected by a radiologist from patients who neither had major abdominal pathologies nor pancreatic cancer lesions. Subjects' ages range from 18 to 76 years with a mean age of 46.8 ± 16.7. The CT scans have resolutions of 512x512 pixels with varying pixel sizes and slice thickness between 1.5 - 2.5 mm, acquired on Philips and Siemens MDCT scanners (120 kVp tube voltage). A medical student manually performed slice-by-slice segmentations of the pancreas as ground-truth and these were verified/modified by an experienced radiologist. 
| 240x512x512 | int16 | 240x512x512 (120.0 MB) | 1.16x1.0x1.0 | +| Duct Flow | A wall-bounded flow in a duct. | 193x194x1000 | float32 | 193x194x1000 (142.8 MB) | 1x1x1 | +| Bunny | A CT scan of the Stanford Bunny. The greyscale units are Hounsfield units, denoting electron-density of the subject; the scale units are in millimeters. The scan was completed 28 January 2000. | 512x512x361 | uint16 | 512x512x361 (180.5 MB) | 0.337891x0.337891x0.5 | +| Backpack Scan | CT scan of a backpack filled with items. | 512x512x373 | uint16 | 512x512x373 (186.5 MB) | 0.9766x0.9766x1.25 | +| Christmas Present | An industrial CT scan of a Christmas present. | 492x492x442 | uint16 | 492x492x442 (204.1 MB) | 1x1x1 | +| Colon Prone | CT scan of abdomen in prone orientation (back faces ceiling, belly faces table). | 512x512x463 | uint16 | 512x512x463 (231.5 MB) | 0.625x0.625x1.0 | +| Christmas Tree | The Christmas tree model was scanned with a Siemens Somatom Plus 4 Volume Zoom Multislice-CT scanner at the General Hospital in Vienna. | 512x499x512 | uint16 | 512x499x512 (249.5 MB) | 1x1x1 | +| Head Aneurism | Rotational angiography scan of a head with an aneurysm. Only contrasted blood vessels are visible. | 512x512x512 | uint16 | 512x512x512 (256.0 MB) | 0.1953x0.1953x0.1953 | +| Zeiss | Car part reconstructed from projections. | 680x680x680 | uint8 | 680x680x680 (299.9 MB) | 1x1x1 | +| Neurons in Marmoset Visual Cortex | Pyramidal neurons in the marmoset primary visual cortex (V1) labeled with green fluorescent protein (GFP) after injection of a pseudotyped G-deleted rabies virus in area V2. The tissue was cleared using the Sca/e technique and imaged on an Olympus 2-photon microscope at 20x magnification. 
| 1024x1024x314 | uint8 | 1024x1024x314 (314.0 MB) | 0.497x0.497x1.5 | + +## Datasets over 512MB (not downloaded) + +| Name | Description | Dimensions | Data Type | Size | Spacing | +|------|-------------|------------|-----------|------|----------| +| Magnetic Reconnection Simulation | A single time step from a computational simulation of magnetic reconnection. | 512x512x512 | float32 | 512x512x512 (512.0 MB) | 1x1x1 | +| Stag Beetle | The stag beetle from Georg Glaeser, Vienna University of Applied Arts, Austria, was scanned with an industrial CT by Johannes Kastner, Wels College of Engineering, Austria, and Meister Eduard Gröller, Vienna University of Technology, Austria. | 832x832x494 | uint16 | 832x832x494 (652.2 MB) | 1x1x1 | +| Homogeneous Charge Compression Ignition OH | The first timestep of a direct numerical simulation of an autoignition phenomenon in stratified dimethyl-ether/air turbulent mixtures. | 560x560x560 | float32 | 560x560x560 (669.9 MB) | 1x1x1 | +| Kingsnake | Scan of a Lampropeltis getula egg (captive bred by Travis LaDuc; laid on 7 July 2003, growth terminated on 29 August 2003, 54 days after oviposition) for Dr. Timothy Rowe of the Department of Geological Sciences, The University of Texas at Austin. | 1024x1024x795 | uint8 | 1024x1024x795 (795.0 MB) | 0.03174x0.03174x0.0688 | +| Pawpawsaurus Campbelli | This specimen, the holotype, was collected from the Paw Paw Formation, SMU Loc. No. 263, Tarrant County, Texas. The specimen was scanned along the coronal axis for a total of 1088 slices. Voxel size is 0.2275 mm. | 958x646x1088 | uint16 | 958x646x1088 (1.3 GB) | 0.2275x0.2275x0.2275 | +| Spathorhynchus Fossorium | This specimen, the holotype, was collected from the Middle Eocene Green River Formation of Sweetwater County, Wyoming on 27 July 1967 by Frank L. Pearce. The specimen was scanned along the coronal axis for a total of 750 slices. 
Each 1024x1024 pixel slice is 0.047 mm thick, with an interslice spacing of 0.047 mm and a field of reconstruction of 22 mm. | 1024x1024x750 | uint16 | 1024x1024x750 (1.5 GB) | 0.0215x0.0215x0.047 | +| Chameleon | CT scan of a chameleon. | 1024x1024x1080 | uint16 | 1024x1024x1080 (2.1 GB) | 0.09228515625x0.09228515625x0.105 | +| Beechnut | A microCT scan of a dried beechnut. | 1024x1024x1546 | uint16 | 1024x1024x1546 (3.0 GB) | 2e-05x2e-05x2e-05 | +| Rayleigh-Taylor Instability | A time step of a density field in a simulation of the mixing transition in Rayleigh-Taylor instability. | 1024x1024x1024 | float32 | 1024x1024x1024 (4.0 GB) | 1x1x1 | +| Jet In Crossflow | Q-criterion of a jet in crossflow created by a direct numerical simulation. | 1408x1080x1100 | float32 | 1408x1080x1100 (6.2 GB) | 1x1x1 | +| Synthetic Truss Scan | A simulated CT scan of an 8x8x8 octet truss with five defects on the front side of the object. The defects are bent strut, broken strut, missing strut, dross, and thin strut. | 1200x1200x1200 | float32 | 1200x1200x1200 (6.4 GB) | 1x1x1 | +| Richtmyer-Meshkov Instability | Entropy field (timestep 160) of Richtmyer-Meshkov instability simulation. | 2048x2048x1920 | uint8 | 2048x2048x1920 (7.5 GB) | 1x1x1 | +| 3DNeurons15Sept2016 | The neurons are macaque visual cortical neurons labeled with TdTomato fluorescent proteins. | 2048x2048x1718 | uint16 | 2048x2048x1718 (13.4 GB) | 0.267345x0.267345x0.5 | +| Wood Branch | A microCT scan of a dried wood branch (hazelnut). | 2048x2048x2048 | uint16 | 2048x2048x2048 (16.0 GB) | 1.8e-05x1.8e-05x1.8e-05 | +| Cardiac Volume (Porcine) | Volumes were obtained by way of computed tomography (CT) imaging on excised, postmortem porcine hearts. Alginate curing agents were injected into ventricles to provide rigidity and radiopaque agents were injected into the coronary arteries to distinguish microvasculature from the rest of the tissue. 
| 2048x2048x2612 | int16 | 2048x2048x2612 (20.4 GB) | 0.03557x0.03557x0.03557 | +| Forced Isotropic Turbulence | Pressure field of a direct numerical simulation of forced isotropic turbulence. | 4096x4096x4096 | float32 | 4096x4096x4096 (256.0 GB) | 1x1x1 | +| Rotating Stratified Turbulence | Temperature field of a direct numerical simulation of rotating stratified turbulence. | 4096x4096x4096 | float32 | 4096x4096x4096 (256.0 GB) | 1x1x1 | +| Turbulent Channel Flow | A pressure field from a direct numerical simulation of fully developed flow at different Reynolds numbers in a plane channel, performed with the POONGBACK code, which uses the spectral numerical method of Kim, Moin and Moser (J. Fluid Mech. vol 177, page 133). | 10240x7680x1536 | float64 | 10240x7680x1536 (900.0 GB) | 1x1x1 | diff --git a/download_and_organize.py b/download_and_organize.py new file mode 100644 index 0000000000000000000000000000000000000000..2a8c37cc716abf79eb4ae8f7b20c50dc6c36ccff --- /dev/null +++ b/download_and_organize.py @@ -0,0 +1,91 @@ +import json +import os +import subprocess +import time +import zipfile + +# Load dataset information +with open('datasets_info.json', 'r') as f: + datasets = json.load(f) + +# Filter datasets under 512MB, excluding existing ones +existing_datasets = [] +datasets_to_download = [d for d in datasets if d['size_mb'] < 512 and d['id'] not in existing_datasets] + +print(f"Will download and organize {len(datasets_to_download)} datasets") + +# Process each dataset +for i, dataset in enumerate(datasets_to_download): + dataset_id = dataset['id'] + print(f"\n[{i+1}/{len(datasets_to_download)}] Processing {dataset_id}...") + + # Create directory structure + os.makedirs(f"sci_volume_data/{dataset_id}/data", exist_ok=True) + + # Download the dataset + url = dataset['download_url'] + output_file = f"sci_volume_data/{dataset_id}/data/{dataset['filename']}" + + if not os.path.exists(output_file): + print(f" Downloading {dataset['filename']} 
({dataset['size_str']})...") + # -f fails on HTTP errors (instead of saving an error page as .raw); -L follows redirects + result = subprocess.run(['curl', '-fL', '-o', output_file, url], capture_output=True) + if result.returncode != 0: + print(f" ERROR downloading {dataset_id}") + continue + time.sleep(0.5) # Be nice to the server + else: + print(f" File already exists, skipping download") + + # Create metadata file + metadata_file = f"sci_volume_data/{dataset_id}/data/{dataset_id}.txt" + with open(metadata_file, 'w') as f: + f.write(f"{dataset['name']}\n") + f.write(f"Description: {dataset['description']}\n") + f.write(f"Data Type: {dataset['data_type']}\n") + f.write(f"Data Byte Order: little Endian\n") + f.write(f"Data Spacing: {dataset['spacing']}\n") + f.write(f"Data Extent: {dataset['dimensions']}\n") + + print(f" Created directory structure and metadata") + +print(f"\nCompleted processing {len(datasets_to_download)} datasets") + +# Download BBBC012 dataset for napari_mcp_evals +print(f"\nDownloading BBBC012 dataset for napari_mcp_evals...") + +# Create the napari_mcp_evals/data directory +napari_data_dir = "napari_mcp_evals/data" +os.makedirs(napari_data_dir, exist_ok=True) + +# Download the BBBC012 dataset +bbbc_url = "https://data.broadinstitute.org/bbbc/BBBC012/BBBC012_v1_images.zip" +bbbc_zip_file = os.path.join(napari_data_dir, "BBBC012_v1_images.zip") + +if not os.path.exists(os.path.join(napari_data_dir, "BBBC012_v1_images")): + print(f" Downloading BBBC012_v1_images.zip...") + result = subprocess.run(['curl', '-fL', '-o', bbbc_zip_file, bbbc_url], capture_output=True) + + if result.returncode == 0: + print(f" Download completed successfully") + + # Unzip the file + print(f" Extracting BBBC012_v1_images.zip...") + try: + with zipfile.ZipFile(bbbc_zip_file, 'r') as zip_ref: + zip_ref.extractall(napari_data_dir) + print(f" Extraction completed") + + # Delete the zip file + os.remove(bbbc_zip_file) + print(f" Cleaned up zip file") + + except zipfile.BadZipFile: + print(f" ERROR: Downloaded file is not a valid zip archive") + except Exception as e: + 
print(f" ERROR during extraction: {e}") + else: + print(f" ERROR downloading BBBC012 dataset") +else: + print(f" BBBC012 dataset already exists, skipping download") + +print(f"\nAll processing completed!") \ No newline at end of file diff --git a/main/bonsai/.DS_Store b/main/bonsai/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..454d22d260928d831936a649e3f9e79f177b705a Binary files /dev/null and b/main/bonsai/.DS_Store differ diff --git a/main/bonsai/GS/.DS_Store b/main/bonsai/GS/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..5008ddfcf53c02e82d7eee2e57c38e5672ef89f6 Binary files /dev/null and b/main/bonsai/GS/.DS_Store differ diff --git a/main/bonsai/GS/bonsai_gs.pvsm b/main/bonsai/GS/bonsai_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..ac26f52d295649eb9853ed29fb0e6e5155b249f0 --- /dev/null +++ b/main/bonsai/GS/bonsai_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa03b91b2e71ffbfef15c26dce85b93887ee4ffd1461759c11a46f640f604ac0 +size 235372 diff --git a/main/bonsai/GS/bonsai_gs.py b/main/bonsai/GS/bonsai_gs.py new file mode 100644 index 0000000000000000000000000000000000000000..077ec38b1350b01dbe70b07c9af99963765cc12e --- /dev/null +++ b/main/bonsai/GS/bonsai_gs.py @@ -0,0 +1,87 @@ +#!/usr/bin/env pvpython + +import os +from paraview.simple import * + +def create_bonsai_visualization(): + # — Paths & setup — + base = os.path.abspath(os.path.join(__file__, '..', '..')) + raw_file = os.path.join(base, 'data', 'bonsai_256x256x256_uint8.raw') + state_dir = os.path.join(base, 'results', 'pvpython_state') + state = os.path.join(state_dir, 'bonsai.pvsm') + os.makedirs(state_dir, exist_ok=True) + if not os.path.isfile(raw_file): + raise FileNotFoundError(f"Missing raw: {raw_file}") + + # — 1) Load the RAW image — + reader = ImageReader(FileNames=[raw_file]) + reader.DataScalarType = 'unsigned char' + reader.DataByteOrder = 'LittleEndian' + 
reader.DataExtent = [0, 255, 0, 255, 0, 255] + reader.DataSpacing = [1.0, 1.0, 1.0] + reader.FileDimensionality = 3 + reader.UpdatePipeline() + + # — 2) Volume render setup — + view = GetActiveViewOrCreate('RenderView') + view.BackgroundColorMode = 'Single Color' + view.Background = [1, 1, 1] + + disp = Show(reader, view) + disp.SetRepresentationType('Volume') + disp.ColorArrayName = ['POINTS', 'ImageFile'] + view.ResetCamera() + + # — 3) Transfer functions from extracted GS state — + ctf = GetColorTransferFunction('ImageFile') + ctf.ColorSpace = 'RGB' + ctf.NumberOfTableValues = 1024 + ctf.RGBPoints = [ + 0.000, 0.780, 0.522, 0.000, + 37.564, 0.847, 0.565, 0.000, + 61.402, 0.796, 0.757, 0.722, + 88.853, 0.753, 0.753, 0.753, + 118.470, 0.804, 0.737, 0.694, + 129.306, 0.686, 0.357, 0.047, + 156.756, 0.678, 0.345, 0.024, + 239.108, 0.667, 0.333, 0.000, + 255.000, 0.706, 0.016, 0.149 + ] + + otf = GetOpacityTransferFunction('ImageFile') + otf.Points = [ + 0.000, 0.000, 0.5, 0.0, + 32.507, 0.000, 0.5, 0.0, + 32.507, 0.360, 0.5, 0.0, + 39.731, 0.455, 0.5, 0.0, + 41.176, 0.000, 0.5, 0.0, + 63.569, 0.000, 0.5, 0.0, + 63.569, 0.511, 0.5, 0.0, + 89.575, 0.412, 0.5, 0.0, + 100.411, 0.000, 0.5, 0.0, + 163.980, 0.002, 0.5, 0.0, + 163.980, 0.567, 0.5, 0.0, + 231.161, 0.649, 0.5, 0.0, + 241.275, 0.433, 0.5, 0.0, + 255.000, 1.000, 0.5, 0.0 + ] + + disp.LookupTable = ctf + disp.ScalarOpacityFunction = otf + + # — 4) Camera & save — + cam = view.GetActiveCamera() + cam.SetPosition(400, 350, 450) + cam.SetFocalPoint(128, 128, 128) + cam.SetViewUp(1, 0, 0) + view.ResetCamera() + cam.Elevation(15) + cam.Azimuth(30) + cam.Zoom(1.0) + + view.StillRender() + SaveState(state) + print(f"[✔] Saved gold‑standard PVSM:\n {state}") + +if __name__ == '__main__': + create_bonsai_visualization() diff --git a/main/bonsai/GS/gs_diagonal_view.png b/main/bonsai/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..f1d944ae5214517a458ecb2fab9ac68a379a27c0 --- 
/dev/null +++ b/main/bonsai/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5deab399f55098d4f79d2ca799513e730e0d80cd5954c6d4b36bcdb1553fd3a +size 589653 diff --git a/main/bonsai/GS/gs_front_view.png b/main/bonsai/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..f118f43bd6943bfb55c2146b8d19e96b28cdcc3c --- /dev/null +++ b/main/bonsai/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:205de6ffecfe0ac97b9a05cb4e0e48b542e05a622d8e31766fe929617ad39097 +size 350780 diff --git a/main/bonsai/GS/gs_side_view.png b/main/bonsai/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..1f13b925b7fce310f9a887690a832baae1c82ecb --- /dev/null +++ b/main/bonsai/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba458a49d0e1bba5fa6267b5582e314be755a633892a18d9d61bcec781c4dbaf +size 477066 diff --git a/main/bonsai/data/bonsai.txt b/main/bonsai/data/bonsai.txt new file mode 100644 index 0000000000000000000000000000000000000000..5438defe3ff97bdb1225fa1205948831a1f1e998 --- /dev/null +++ b/main/bonsai/data/bonsai.txt @@ -0,0 +1,5 @@ +Bonsai (Scalar) +Data Scalar Type: unsigned char +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 \ No newline at end of file diff --git a/main/bonsai/data/bonsai_256x256x256_uint8.raw b/main/bonsai/data/bonsai_256x256x256_uint8.raw new file mode 100644 index 0000000000000000000000000000000000000000..deaf734a2b583be6f17fdfead0bfc2551c0a923c --- /dev/null +++ b/main/bonsai/data/bonsai_256x256x256_uint8.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:288d2818cffb50d2e2c9e3b5b0a2520d09b0f5051bb63cca315f3fc7a894af80 +size 16777216 diff --git a/main/bonsai/task_description.txt b/main/bonsai/task_description.txt new file mode 100644 index 
0000000000000000000000000000000000000000..877165cccfcaa43edab8f0c41e9cff7fa4af3822 --- /dev/null +++ b/main/bonsai/task_description.txt @@ -0,0 +1,14 @@ +Task: + +Load the bonsai dataset from "bonsai/data/bonsai_256x256x256_uint8.raw", the information about this dataset: +Bonsai (Scalar) +Data Scalar Type: unsigned char +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 + +Then visualize it with volume rendering, modify the transfer function and reach the visualization goal as: "A potted tree with a brown pot, silver branch and golden leaves." + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "bonsai/results/{agent_mode}/bonsai.pvsm" \ No newline at end of file diff --git a/main/bonsai/visualization_goals.txt b/main/bonsai/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5e0aee709559d873be862802839a4dc723adfbc --- /dev/null +++ b/main/bonsai/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result achieve the overall goal of showing a potted tree with the specified colors? + +2. Brown Pot Visualization: Does the result show the pot portion in brown color? + +3. Silver Branch Visualization: Does the result show the branch/trunk portion in silver color? + +4. Golden Leaves Visualization: Does the result show the leaves portion in golden color? \ No newline at end of file diff --git a/main/carp/GS/answers.txt b/main/carp/GS/answers.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7767b4beb130bedcefdd9c93a78beb6b9da5767 --- /dev/null +++ b/main/carp/GS/answers.txt @@ -0,0 +1,13 @@ +Q1: How many distinct fins are visible in the carp skeleton? List their anatomical names and corresponding counts. + +A1: There are 7 distinct fins in total. 
They are: +1 Dorsal fin (on the top, mid-back region) +2 Pectoral fins (one on each side, near the skull) +2 Pelvic fins (smaller, paired fins on the underside, behind pectorals) +1 Anal fin (on the underside, near the tail) +1 Caudal fin (tail fin, fan-shaped) + + +Q2: Estimate the ratio of skull length to overall body length, based on the visualization. + +A2: Skull length ≈ 20-22% of overall body length. The ratio of skull length to overall body length is approximately 1:4.5 \ No newline at end of file diff --git a/main/carp/GS/carp_gs.pvsm b/main/carp/GS/carp_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..8f96a832c3678b959fb0a4682ccbc9c0583a9e4f --- /dev/null +++ b/main/carp/GS/carp_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b6b1f8ef7f14cc0939313dece7a3f73725efa29f0b3242827a3205ee27d5d55 +size 234004 diff --git a/main/carp/GS/gs_diagonal_view.png b/main/carp/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..748431ac47f997c983b80115594db4b8c2fd2770 --- /dev/null +++ b/main/carp/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3187f1a9c69d4d603694861a2182d0a41bbf516675d535a0c460e5b6375ee20 +size 105726 diff --git a/main/carp/GS/gs_front_view.png b/main/carp/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..9f7dde44733bf444f583b5bf64037379c4c6e219 --- /dev/null +++ b/main/carp/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:196703c0863f4ed986398f2f66d71ed9c21fa2f55915f400a483cbb8cfb3e3b3 +size 39051 diff --git a/main/carp/GS/gs_side_view.png b/main/carp/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..de265668880f90ed07b5203eafddfa5253eb436f --- /dev/null +++ b/main/carp/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:3c2abebbde4f8a5188b3350afd0c219ad503ae9a01af4b39ce7f90ff1041ca97 +size 90260 diff --git a/main/carp/data/carp.txt b/main/carp/data/carp.txt new file mode 100644 index 0000000000000000000000000000000000000000..515115376645cadd80d9465ddb4b0c0cca5bd171 --- /dev/null +++ b/main/carp/data/carp.txt @@ -0,0 +1,5 @@ +Carp (Scalar) +Data Scalar Type: unsigned short +Data Byte Order: little Endian +Data Spacing: 0.78125x0.390625x1 +Data Extent: 256x256x512 \ No newline at end of file diff --git a/main/carp/data/carp_256x256x512_uint16.raw b/main/carp/data/carp_256x256x512_uint16.raw new file mode 100644 index 0000000000000000000000000000000000000000..0ccffa82bd189a242a4fc9d60cc61c0fe87ae8b6 --- /dev/null +++ b/main/carp/data/carp_256x256x512_uint16.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0744e565584393e8e2e561af60d128d68426832faade48d70c2a49df23907f38 +size 67108864 diff --git a/main/carp/task_description.txt b/main/carp/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..e0e21920e40d386c3fb6abf33443272faae927b8 --- /dev/null +++ b/main/carp/task_description.txt @@ -0,0 +1,26 @@ +Task: + +Load the carp dataset from "carp/data/carp_256x256x512_uint16.raw", the information about this dataset: +Carp (Scalar) +Data Scalar Type: unsigned short +Data Byte Order: little Endian +Data Spacing: 0.78125x0.390625x1 +Data Extent: 256x256x512 + +Instructions: + +1. Load the dataset into ParaView. + +2. Apply volume rendering to visualize the carp skeleton. + +3. Adjust the transfer function to highlight only the bony structures in an X-ray style (suppressing soft tissue). + +4. Optimize the viewpoint to display the full skeleton, ensuring the head, spine, and fins are all clearly visible in a single frame. + +5. Analyze the visualization and answer the following questions: +Q1: How many distinct fins are visible in the carp skeleton? List their anatomical names and corresponding counts. 
+Q2: Estimate the ratio of skull length to overall body length, based on the visualization. + +6. Save your work: +Save the ParaView state as "carp/results/{agent_mode}/carp.pvsm". +Save the answers to the analysis questions in plain text as "carp/results/{agent_mode}/answers.txt". \ No newline at end of file diff --git a/main/carp/visualization_goals.txt b/main/carp/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..1d6b01bd3b03e50bee43c0d1709a7c0b9d4478df --- /dev/null +++ b/main/carp/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Bone Isolation: Are the bones clearly visible while soft tissue and background are suppressed? Thin fin rays should be distinguishable without major loss. + +2. Viewpoint Selection: Does the chosen viewpoint display the entire carp skeleton (head, spine, ribs, fins, tail) without critical occlusion? + +3. X-ray Appearance: Does the visualization resemble an X-ray (monochrome or grayscale, transparent look, consistent lighting)? + +4. Correct Data Setup: Was the dataset loaded with proper spacing (0.78125 × 0.390625 × 1.0)? The carp skeleton should appear in its correct proportions without distortion (i.e., the fish shape looks anatomically normal). 
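Goal 4 above can be checked mechanically before any rendering is done. The following is a minimal, illustrative sketch (not part of the benchmark; it assumes NumPy is available and that a local copy of the raw file sits at the path from the task description) of how the extent and spacing in `carp/data/carp.txt` translate into the physical proportions ParaView reproduces when `DataSpacing` is set correctly:

```python
# Sanity check for the carp metadata: physical extent = voxel count * spacing.
# dims/spacing are taken from carp/data/carp.txt; the file path and the
# z-slowest axis-order convention are assumptions of this sketch.
import os
import numpy as np

dims = (256, 256, 512)              # x, y, z extent from carp.txt
spacing = (0.78125, 0.390625, 1.0)  # per-voxel spacing from carp.txt, same order

# Physical size along each axis; these ratios are what the rendered fish
# should exhibit if the dataset was loaded with the proper spacing.
physical = [d * s for d, s in zip(dims, spacing)]
print(physical)  # [200.0, 100.0, 512.0] -- the fish is longest along z

# Open SciVis .raw volumes are headerless and little-endian, with the last
# listed axis varying slowest, so the uint16 file reads as a (z, y, x) array:
path = "carp/data/carp_256x256x512_uint16.raw"
if os.path.exists(path):
    vol = np.fromfile(path, dtype="<u2").reshape(dims[::-1])
    print(vol.shape)  # (512, 256, 256)
```

A visibly squashed or stretched fish in the result usually means the spacing was left at 1x1x1 rather than the values above.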
\ No newline at end of file diff --git a/main/chameleon/GS/chameleon.pvsm b/main/chameleon/GS/chameleon.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..7d6714ec5ab8a064d18ef98cd9d13a6cd0813d83 --- /dev/null +++ b/main/chameleon/GS/chameleon.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f19cabc7c0a6fd449d07b11fd99e45838aeacdc11ce082d010fd522b3151a4d +size 248359 diff --git a/main/chameleon/GS/gs_diagonal_view.png b/main/chameleon/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..cabe27c5ee62fe756b978cfae2a42b551c376dc5 --- /dev/null +++ b/main/chameleon/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a501d057b7ebc55758d2c9cd235fd9612c616ac349ddf4353c5acddcf794dad +size 238962 diff --git a/main/chameleon/GS/gs_front_view.png b/main/chameleon/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..d726170b985832b13db3b1b360a13b19eead85d8 --- /dev/null +++ b/main/chameleon/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce6b385fe4af6f0377168bbd4569753f7cff98edbf936f41f35efe40115ecd5d +size 301162 diff --git a/main/chameleon/GS/gs_side_view.png b/main/chameleon/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..73f8e479a25f84dd798e1b7e4a7a4c8022f9268c --- /dev/null +++ b/main/chameleon/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b51228a266cf780126c91afd5de6f8990e442503a2d78d884f23d8a7c63acd0 +size 131681 diff --git a/main/chameleon/data/chameleon.txt b/main/chameleon/data/chameleon.txt new file mode 100644 index 0000000000000000000000000000000000000000..012f111f7ccd4288866d1b972b669396ba101765 --- /dev/null +++ b/main/chameleon/data/chameleon.txt @@ -0,0 +1,5 @@ +chameleon (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 
256x256x270 +Number of Scalar Components: 1 diff --git a/main/chameleon/data/chameleon_256x256x270_float32.raw b/main/chameleon/data/chameleon_256x256x270_float32.raw new file mode 100644 index 0000000000000000000000000000000000000000..db90cbd9f6294c937eeb9b6c5d78beafb039d1ec --- /dev/null +++ b/main/chameleon/data/chameleon_256x256x270_float32.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bff2380bd60471fedc7e67d016387d08073dbe119733944b9cbb8c6196201cc4 +size 70778880 diff --git a/main/chameleon/task_description.txt b/main/chameleon/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..672a08926c5cf7121224333000561ea157f0950f --- /dev/null +++ b/main/chameleon/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the chameleon dataset from "chameleon/data/chameleon_256x256x270_float32.raw", the information about this dataset: +chameleon (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 256x256x270 +Number of Scalar Components: 1 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Apply volume rendering to visualize the chameleon dataset. + +Adjust the transfer function to highlight the bony structures and skin in an X-ray style. + +Adjust the camera position and focus on the head part of the chameleon. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the ParaView state as "chameleon/results/{agent_mode}/chameleon.pvsm" diff --git a/main/chameleon/visualization_goals.txt b/main/chameleon/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..3d5279fce82259300815eee2d7588cda352b9d20 --- /dev/null +++ b/main/chameleon/visualization_goals.txt @@ -0,0 +1,7 @@ +1. 
Overall Visualization Goal: Does the result present a clean, X-ray–style volume rendering where the chameleon’s bony structures are clearly emphasized and soft tissue is faint but discernible? + +2. Data Loading Correctness: Is the RAW volume loaded with the specified metadata (float32, little-endian, 256×256×270, 1 component) so that the histogram looks reasonable and the anatomy is not flipped or distorted? + +3. Transfer Function Quality: Does the grayscale transfer function make low-intensity tissue mostly transparent while assigning higher opacity to bones/skin ridges, yielding good depth cues without over-saturation or banding? + +4. Camera & Framing: Is the camera positioned and zoomed to focus on the chameleon’s head, keeping it sharply framed (no clipping), with a stable viewpoint that highlights key anatomical details? diff --git a/main/engine/GS/engine.pvsm b/main/engine/GS/engine.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..4deb835f39fdddfb68339183e87314ac41219114 --- /dev/null +++ b/main/engine/GS/engine.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57d77bac59a328eb14ed6e2eed84a20ad93582462278d4e30301b8a342dc7045 +size 259002 diff --git a/main/engine/GS/gs_diagonal_view.png b/main/engine/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..7049818112f619e5fa3a13baa4eed8bdcc211d49 --- /dev/null +++ b/main/engine/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f6faa5b25cb0ab6f6e7fc6c56758477e735ef21319ef893d9a603009117d2b0 +size 696438 diff --git a/main/engine/GS/gs_front_view.png b/main/engine/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..24088f6a7d58dad28a262651c8ea8f4b8b95d372 --- /dev/null +++ b/main/engine/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:5d1d9d3a7bfa0800461a7ac286c40f76cc381c2a62972657e27dd301087cb7c7 +size 604972 diff --git a/main/engine/GS/gs_side_view.png b/main/engine/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..8f264a082b5e6dee742c503bd9e41c8a051ccfb3 --- /dev/null +++ b/main/engine/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94e7b786710075226251199644ba8555fcffda7ad9c31786bf40ea795dd222fa +size 803341 diff --git a/main/engine/data/engine.txt b/main/engine/data/engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..17271754e31540005e0d5774cd4dcfd536e53371 --- /dev/null +++ b/main/engine/data/engine.txt @@ -0,0 +1,5 @@ +engine (Scalar) +Data Scalar Type: uint8 +Data Byte Order: little Endian +Data Extent: 256x256x128 +Number of Scalar Components: 1 diff --git a/main/engine/data/engine_256x256x128_uint8.raw b/main/engine/data/engine_256x256x128_uint8.raw new file mode 100644 index 0000000000000000000000000000000000000000..4dfc9391c1fc389fb6b9921a241a8f2e9b254269 --- /dev/null +++ b/main/engine/data/engine_256x256x128_uint8.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15cf29cdde69cd6c8b5ab9eeef4d1bb21f276056da4111cb5069ebfbe5b6ff90 +size 8388608 diff --git a/main/engine/task_description.txt b/main/engine/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc51b744b432c0b5b4011bcb3b2b9414e35178ad --- /dev/null +++ b/main/engine/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the engine dataset from "engine/data/engine_256x256x128_uint8.raw", the information about this dataset: +engine (Scalar) +Data Scalar Type: uint8 +Data Byte Order: little Endian +Data Extent: 256x256x128 +Number of Scalar Components: 1 + +Instructions: + +1. Load the dataset into ParaView. + +2. Apply volume rendering to visualize the engine dataset. + +3. 
Adjust the transfer function so that the outer part is more transparent and the inner part more solid. Use light blue for the outer part and orange for the inner part. + +4. Save your work: +Save the ParaView state as "engine/results/{agent_mode}/engine.pvsm". diff --git a/main/engine/visualization_goals.txt b/main/engine/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..e2ab34836b1e65178464d2f038bfda9373c7cb9f --- /dev/null +++ b/main/engine/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result use volume rendering to clearly present the internal and external structures of the engine dataset? + +2. Structural Clarity: Does the visualization emphasize depth so that the outer layers do not obscure the inner structures? + +3. Transfer Function Transparency: Is the outer region rendered with higher transparency and the inner region more solid, achieving a clear layering effect? + +4. Transfer Function Color Mapping: Are colors correctly assigned so that the outer part is light blue and the inner part is orange, enhancing structural contrast? diff --git a/main/foot/GS/foot_gs.pvsm b/main/foot/GS/foot_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..300655a46f367c99955f61f013537e2c905adc22 --- /dev/null +++ b/main/foot/GS/foot_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f75a4a23f18a31173f1dbeca7f27cd6c4e384e533c4c3f14f190f5577c394b2 +size 211051 diff --git a/main/foot/data/foot.txt b/main/foot/data/foot.txt new file mode 100644 index 0000000000000000000000000000000000000000..73e39fe1e35957500020ce860fea749f75ffe193 --- /dev/null +++ b/main/foot/data/foot.txt @@ -0,0 +1,6 @@ +Foot +Description: Rotational C-arm x-ray scan of a human foot. Tissue and bone are present in the dataset. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 diff --git a/main/foot/data/foot_256x256x256_uint8.raw b/main/foot/data/foot_256x256x256_uint8.raw new file mode 100644 index 0000000000000000000000000000000000000000..863128755f91a0fbc66d4d621ecf1f912a97e34b --- /dev/null +++ b/main/foot/data/foot_256x256x256_uint8.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5b296aa009eb1e84316bc8694dd4ec196f0ad9e1442fb4f08cf44c70e3b7dc4 +size 16777216 diff --git a/main/foot/task_description.txt b/main/foot/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..79bd1512689cf0d1a90a9d5ab08bd93739084b4e --- /dev/null +++ b/main/foot/task_description.txt @@ -0,0 +1,20 @@ +Task: + +Load the Foot dataset from "foot/data/foot_256x256x256_uint8.raw", the information about this dataset: +Foot +Description: Rotational C-arm x-ray scan of a human foot. Tissue and bone are present in the dataset. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Visualize the anatomical structures: +1. Apply volume rendering with an X-ray transfer function that distinguishes soft tissues and bones, rendering bones in a darker color and soft tissue in a lighter color. + +2. Analyze the visualization and answer the following questions: +Q1: In the visualization, which structures are fully visible: the phalanges, the metatarsals, or both? + +3. Save your work: +Save the ParaView state as "foot/results/{agent_mode}/foot.pvsm". +Save the answers to the analysis questions in plain text as "foot/results/{agent_mode}/answers.txt". 
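ParaView stores a color transfer function as a flat `[value, r, g, b, …]` list (the `RGBPoints` layout that `main/get_tf_from_gs.py` walks with stride 4). A sketch of building such a list for an X-ray-style grayscale map — soft tissue light, bone dark; the control points here are illustrative guesses, not the ground-truth values:

```python
def grayscale_rgb_points(points):
    """Flatten (scalar_value, gray_level) pairs into ParaView's
    RGBPoints layout: [value, r, g, b, value, r, g, b, ...]."""
    flat = []
    for value, gray in points:
        flat.extend([float(value), float(gray), float(gray), float(gray)])
    return flat

# Illustrative X-ray mapping for a uint8 volume: low intensities
# (soft tissue) near white, high intensities (bone) near black.
rgb_points = grayscale_rgb_points([(0, 1.0), (80, 0.85), (160, 0.3), (255, 0.0)])
```

A list built this way could be assigned to a lookup table's `RGBPoints` property in pvpython; the exact intensity breakpoints would need tuning against the foot histogram.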
\ No newline at end of file diff --git a/main/foot/visualization_goals.txt b/main/foot/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..558dc9bb19616fbef6beb0af0f6f484680616bee --- /dev/null +++ b/main/foot/visualization_goals.txt @@ -0,0 +1,3 @@ +1. Overall Goal: Does the visualization effectively distinguish between different tissue types in the foot dataset? + +2. QA: The phalanges (toe bones) are clearly visible in full, but the metatarsals are only partially visible. \ No newline at end of file diff --git a/main/get_tf_from_gs.py b/main/get_tf_from_gs.py new file mode 100644 index 0000000000000000000000000000000000000000..4801f183427c8dab24c1d00d5ee6a54895fa2fd4 --- /dev/null +++ b/main/get_tf_from_gs.py @@ -0,0 +1,44 @@ +#!/usr/bin/env pvpython + +from paraview.simple import * +import os + +def load_state_and_inspect_transfer_functions(state_path): + if not os.path.isfile(state_path): + raise FileNotFoundError(f"State file not found: {state_path}") + + # Load the ParaView state file + LoadState(state_path) + print(f"[✔] Loaded state from: {state_path}") + + # Access the active view + view = GetActiveViewOrCreate('RenderView') + + # Iterate through all sources to find volume display + for source in GetSources().values(): + display = GetDisplayProperties(source, view=view) + if hasattr(display, 'LookupTable') and display.LookupTable: + lut = display.LookupTable + otf = display.ScalarOpacityFunction + + print("\n🎨 Color Transfer Function (LUT) — RGBPoints:") + for i in range(0, len(lut.RGBPoints), 4): + val = lut.RGBPoints[i] + r, g, b = lut.RGBPoints[i+1:i+4] + print(f" {val:.3f}: R={r:.3f}, G={g:.3f}, B={b:.3f}") + + print("\n🌫️ Opacity Transfer Function (OTF) — Points:") + for i in range(0, len(otf.Points), 4): + val = otf.Points[i] + alpha = otf.Points[i+1] + print(f" {val:.3f}: α={alpha:.3f}") + + return lut, otf + + print("❌ No volume rendering transfer function found.") + return None, None + +if __name__ == 
'__main__': + base_path = os.path.abspath(os.path.join(__file__, '..')) + state_file = os.path.join(base_path, 'bonsai_gs.pvsm') + load_state_and_inspect_transfer_functions(state_file) diff --git a/main/lobster/GS/lobster_gs.pvsm b/main/lobster/GS/lobster_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..537a1cbcee4c774d5abd5892a1b4b0fcc497392b --- /dev/null +++ b/main/lobster/GS/lobster_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4850853e90dc12c5b72c08a48dde7972da869ed6a1ffca9b43fae04e85618057 +size 378290 diff --git a/main/lobster/data/lobster.txt b/main/lobster/data/lobster.txt new file mode 100644 index 0000000000000000000000000000000000000000..a6c6fecfd29fc795bb3974fd97d3e58dd74db614 --- /dev/null +++ b/main/lobster/data/lobster.txt @@ -0,0 +1,6 @@ +Lobster +Description: CT scan of a lobster contained in a block of resin. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1.4 +Data Extent: 301x324x56 diff --git a/main/lobster/data/lobster_301x324x56_uint8.raw b/main/lobster/data/lobster_301x324x56_uint8.raw new file mode 100644 index 0000000000000000000000000000000000000000..b5cd7cf75040659a747943fe71c6ebcb1a2ac436 --- /dev/null +++ b/main/lobster/data/lobster_301x324x56_uint8.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:838de17d5bbca0134e56831e311d3721df3c2d8c3f9d4275122386f8fa0e506a +size 5461344 diff --git a/main/lobster/task_description.txt b/main/lobster/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c529c9d831c9df0c7abad608f59e8454c453d73 --- /dev/null +++ b/main/lobster/task_description.txt @@ -0,0 +1,22 @@ +Task: + +Load the Lobster dataset from "lobster/data/lobster_301x324x56_uint8.raw", the information about this dataset: +Lobster +Description: CT scan of a lobster contained in a block of resin. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1.4 +Data Extent: 301x324x56 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Visualize the scanned specimen: +1. Create an isosurface at the specimen boundary; find a proper isovalue that shows the whole structure. + +2. Use natural colors appropriate for the specimen (red-orange for the lobster). + +3. Analyze the visualization and answer the following questions: +Q1: How many walking legs does the lobster in the visualization have? + +4. Save your work: +Save the ParaView state as "lobster/results/{agent_mode}/lobster.pvsm". +Save the answers to the analysis questions in plain text as "lobster/results/{agent_mode}/answers.txt". \ No newline at end of file diff --git a/main/lobster/visualization_goals.txt b/main/lobster/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..215a90c16cd7135ad3d954e4bd5ee02da5c34df4 --- /dev/null +++ b/main/lobster/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Goal: Does the visualization clearly show the structure and details of the Lobster? + +2. Boundary Clarity: Are surface details and boundaries of the lobster well-defined? + +3. Correct Color: Does the color of the lobster mimic that of a real one? (red-orange) + +4. QA: The lobster has only 7 walking legs; one on the right side of its body is missing. 
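Step 1's "find a proper isovalue" is typically done by inspecting the intensity histogram; for a uint8 CT scan where specimen and resin/background form two modes, Otsu's method is one common automatic heuristic. A sketch — the synthetic histogram and the choice of Otsu are illustrative assumptions, not the benchmark's procedure:

```python
def otsu_isovalue(hist):
    """Pick an isovalue from a 256-bin intensity histogram by Otsu's
    method: choose the threshold maximizing between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = 0.0   # cumulative background weight
    sum_bg = 0.0 # cumulative background intensity sum
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The resulting threshold would seed a Contour filter's isovalue; visual inspection is still needed to confirm thin structures like antennae survive.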
\ No newline at end of file diff --git a/main/screenshot_gs.sh b/main/screenshot_gs.sh new file mode 100644 index 0000000000000000000000000000000000000000..478874883c30e98747ceccfe05ae801f70f2718a --- /dev/null +++ b/main/screenshot_gs.sh @@ -0,0 +1,9 @@ +#!/bin/bash +# Example usage of screenshot_helper.py for the carp dataset, +# which generates three screenshots (front, side, and diagonal views) for a given ParaView state file. +# --data_directory is needed to load the data files correctly; +# otherwise the data paths stored in the state file are used. +# Optional: --result_state_path carp/results/carp_result.pvsm --result_img_path carp/results +pvpython screenshot_helper.py --gs_state_path carp/GS/carp_gs.pvsm \ +    --gs_img_path carp/GS \ +    --data_directory carp/data \ No newline at end of file diff --git a/main/screenshot_helper.py b/main/screenshot_helper.py new file mode 100644 index 0000000000000000000000000000000000000000..406493e378ceb61762e36bfe74fca68024e7ab02 --- /dev/null +++ b/main/screenshot_helper.py @@ -0,0 +1,120 @@ +""" +Helper script for taking screenshots from ParaView state files +""" +from paraview.simple import * +import os +import math + +def take_screenshots_from_state(state_path, output_dir, prefix="", data_directory=None): + """ + Load a ParaView state file and take 3 screenshots from different angles + + Args: + state_path (str): Path to the .pvsm state file + output_dir (str): Directory to save screenshots + prefix (str): Prefix for screenshot filenames + data_directory (str): Directory of raw data files for the state file + + Returns: + list: List of screenshot file paths + """ + if not os.path.exists(state_path): + raise FileNotFoundError(f"State file not found: {state_path}") + + # Load state file + if data_directory: + LoadState(state_path, data_directory=data_directory) + else: + LoadState(state_path) + + # Create output directory + os.makedirs(output_dir, exist_ok=True) + + # Get the active view + renderView = GetActiveViewOrCreate('RenderView') + + # Reset 
camera to fit all data + renderView.ResetCamera() + + # Get current camera position for reference + camera = renderView.GetActiveCamera() + original_position = camera.GetPosition() + original_focal_point = camera.GetFocalPoint() + + # Calculate distance from focal point to position + distance = math.sqrt(sum([(original_position[i] - original_focal_point[i])**2 for i in range(3)])) + + # Define three different camera angles + angles = [ + { + 'name': 'front', + 'position': [original_focal_point[0], original_focal_point[1], original_focal_point[2] + distance], + 'up': [0, 1, 0] + }, + { + 'name': 'side', + 'position': [original_focal_point[0] + distance, original_focal_point[1], original_focal_point[2]], + 'up': [0, 0, 1] + }, + { + 'name': 'diagonal', + 'position': [original_focal_point[0] + distance*0.7, original_focal_point[1] + distance*0.7, original_focal_point[2] + distance*0.7], + 'up': [0, 0, 1] + } + ] + + screenshot_paths = [] + + # Take screenshots from different angles + for angle in angles: + # Set camera position + camera.SetPosition(angle['position']) + camera.SetFocalPoint(original_focal_point) + camera.SetViewUp(angle['up']) + + # Reset camera to ensure proper framing + renderView.ResetCamera() + + # Render the view + Render() + + # Save screenshot + filename = f"{prefix}{angle['name']}_view.png" if prefix else f"{angle['name']}_view.png" + screenshot_path = os.path.join(output_dir, filename) + SaveScreenshot(screenshot_path, renderView, ImageResolution=[1920, 1080]) + screenshot_paths.append(screenshot_path) + + return screenshot_paths + +def main(): + import argparse + parser = argparse.ArgumentParser(description="Take 3 screenshots from ParaView state files.") + parser.add_argument('--gs_state_path', type=str, help='Path to ground truth state file (.pvsm)') + parser.add_argument('--gs_img_path', type=str, help='Directory to save ground truth screenshots') + parser.add_argument('--result_state_path', type=str, help='Path to result state file 
(.pvsm)') + parser.add_argument('--result_img_path', type=str, help='Directory to save result screenshots') + parser.add_argument('--data_directory', type=str, default=None, help='Directory containing raw data files (optional)') + args = parser.parse_args() + + # Validate argument pairs + if args.gs_state_path and not args.gs_img_path: + parser.error('If --gs_state_path is provided, --gs_img_path must also be provided.') + if args.gs_img_path and not args.gs_state_path: + parser.error('If --gs_img_path is provided, --gs_state_path must also be provided.') + if args.result_state_path and not args.result_img_path: + parser.error('If --result_state_path is provided, --result_img_path must also be provided.') + if args.result_img_path and not args.result_state_path: + parser.error('If --result_img_path is provided, --result_state_path must also be provided.') + + if args.gs_state_path and args.gs_img_path: + os.makedirs(args.gs_img_path, exist_ok=True) + print(f"Taking screenshots for ground truth: {args.gs_state_path} -> {args.gs_img_path}") + take_screenshots_from_state(args.gs_state_path, args.gs_img_path, prefix="gs_", data_directory=args.data_directory) + + if args.result_state_path and args.result_img_path: + os.makedirs(args.result_img_path, exist_ok=True) + print(f"Taking screenshots for result: {args.result_state_path} -> {args.result_img_path}") + take_screenshots_from_state(args.result_state_path, args.result_img_path, prefix="result_", data_directory=args.data_directory) + +if __name__ == "__main__": + main() diff --git a/main/solar-plume/GS/gs_diagonal_view.png b/main/solar-plume/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..a65804eaf18a53dddcf1a71f230c93038da51c8f --- /dev/null +++ b/main/solar-plume/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:389ff17e0718adca81b542b41e7bf1f6cc52da56add8600ed36fd5286a524bbe +size 276712 diff --git 
a/main/solar-plume/GS/gs_front_view.png b/main/solar-plume/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..36e3c628ca659986a16e5d80357612a470b7811e --- /dev/null +++ b/main/solar-plume/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d8a1d5bd88088aa214df8947880db0055651aecff8f7f151414b8eba4225576 +size 271091 diff --git a/main/solar-plume/GS/gs_side_view.png b/main/solar-plume/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..8b2681ce6ac9842e1c87b89ee10f24b3826b653d --- /dev/null +++ b/main/solar-plume/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:488eb19cf4add55afd0d60b0cd3ecd67b61d5062669a121dd6d9f0232c469751 +size 250801 diff --git a/main/solar-plume/GS/solar-plume.pvsm b/main/solar-plume/GS/solar-plume.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..a72b18f998862b2159a5aaf306bd9a41ccf12dbb --- /dev/null +++ b/main/solar-plume/GS/solar-plume.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8aaee6182c88f7332e784996355981f07db683c49e0c034eab9d5427bbc81e9c +size 471516 diff --git a/main/solar-plume/data/solar-plume.txt b/main/solar-plume/data/solar-plume.txt new file mode 100644 index 0000000000000000000000000000000000000000..340729d7defb9186d3b019c1b4a00b4850e93831 --- /dev/null +++ b/main/solar-plume/data/solar-plume.txt @@ -0,0 +1,5 @@ +solar-plume (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 126x126x512 +Number of Scalar Components: 3 diff --git a/main/solar-plume/data/solar-plume_126x126x512_float32_scalar3.raw b/main/solar-plume/data/solar-plume_126x126x512_float32_scalar3.raw new file mode 100644 index 0000000000000000000000000000000000000000..0a275e8bac2e6b041dc4efffca8c3fe6b1ae3933 --- /dev/null +++ b/main/solar-plume/data/solar-plume_126x126x512_float32_scalar3.raw @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:7818db54d4ea31e4cc60a26f6411fe04f2f37e773ed607b2036dd394b436f892 +size 97542144 diff --git a/main/solar-plume/task_description.txt b/main/solar-plume/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..b479be840da41bf825d5b4eb9d47a62e0bce6aa1 --- /dev/null +++ b/main/solar-plume/task_description.txt @@ -0,0 +1,18 @@ +Task: + +Load the solar-plume dataset from "solar-plume/data/solar-plume_126x126x512_float32_scalar3.raw", the information about this dataset: +solar-plume (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 126x126x512 +Number of Scalar Components: 3 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Add a “stream tracer” filter under the solar-plume data to display streamlines, set the "Seed Type" to "Point Cloud" and set the center of the point cloud to the 3D position [50, 50, 320] with a radius of 30, then hide the point cloud sphere. + +Add a "tube" filter under the "stream tracer" filter to enhance the streamline visualization. Set the radius to 0.5. In the pipeline browser panel, hide everything except the "tube" filter. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the ParaView state as "solar-plume/results/{agent_mode}/solar-plume.pvsm" diff --git a/main/solar-plume/visualization_goals.txt b/main/solar-plume/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbba8318a02a8955713898362251664b3264ce83 --- /dev/null +++ b/main/solar-plume/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal solar-plume flow structures using streamlines rendered as tubes, with emphasis on the region near [50, 50, 320]? + +2. 
Seeding (Point Cloud): Are streamlines seeded with a Point Cloud centered at [50, 50, 320] with a radius of 30, and is the seed sphere hidden? + +3. Streamline Visualization: Do the streamlines follow the flow patterns effectively and provide adequate coverage of the plume region? + +4. Tube Rendering & Visibility: Are the streamlines rendered as tubes with radius 0.5, and is only the Tube filter visible (Stream Tracer and seed display hidden)? diff --git a/main/supernova/.DS_Store b/main/supernova/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..fa801660695b481f75f1e5c148cf62f7e970b921 Binary files /dev/null and b/main/supernova/.DS_Store differ diff --git a/main/supernova/GS/gs_diagonal_view.png b/main/supernova/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..ff1bc21fd5de24cf82bddd16abd5a7d61e35caf9 --- /dev/null +++ b/main/supernova/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c09397358f712ef4da2a8ba31f3a6d474df7869c256fcc5d1231222338b1ef98 +size 278103 diff --git a/main/supernova/GS/gs_front_view.png b/main/supernova/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..5a6b3d0f57a415d9813416be17a3850f07cb9504 --- /dev/null +++ b/main/supernova/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4f79f852d76d0267e09517687c86b7ce4892b632ccdd078f524fd10bda8d796 +size 274309 diff --git a/main/supernova/GS/gs_side_view.png b/main/supernova/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..440622d048185791b043fd0fc9eff02d19a13b28 --- /dev/null +++ b/main/supernova/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73d10622c11fa07dc2c30a9215e04cd0f7f32588fbbd3596baac96b0b44da3f1 +size 319932 diff --git a/main/supernova/GS/supernova_gs.pvsm b/main/supernova/GS/supernova_gs.pvsm new file 
mode 100644 index 0000000000000000000000000000000000000000..65805d0a47d99bcf7765134ede3d2cfbf9dce0a7 --- /dev/null +++ b/main/supernova/GS/supernova_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22c09e249d30b845f08876fc48c05f0f338929e7c8f7a8e438f142900fa0dc11 +size 475414 diff --git a/main/supernova/data/supernova.txt b/main/supernova/data/supernova.txt new file mode 100644 index 0000000000000000000000000000000000000000..d991143c70fde3810e58221ec6daac3dfded6aed --- /dev/null +++ b/main/supernova/data/supernova.txt @@ -0,0 +1,5 @@ +Supernova (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 \ No newline at end of file diff --git a/main/supernova/data/supernova_256x256x256_float32.raw b/main/supernova/data/supernova_256x256x256_float32.raw new file mode 100644 index 0000000000000000000000000000000000000000..2282b381781b653d56d2566571a0afc368fdcb97 --- /dev/null +++ b/main/supernova/data/supernova_256x256x256_float32.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adb502e1a92bbd3aa773c0aa6679ca54d1e5e2405e0e9a289a6c2024cb37ef6b +size 67108864 diff --git a/main/supernova/task_description.txt b/main/supernova/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..195056005279debe5acef5f475e04659effeb36c --- /dev/null +++ b/main/supernova/task_description.txt @@ -0,0 +1,15 @@ +Task: + +Load the supernova dataset from "supernova/data/supernova_256x256x256_float32.raw", the information about this dataset: +Supernova (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract two isosurfaces. 
One of them should be red, showing areas with low density (isovalue 40, opacity 0.4), while the other should be blue, showing areas with high density (isovalue 150, opacity 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. Only make the two isosurfaces visible. + +Finally, save the ParaView state as "supernova/results/{agent_mode}/supernova.pvsm" \ No newline at end of file diff --git a/main/supernova/visualization_goals.txt b/main/supernova/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1a31d9c761c2d9c3b8edd1e431bc869d5752a59 --- /dev/null +++ b/main/supernova/visualization_goals.txt @@ -0,0 +1,5 @@ +1. Overall Visualization Goal: How well does the result achieve the overall goal of showing the supernova structure with two distinct isosurfaces representing different density regions? + +2. Does the red isosurface show low density areas (outside regions) with lower opacity? + +3. Does the blue isosurface show high density areas (inside regions) with higher opacity? 
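The stride-4 walk that `main/get_tf_from_gs.py` performs when printing LUT and opacity points can be factored into a single helper. A sketch, assuming ParaView's standard flat layouts — `[value, r, g, b, …]` for color and `[value, alpha, midpoint, sharpness, …]` for opacity:

```python
def group_points(flat, stride=4):
    """Group a flat ParaView transfer-function list into tuples.
    Color LUTs store [value, r, g, b, ...]; opacity functions store
    [value, alpha, midpoint, sharpness, ...] -- both stride 4."""
    return [tuple(flat[i:i + stride]) for i in range(0, len(flat), stride)]

# Illustrative opacity points echoing the supernova task's two
# isovalue/opacity pairs (midpoint 0.5 and sharpness 0.0 assumed):
opacity = group_points([40.0, 0.4, 0.5, 0.0, 150.0, 0.8, 0.5, 0.0])
# -> [(40.0, 0.4, 0.5, 0.0), (150.0, 0.8, 0.5, 0.0)]
```

Grouping the flat lists this way makes it straightforward to diff a result state's transfer function against the ground-truth state point by point.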
\ No newline at end of file diff --git a/main/tangaroa/GS/gs_diagonal_view.png b/main/tangaroa/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..62e22490fe40ae6d16b03277a83451108e63303b --- /dev/null +++ b/main/tangaroa/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daee9ebdbd99aea3bb7f4e80b4ca4888528629cd5e6548686696b7b29886503a +size 677285 diff --git a/main/tangaroa/GS/gs_front_view.png b/main/tangaroa/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..86d91db0d4cfe53674d07b7ba8c49f6297261217 --- /dev/null +++ b/main/tangaroa/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5f5ffda65b306dfdaad99692ea2925ccdb5b2689f2573c4926bdd2dbf2c9157 +size 444738 diff --git a/main/tangaroa/GS/gs_side_view.png b/main/tangaroa/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..90420f3afb089e9002dc40a4fd50cff462d8178c --- /dev/null +++ b/main/tangaroa/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93293ef234c66eda106f354bc2d01f2e16d9bd462b54ccd127ba84c749ef2a08 +size 523503 diff --git a/main/tangaroa/GS/tangaroa_gs.pvsm b/main/tangaroa/GS/tangaroa_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..1de5effe2381b7723838b50cf5ac0e89895831d8 --- /dev/null +++ b/main/tangaroa/GS/tangaroa_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fa459d8d68bf005629f20b755dc02ad1a25b81fc8a745cddf2f19683195c422 +size 471553 diff --git a/main/tangaroa/data/tangaroa.txt b/main/tangaroa/data/tangaroa.txt new file mode 100755 index 0000000000000000000000000000000000000000..1092f1987e58fe4002407265335ed406def543e6 --- /dev/null +++ b/main/tangaroa/data/tangaroa.txt @@ -0,0 +1,5 @@ +tangaroa (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 300x180x120 +Number 
of Scalar Components: 3 diff --git a/main/tangaroa/data/tangaroa_300x180x120_float32_scalar3.raw b/main/tangaroa/data/tangaroa_300x180x120_float32_scalar3.raw new file mode 100644 index 0000000000000000000000000000000000000000..67dbd809aec592b6f0998e1065e69527904089ae --- /dev/null +++ b/main/tangaroa/data/tangaroa_300x180x120_float32_scalar3.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52066e6f74e8322bb00961bf4c30073a2453ea248e20d14786008c659be5ecfc +size 77760000 diff --git a/main/tangaroa/task_description.txt b/main/tangaroa/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..e26b4ab5db57f33c60f8149d2d42777e54c5c63f --- /dev/null +++ b/main/tangaroa/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the tangaroa dataset from "tangaroa/data/tangaroa_300x180x120_float32_scalar3.raw", the information about this dataset: +tangaroa (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 300x180x120 +Number of Scalar Components: 3 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Apply the "Stream Tracer" filter, set the "Seed Type" to "Point Cloud", turn off "Show Sphere", set the center to [81.6814, 80.708, 23.5093], and the radius to 29.9. + +Add a "Ribbon" filter to the Stream Tracer results, set the width to 0.3, and set the Display representation to Surface. + +In the pipeline browser panel, hide everything except the Ribbon filter results. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the ParaView state as "tangaroa/results/{agent_mode}/tangaroa.pvsm" diff --git a/main/tangaroa/visualization_goals.txt b/main/tangaroa/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..76e7b86ed09f462c95a3856c016e206889a3e0e8 --- /dev/null +++ b/main/tangaroa/visualization_goals.txt @@ -0,0 +1,5 @@ +1. 
Overall Visualization Goal: How well does the result reveal the tangaroa flow structures using streamlines expanded into surfaces with the Ribbon filter? + +2. Streamline Seeding: Are streamlines correctly seeded from a Point Cloud centered at [81.6814, 80.708, 23.5093] with radius 29.9, and is the seed sphere hidden? + +3. Ribbon Visualization: Are the streamlines rendered with the Ribbon filter, set to width 0.3, with Display representation as Surface, effectively showing flow surfaces? diff --git a/main/tornado/.DS_Store b/main/tornado/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..d83e742e965779286bc555ce9547dfdce334f754 Binary files /dev/null and b/main/tornado/.DS_Store differ diff --git a/main/tornado/GS/.DS_Store b/main/tornado/GS/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..83c588db58fbdbfa77ff940ca6a9ae4c88711de5 Binary files /dev/null and b/main/tornado/GS/.DS_Store differ diff --git a/main/tornado/GS/gs_diagonal_view.png b/main/tornado/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..5888077d9d77a267624064323d19a5c97ea4a979 --- /dev/null +++ b/main/tornado/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49af2e632fd82221e826b74f9b88bfd8afb19d655d813b664c5f2da304a60a26 +size 484144 diff --git a/main/tornado/GS/gs_front_view.png b/main/tornado/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..b2646b3720bf044090480339dd6db565a9749f1d --- /dev/null +++ b/main/tornado/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:812375bd3867ebfa42188641ed2d94408d64d6be93387dd7e345a90a54dd8232 +size 597258 diff --git a/main/tornado/GS/gs_side_view.png b/main/tornado/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..28a8f007c5fe310ee66a3961280387f36d8caa87 --- /dev/null +++ 
b/main/tornado/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16904f34f00c8d366ea2ba16e6fa1576b65fdd9b249f2dca75af7875b10af050 +size 311675 diff --git a/main/tornado/GS/tornado_gs.pvsm b/main/tornado/GS/tornado_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..16201657bf71265f2e338f29988bbaa0aa7cf6cc --- /dev/null +++ b/main/tornado/GS/tornado_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0925273b0be89973865699b20ca9bc92214a10c0d69aa785bc7366ac96ec3e3 +size 597880 diff --git a/main/tornado/data/tornado.txt b/main/tornado/data/tornado.txt new file mode 100755 index 0000000000000000000000000000000000000000..38f9d225af8dec7692aafe5a3628aa1cfd86f270 --- /dev/null +++ b/main/tornado/data/tornado.txt @@ -0,0 +1,5 @@ +Tornado (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 64x64x64 +Number of Scalar Components: 3 \ No newline at end of file diff --git a/main/tornado/data/tornado_64x64x64_float32_scalar3.raw b/main/tornado/data/tornado_64x64x64_float32_scalar3.raw new file mode 100755 index 0000000000000000000000000000000000000000..1e36bec6fb382692ad7edcdb9088973bb6ef86b3 --- /dev/null +++ b/main/tornado/data/tornado_64x64x64_float32_scalar3.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06445bb861bfe3f46f14fc5569010c3ab23362425eff816b829a9063a69998df +size 3145728 diff --git a/main/tornado/task_description.txt b/main/tornado/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..7af561c602f067d502ee3d057e988facbde2c4b6 --- /dev/null +++ b/main/tornado/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the tornado dataset from "tornado/data/tornado_64x64x64_float32_scalar3.raw", the information about this dataset: +Tornado (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 64x64x64 +Number of Scalar Components: 3 +Data loading is very 
important, so make sure you correctly load the dataset according to its features. + +Add a “glyph” filter under the tornado data to display velocity glyphs, and set an appropriate “Scale Factor” so the glyphs are visible. + +Then add a “stream tracer” filter under the tornado data to generate streamlines. Choose “Point Cloud” as the “Seed Type”, and do not show the sphere. + +Add a “tube” filter under the stream tracer you just created to generate tubes for visualizing the streamlines. Set an appropriate radius. Make the stream tracer invisible and the tube visible, so that the streamlines are rendered as tubes. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the ParaView state as "tornado/results/{agent_mode}/tornado.pvsm" \ No newline at end of file diff --git a/main/tornado/visualization_goals.txt b/main/tornado/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..9383d8f9b6518512955ac3c9476b19c3ac75c574 --- /dev/null +++ b/main/tornado/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result achieve the overall goal of showing tornado flow patterns with glyphs and streamlines? + +2. Glyph Visualization: Does the result show velocity glyphs that are appropriately sized and visible? + +3. Streamline Visualization: Does the result show streamlines that follow the flow patterns effectively? + +4. Tube Rendering: Are the streamlines rendered as tubes with appropriate thickness?
\ No newline at end of file diff --git a/main/vortex/GS/gs_diagonal_view.png b/main/vortex/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..1fb49bfa671543ce88cf8ecb26f5aa5eb888307c --- /dev/null +++ b/main/vortex/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f3af7bf0bf7baf1125bd1ffe49ba729dbecd9210f0c48179992f3298de994e7 +size 526069 diff --git a/main/vortex/GS/gs_front_view.png b/main/vortex/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..d5960f6b627f3b550bfdaaf54fcb67d087734336 --- /dev/null +++ b/main/vortex/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7314c40dfd14349e3471fd132738046b6bb299503dc9ed82441168da4521a8a7 +size 413150 diff --git a/main/vortex/GS/gs_side_view.png b/main/vortex/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..bc9d347c02c4710afb5405cc128dff390ac894f9 --- /dev/null +++ b/main/vortex/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a62b2d12454c66f35f09ae0e410c3b57cfdbb7345c098aeee9146d2fd10c222 +size 446562 diff --git a/main/vortex/GS/vortex.pvsm b/main/vortex/GS/vortex.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..50621dad96e5de3021b916bbfc510b3ed024d8ba --- /dev/null +++ b/main/vortex/GS/vortex.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf024cc6141b80cf7138c8f72195b406b313edb7fd7a5bf9a2ff5950f61e2eb9 +size 354037 diff --git a/main/vortex/data/vortex.txt b/main/vortex/data/vortex.txt new file mode 100644 index 0000000000000000000000000000000000000000..eebe20ab263b1cc40b7d6ae9da90ea93994777d9 --- /dev/null +++ b/main/vortex/data/vortex.txt @@ -0,0 +1,5 @@ +vortex (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 128x128x128 +Number of Scalar Components: 1 diff --git 
a/main/vortex/data/vortex_128x128x128_float32.raw b/main/vortex/data/vortex_128x128x128_float32.raw new file mode 100644 index 0000000000000000000000000000000000000000..0dc84413aa86228f5bb2939e89013c28601a6703 --- /dev/null +++ b/main/vortex/data/vortex_128x128x128_float32.raw @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33071ee7228efc8ad31741923cb0419d3d4a39f2d693e49da6ba3356750c01b4 +size 8388608 diff --git a/main/vortex/task_description.txt b/main/vortex/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc6905e4e4307c9e55b9f812996f86d2b4251713 --- /dev/null +++ b/main/vortex/task_description.txt @@ -0,0 +1,23 @@ +Task: + +Load the vortex dataset from "vortex/data/vortex_128x128x128_float32.raw"; the information about this dataset: +vortex (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 128x128x128 +Number of Scalar Components: 1 + +Instructions: + +1. Load the dataset into ParaView. + +2. Use the "contour" filter to achieve iso-surface rendering. In the pipeline browser panel, hide everything except the "contour" filter. + +3. In the properties panel of the "contour" filter, set the isosurface value to -0.2, use Solid Color, and set the color to beige. + +4. Enable ambient occlusion by toggling the "Use Ambient Occlusion" button in the Render Passes section. + +5. Add a head light with the light inspector: set "Coords" to Camera, "Intensity" to 0.2, and Type to "Directional". + +6. Save your work: +Save the ParaView state as "vortex/results/{agent_mode}/vortex.pvsm". diff --git a/main/vortex/visualization_goals.txt b/main/vortex/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..1e790154d126ab1d2fb4b9ce2c46894504f59ab2 --- /dev/null +++ b/main/vortex/visualization_goals.txt @@ -0,0 +1,5 @@ +1. Overall Visualization Goal: How well does the result present a clear iso-surface rendering of the vortex scalar field at value −0.2? + +2.
Contour Appearance: Is the contour rendered with Solid Color set to beige and made the only visible object in the pipeline? + +3. Lighting & Shading: Are Ambient Occlusion and a directional head light (Coords = Camera, Intensity = 0.2) applied? diff --git a/napari_mcp_evals/.DS_Store b/napari_mcp_evals/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..fbb83ac555679c5c390ebf511f7751a774c4bdd4 Binary files /dev/null and b/napari_mcp_evals/.DS_Store differ diff --git a/napari_mcp_evals/eval_claude.yaml b/napari_mcp_evals/eval_claude.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3636a3b5df2a0a7f9d6b8975973571aa9eac3ad1 --- /dev/null +++ b/napari_mcp_evals/eval_claude.yaml @@ -0,0 +1,34 @@ +providers: + - id: python:general_mcp_client.py + config: + use_claude: true + model: claude-sonnet-4-20250514 + mcp: + enabled: true + server: + command: "C:/Users/miao1/AppData/Local/anaconda3/envs/mcp/python.exe" + args: ["-u", "D:/Development/napari-mcp/src/napari_mcp/napari_mcp_server.py"] + cwd: "D:/Development/napari-mcp/eval" + name: napari-server + env: + PORT: "3000" + temperature: 0 + max_tokens: 4096 + executeTools: true + +prompts: + - | + {{question}} + +# Configure default test settings for model-graded assertions +defaultTest: + options: + runSerially: true + provider: anthropic:messages:claude-sonnet-4-20250514 + +tests: file://test_basic_functions.yaml + + +evaluateOptions: + cache: false + maxConcurrency: 1 \ No newline at end of file diff --git a/napari_mcp_evals/eval_livai.yaml b/napari_mcp_evals/eval_livai.yaml new file mode 100644 index 0000000000000000000000000000000000000000..006891206015ed25a70078ac870c41c7ba8b8319 --- /dev/null +++ b/napari_mcp_evals/eval_livai.yaml @@ -0,0 +1,45 @@ +providers: + - id: python:general_mcp_client.py + config: + cache: false + provider: litellm + model: gpt-4o + baseUrl: https://livai-api.llnl.gov/v1 + verifySSL: false + useProxy: false + + mcp: + enabled: true + server: 
+ command: "C:/Users/miao1/AppData/Local/anaconda3/envs/mcp/python.exe" + args: ["-u", "D:/Development/napari-mcp/src/napari_mcp/napari_mcp_server.py"] + cwd: "D:/Development/napari-mcp/eval" + name: napari-server + env: + PORT: "3000" + temperature: 0 + # max_tokens: 4096 + executeTools: true + +prompts: + - | + {{question}} + +# Configure default test settings for model-graded assertions +defaultTest: + options: + # Override the default provider for model-graded assertions + provider: + id: litellm:gpt-4o + config: + apiBaseUrl: https://livai-api.llnl.gov/v1 + # For OpenAI-compatible endpoints served by LiteLLM + openaiCompatible: true + verifySSL: false + useProxy: false + +tests: file://test_basic_functions.yaml + +evaluateOptions: + cache: false + maxConcurrency: 1 \ No newline at end of file diff --git a/napari_mcp_evals/general_mcp_client.py b/napari_mcp_evals/general_mcp_client.py new file mode 100644 index 0000000000000000000000000000000000000000..e73ca5ece0c026f804f78e72883e4477bc09fd03 --- /dev/null +++ b/napari_mcp_evals/general_mcp_client.py @@ -0,0 +1,784 @@ +#!/usr/bin/env python3 +""" +MCP Client with support for multiple LLM providers (Claude, OpenAI, etc.) 
+Now with image support for both providers +""" +import asyncio +import os +import json +import base64 +from typing import Dict, Any, Optional, List, Union +from contextlib import AsyncExitStack +from abc import ABC, abstractmethod +import pdb +# Try to load .env file if it exists +try: + from dotenv import load_dotenv + load_dotenv() +except ImportError: + pass # dotenv not installed, skip + +from mcp import ClientSession, StdioServerParameters +from mcp.client.stdio import stdio_client + + +def call_api(prompt: str, options: Dict[str, Any], context: Dict[str, Any]) -> Dict[str, Any]: + """ + Promptfoo provider entry point + """ + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + + try: + client = MCPClient() + result = loop.run_until_complete(client.process_prompt(prompt, options)) + return result + finally: + loop.close() + + +class LLMProvider(ABC): + """Abstract base class for LLM providers""" + + @abstractmethod + async def create_completion(self, messages: List[Dict], tools: List[Dict], **kwargs) -> Dict: + """Create a completion with tool support""" + pass + + @abstractmethod + def parse_tool_calls(self, response: Dict) -> List[Dict]: + """Parse tool calls from response""" + pass + + @abstractmethod + def format_tool_result(self, tool_id: str, result: Any, images: Optional[List[Dict]] = None) -> Dict: + """Format tool result for the provider""" + pass + + +class ClaudeProvider(LLMProvider): + """Claude/Anthropic provider""" + + def __init__(self, api_key: str, model: str = "claude-sonnet-4-20250514"): + from anthropic import Anthropic + self.client = Anthropic(api_key=api_key) + self.model = model + + async def create_completion(self, messages: List[Dict], tools: List[Dict], **kwargs) -> Dict: + """Create completion using Claude""" + response = self.client.messages.create( + model=self.model, + max_tokens=kwargs.get('max_tokens', 4096), + messages=messages, + tools=tools + ) + return response + + def parse_tool_calls(self, response: Dict) -> 
List[Dict]: + """Parse Claude's tool calls""" + tool_calls = [] + for content in response.content: + if content.type == 'tool_use': + tool_calls.append({ + 'id': content.id, + 'name': content.name, + 'arguments': content.input, + 'type': 'tool_use' + }) + return tool_calls + + def format_tool_result(self, tool_id: str, result: Any, images: Optional[List[Dict]] = None) -> Dict: + """Format tool result for Claude""" + # If we have images, format as mixed content + if images: + tool_result_content = [] + + # Add text content if present + if result: + tool_result_content.append({ + "type": "text", + "text": str(result) if not isinstance(result, str) else result + }) + + # Add image content + for img in images: + tool_result_content.append({ + "type": "image", + "source": { + "type": "base64", + "media_type": img['mime_type'], + "data": img['data'] + } + }) + + return { + "type": "tool_result", + "tool_use_id": tool_id, + "content": tool_result_content + } + else: + # Text-only result (backward compatibility) + return { + "type": "tool_result", + "tool_use_id": tool_id, + "content": str(result) if not isinstance(result, str) else result + } + +class OpenAIProvider(LLMProvider): + """OpenAI/OpenAI-compatible provider + + Note: OpenAI's support for images in tool results is not clearly documented. + This implementation attempts to use content arrays with images when available, + but may fall back to text-only format if the API doesn't support it. 
+ """ + + def __init__(self, api_key: str, base_url: Optional[str] = None, model: str = "gpt-4", extra_headers: Optional[Dict] = None, verify_ssl: Union[bool, str] = True, use_proxy: bool = True): + from openai import OpenAI + import httpx + import ssl + + # Clean up base_url - remove /chat/completions if present + if base_url and base_url.endswith('/chat/completions'): + base_url = base_url.replace('/chat/completions', '') + + # For LiteLLM proxy, we might need different headers + default_headers = extra_headers or {} + + # Try different header formats for LiteLLM compatibility + if base_url and ('litellm' in base_url.lower() or 'llnl.gov' in base_url): + # LiteLLM often uses 'api-key' header instead of 'Authorization' + default_headers['api-key'] = api_key + + # Configure httpx client + client_kwargs = { + "timeout": httpx.Timeout(30.0, connect=10.0), # 30s total, 10s connect + } + + # Handle proxy settings + if not use_proxy: + client_kwargs["proxy"] = None + + # Handle SSL verification + if verify_ssl is False: + client_kwargs["verify"] = False + elif isinstance(verify_ssl, str) and os.path.exists(verify_ssl): + # Use specific certificate file + ssl_context = ssl.create_default_context(cafile=verify_ssl) + client_kwargs["verify"] = ssl_context + elif verify_ssl is True: + # Try to use system certificates if available + ssl_cert_file = os.environ.get('SSL_CERT_FILE') or os.environ.get('CURL_CA_BUNDLE') + if ssl_cert_file and os.path.exists(ssl_cert_file): + ssl_context = ssl.create_default_context(cafile=ssl_cert_file) + client_kwargs["verify"] = ssl_context + + # Create httpx client with all settings + http_client = httpx.Client(**client_kwargs) + + self.client = OpenAI( + api_key=api_key, + base_url=base_url, + default_headers=default_headers if default_headers else None, + http_client=http_client + ) + self.model = model + + async def create_completion(self, messages: List[Dict], tools: List[Dict], **kwargs) -> Dict: + """Create completion using OpenAI""" + # 
Convert tools to OpenAI format + openai_tools = [] + for tool in tools: + openai_tools.append({ + "type": "function", + "function": { + "name": tool["name"], + "description": tool["description"], + "parameters": tool["input_schema"] + } + }) + + # Convert messages to OpenAI format + openai_messages = self._convert_messages(messages) + + response = self.client.chat.completions.create( + model=self.model, + messages=openai_messages, + tools=openai_tools if openai_tools else None, + max_tokens=kwargs.get('max_tokens', 4096) + ) + return response + + def _convert_messages(self, messages: List[Dict]) -> List[Dict]: + """Convert messages to OpenAI format""" + openai_messages = [] + for msg in messages: + role = msg.get('role', '') + + if role == 'tool': + # Handle tool messages + tool_call_id = msg.get('tool_call_id', '') + content = msg.get('content', '') + has_images = False + + # Check if content has images + if isinstance(content, list): + has_images = any( + item.get('type') == 'image_url' or + (item.get('type') == 'image' and item.get('source', {}).get('type') == 'base64') + for item in content if isinstance(item, dict) + ) + + # If tool message has images, we need to split it + if has_images: + # First, add a placeholder tool message to satisfy OpenAI's requirement + text_content = [] + image_content = [] + + # Separate text and image content + if isinstance(content, list): + for item in content: + if isinstance(item, dict): + if item.get('type') == 'text': + text_content.append(item.get('text', '')) + elif item.get('type') in ['image_url', 'image']: + image_content.append(item) + + # Add tool message with just text (placeholder if no text) + tool_text = ' '.join(text_content) if text_content else "Image result returned - see below" + openai_messages.append({ + 'role': 'tool', + 'tool_call_id': tool_call_id, + 'content': tool_text + }) + + # Then add user message with the image + user_content = [] + + # Add header to indicate this is a tool result image + 
user_content.append({ + "type": "text", + "text": f"[Tool Result Image for call {tool_call_id}]:" + }) + + # Add any text content + if text_content: + user_content.append({ + "type": "text", + "text": ' '.join(text_content) + }) + + # Process image content + for item in image_content: + if item.get('type') == 'image_url': + # Already in correct format + user_content.append(item) + elif item.get('type') == 'image': + # Convert from Claude format + source = item.get('source', {}) + if source.get('type') == 'base64': + user_content.append({ + "type": "image_url", + "image_url": { + "url": f"data:{source.get('media_type', 'image/png')};base64,{source.get('data', '')}" + } + }) + + openai_messages.append({ + 'role': 'user', + 'content': user_content + }) + else: + # No images, use standard tool message format + tool_msg = { + 'role': 'tool', + 'tool_call_id': tool_call_id + } + + if isinstance(content, list): + # Extract text content from list + text_parts = [] + for item in content: + if isinstance(item, dict) and item.get('type') == 'text': + text_parts.append(item.get('text', '')) + tool_msg['content'] = ' '.join(text_parts) if text_parts else str(content) + else: + # Text-only content + tool_msg['content'] = str(content) + + openai_messages.append(tool_msg) + + elif role in ['user', 'assistant', 'system']: + # Handle different content types + content = msg.get('content', '') + + if isinstance(content, str): + openai_messages.append({ + 'role': role, + 'content': content + }) + elif isinstance(content, list): + # Handle mixed content (text, tool results, etc.) 
+ content_parts = [] + tool_calls = [] + + for item in content: + if isinstance(item, dict): + if item.get('type') == 'text': + content_parts.append(item.get('text', '')) + elif item.get('type') == 'tool_use': + tool_calls.append({ + 'id': item.get('id', ''), + 'type': 'function', + 'function': { + 'name': item.get('name', ''), + 'arguments': json.dumps(item.get('input', {})) + } + }) + elif item.get('type') == 'tool_result': + # Convert tool results to function messages + # Handle both text-only and mixed content results + tool_content = item.get('content', '') + if isinstance(tool_content, list): + # Mixed content with possible images + # Try to preserve content array format if possible + content_array = [] + text_parts = [] # Fallback for text-only + + for content_item in tool_content: + if content_item.get('type') == 'text': + content_array.append({ + "type": "text", + "text": content_item.get('text', '') + }) + text_parts.append(content_item.get('text', '')) + elif content_item.get('type') == 'image': + source = content_item.get('source', {}) + if source.get('type') == 'base64': + # Try to use image_url format + content_array.append({ + "type": "image_url", + "image_url": { + "url": f"data:{source.get('media_type', 'image/png')};base64,{source.get('data', '')}" + } + }) + text_parts.append(f"[IMAGE: {source.get('media_type', 'unknown')}]") + + # Try content array format first (may work with newer API versions) + # If it fails, the API will return an error and you can fall back to text-only + openai_messages.append({ + 'role': 'tool', + 'content': content_array if content_array else '\n'.join(text_parts), + 'tool_call_id': item.get('tool_use_id', '') + }) + else: + # Text-only content + openai_messages.append({ + 'role': 'tool', + 'content': str(tool_content), + 'tool_call_id': item.get('tool_use_id', '') + }) + else: + # Handle other content types + content_parts.append(str(item)) + + if content_parts or tool_calls: + msg_dict = {'role': role} + if 
content_parts: + msg_dict['content'] = '\n'.join(content_parts) + if tool_calls: + msg_dict['tool_calls'] = tool_calls + if msg_dict.get('content') or msg_dict.get('tool_calls'): + openai_messages.append(msg_dict) + elif content is None: + # Handle messages with no content (e.g., tool-only messages) + openai_messages.append({ + 'role': role, + 'content': '' + }) + + # Handle tool_calls if present + if 'tool_calls' in msg and msg['tool_calls']: + # Ensure we have a message dict + if not openai_messages or openai_messages[-1]['role'] != role: + openai_messages.append({'role': role, 'content': ''}) + + # Add tool_calls to the last message + openai_messages[-1]['tool_calls'] = msg['tool_calls'] + + # Debug print + # print("openAI message", openai_messages) + return openai_messages + + def parse_tool_calls(self, response) -> List[Dict]: + """Parse OpenAI's tool calls""" + tool_calls = [] + message = response.choices[0].message + + if hasattr(message, 'tool_calls') and message.tool_calls: + for tool_call in message.tool_calls: + tool_calls.append({ + 'id': tool_call.id, + 'name': tool_call.function.name, + 'arguments': json.loads(tool_call.function.arguments), + 'type': 'function' + }) + return tool_calls + + def format_tool_result(self, tool_id: str, result: Any, images: Optional[List[Dict]] = None) -> Dict: + """Format tool result for OpenAI + + Note: If the API rejects content arrays in tool messages, you may need to: + 1. Fall back to text-only format + 2. Upload images to a URL service and reference them + 3. 
Include images in a subsequent user message instead + """ + # Try to use content array format similar to Claude if images are present + if images: + # Attempt to use content array format (may or may not be supported) + content_array = [] + + # Add text content if present + if result: + content_array.append({ + "type": "text", + "text": str(result) if not isinstance(result, str) else result + }) + + # Add image content + for img in images: + content_array.append({ + "type": "image_url", + "image_url": { + "url": f"data:{img['mime_type']};base64,{img['data']}" + } + }) + + # Try content array format first + return { + "role": "tool", + "content": content_array, + "tool_call_id": tool_id + } + else: + # Text-only result (standard format) + return { + "role": "tool", + "content": str(result) if not isinstance(result, str) else result, + "tool_call_id": tool_id + } + + +class MCPClient: + def __init__(self): + """Initialize MCP client""" + self.session: Optional[ClientSession] = None + self.exit_stack = AsyncExitStack() + self.llm_provider: Optional[LLMProvider] = None + + def _initialize_llm_provider(self, config: Dict[str, Any]): + """Initialize the appropriate LLM provider based on config""" + provider = config.get('provider', 'claude').lower() + + if provider == 'claude': + api_key = config.get('apiKey') or os.environ.get('ANTHROPIC_API_KEY') + if api_key: + model = config.get('model', 'claude-sonnet-4-20250514') + self.llm_provider = ClaudeProvider(api_key, model) + elif provider in ['openai', 'local', 'litellm']: + api_key = config.get('apiKey') or os.environ.get('OPENAI_API_KEY', 'dummy-key') + base_url = config.get('baseUrl') or os.environ.get('OPENAI_BASE_URL') + model = config.get('model', 'gpt-4o') + + # SSL verification setting (default True, can be disabled for internal/self-signed certs) + verify_ssl = config.get('verifySSL', True) + + # Proxy setting (default True, can be disabled) + use_proxy = config.get('useProxy', True) + + self.llm_provider = 
OpenAIProvider(api_key, base_url, model, None, verify_ssl, use_proxy) + else: + raise ValueError(f"Unsupported provider: {provider}") + + async def connect_to_server(self, server_config: Dict[str, Any]): + """Connect to an MCP server""" + command = server_config.get('command', 'python') + args = server_config.get('args', []) + cwd = server_config.get('cwd') + env = server_config.get('env') + + server_params = StdioServerParameters( + command=command, + args=args, + cwd=cwd, + env=env + ) + + stdio_transport = await self.exit_stack.enter_async_context( + stdio_client(server_params) + ) + self.stdio, self.write = stdio_transport + + self.session = await self.exit_stack.enter_async_context( + ClientSession(self.stdio, self.write) + ) + + await self.session.initialize() + + async def process_with_llm(self, prompt: str, debug: bool = False) -> str: + """Process prompt using the configured LLM provider""" + if not self.llm_provider: + return "Error: No LLM provider configured" + + if not self.session: + return "Error: No MCP session available" + + # Get available tools from MCP server + tools_response = await self.session.list_tools() + available_tools = [{ + "name": tool.name, + "description": tool.description, + "input_schema": tool.inputSchema + } for tool in tools_response.tools] + + # Initialize messages + messages = [{"role": "user", "content": prompt}] + + # Process output - only collect actual LLM responses + final_text: List[str] = [] + collected_images: List[Dict[str, str]] = [] + + # Add debug information if enabled + if debug: + final_text.append(f"[MCP Server Connected - {len(available_tools)} tools available]") + tool_names = [tool['name'] for tool in available_tools] + final_text.append(f"[Available tools: {', '.join(tool_names)}]") + final_text.append("") + + # Log provider type + provider_type = type(self.llm_provider).__name__ + final_text.append(f"[Using LLM Provider: {provider_type}]") + final_text.append("") + + # Continue conversation until LLM 
stops using tools + max_iterations = 15 # Reduced from 20 to prevent loops + iteration = 0 + + while iteration < max_iterations: + iteration += 1 + + try: + # Get LLM's response + response = await self.llm_provider.create_completion(messages, available_tools) + + # Extract text content based on provider + if isinstance(self.llm_provider, ClaudeProvider): + # Claude response handling + assistant_content = [] + has_tool_use = False + + for content in response.content: + if content.type == 'text': + final_text.append(content.text) + assistant_content.append(content) + elif content.type == 'tool_use': + has_tool_use = True + assistant_content.append(content) + + messages.append({ + "role": "assistant", + "content": assistant_content + }) + + elif isinstance(self.llm_provider, OpenAIProvider): + # OpenAI response handling + message = response.choices[0].message + has_tool_use = bool(getattr(message, 'tool_calls', None)) + + # Check if there's any actual content to add + if message.content: + final_text.append(message.content) + + # Build assistant message + assistant_msg = {"role": "assistant"} + if message.content: + assistant_msg["content"] = message.content + else: + # OpenAI sometimes returns None content with tool calls + assistant_msg["content"] = "" + + if has_tool_use: + assistant_msg["tool_calls"] = [ + { + "id": tc.id, + "type": "function", + "function": { + "name": tc.function.name, + "arguments": tc.function.arguments + } + } for tc in message.tool_calls + ] + messages.append(assistant_msg) + + # Process tool calls if any + if has_tool_use: + tool_calls = self.llm_provider.parse_tool_calls(response) + tool_results = [] + + for tool_call in tool_calls: + tool_name = tool_call['name'] + tool_args = tool_call['arguments'] + tool_id = tool_call['id'] + + # Add debug information if enabled + if debug: + final_text.append(f"\n[MCP Tool Call: {tool_name}]") + final_text.append(f"[Arguments: {tool_args}]") + + try: + result = await self.session.call_tool(tool_name, 
tool_args) + + if debug: + final_text.append(f"[Tool call successful]") + + # Extract result content (including images) + result_text = "" + result_images = [] + + if result and result.content: + if isinstance(result.content, list): + for item in result.content: + if hasattr(item, 'text'): + result_text += item.text + "\n" + elif hasattr(item, 'type') and item.type == 'image': + # Handle image content + result_images.append({ + 'data': item.data, + 'mime_type': item.mimeType + }) + result_text += f"[IMAGE: {item.mimeType}]\n" + else: + result_text += str(item) + "\n" + elif hasattr(result.content, 'text'): + result_text = result.content.text + elif hasattr(result.content, 'type') and result.content.type == 'image': + result_images.append({ + 'data': result.content.data, + 'mime_type': result.content.mimeType + }) + result_text = f"[IMAGE: {result.content.mimeType}]" + else: + result_text = str(result.content) + + if debug: + final_text.append("") + + # Keep images for later embedding if debug is enabled + if debug and result_images: + collected_images.extend(result_images) + + # Format result for provider + tool_results.append( + self.llm_provider.format_tool_result( + tool_id, + result_text.strip() if result_text else "", + result_images if result_images else None + ) + ) + + except Exception as e: + if debug: + final_text.append(f"[Error: {str(e)}]") + tool_results.append( + self.llm_provider.format_tool_result( + tool_id, + f"Error: {str(e)}" + ) + ) + + # If there were tool uses, add the results and continue + if isinstance(self.llm_provider, ClaudeProvider): + messages.append({ + "role": "user", + "content": tool_results + }) + elif isinstance(self.llm_provider, OpenAIProvider): + # OpenAI adds tool results as separate messages + for result in tool_results: + messages.append(result) + else: + # No more tool uses, we're done + break + + except Exception as e: + if debug: + final_text.append(f"\n[Error during LLM call: {type(e).__name__}: {str(e)}]") + else: + 
final_text.append(f"Error during LLM call: {type(e).__name__}: {str(e)}") + break + + if iteration >= max_iterations: + if debug: + final_text.append("\n[Warning: Maximum iteration limit reached]") + else: + final_text.append("Warning: Maximum iteration limit reached") + + # Embed collected images in Markdown if debug is enabled + if debug and collected_images: + final_text.append("\n### Images returned by tools\n") + for idx, img in enumerate(collected_images, 1): + data = img.get("data") + if isinstance(data, str): + b64 = data + else: + b64 = base64.b64encode(data).decode() + mime = img.get("mime_type", "image/png") + final_text.append(f"![tool-image-{idx}](data:{mime};base64,{b64})") + + return "\n".join(final_text) + + async def process_prompt(self, prompt: str, options: Dict[str, Any]) -> Dict[str, Any]: + """Process a prompt using the MCP server and LLM""" + try: + config = options.get('config', {}) + + # Initialize LLM provider + self._initialize_llm_provider(config) + + # Get MCP server configuration + mcp_config = config.get('mcp', {}) + server_config = mcp_config.get('server', {}) + + if isinstance(server_config, list): + server_config = server_config[0] + + # Connect to server + await self.connect_to_server(server_config) + + # Get debug flag from config (default to False) + debug = config.get('debug', False) + + # Process with LLM + if self.llm_provider: + output = await self.process_with_llm(prompt, debug) + else: + output = "Error: No LLM provider available" + + return {"output": output} + + except Exception as e: + import traceback + return { + "output": f"Error: {str(e)}\n\nTraceback:\n{traceback.format_exc()}" + } + finally: + await self.cleanup() + + async def cleanup(self): + """Clean up resources""" + await self.exit_stack.aclose() + +''' + +promptfoo eval -c eval/test_general.yaml --no-cache + +''' diff --git a/napari_mcp_evals/start_napari.py b/napari_mcp_evals/start_napari.py new file mode 100644 index 
0000000000000000000000000000000000000000..ab36cdb99d683d9712d8e8847decae3f58cc7f6d --- /dev/null +++ b/napari_mcp_evals/start_napari.py @@ -0,0 +1,16 @@ +import napari + +viewer = napari.Viewer() + +# 1) open your plugin's dock widget at startup +# (use plugin package name and the widget name from its manifest) +dock, widget = viewer.window.add_plugin_dock_widget( + plugin_name="napari-socket", + widget_name="Socket Server", +) + +# 2) automatically start the socket server +if hasattr(widget, "_start"): + widget._start() + +napari.run() \ No newline at end of file diff --git a/napari_mcp_evals/tasks/0_actions/eval_basic_napari_functions.yaml b/napari_mcp_evals/tasks/0_actions/eval_basic_napari_functions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..036ee8c1acb2697f7cef18741e32470dcd806ab9 --- /dev/null +++ b/napari_mcp_evals/tasks/0_actions/eval_basic_napari_functions.yaml @@ -0,0 +1,423 @@ +# Basic Napari Function Tests - Action Level +# These tests evaluate individual napari MCP server functions with simple, single-function calls +# Each test focuses on testing one specific function with appropriate parameters + +# Test 1: open_file - Load a single image file +- vars: + question: | + Use the open_file function to load the image file "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif". + Respond with <1> if the file was successfully loaded, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: list_layers - Get information about loaded layers +- vars: + question: | + Use the list_layers function to get information about all currently loaded layers. + Respond with <1> if you successfully retrieved layer information, or <0> if it failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: set_colormap - Change layer colormap +- vars: + question: | + Use the set_colormap function to change the colormap of the loaded layer to 'viridis'. + Respond with <1> if the colormap was successfully changed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: set_opacity - Adjust layer transparency +- vars: + question: | + Use the set_opacity function to set the opacity of the loaded layer to 0.5 (50% transparent). + Respond with <1> if the opacity was successfully changed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: set_blending - Change layer blending mode +- vars: + question: | + Use the set_blending function to change the blending mode of the loaded layer to 'additive'. + Respond with <1> if the blending mode was successfully changed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: auto_contrast - Auto-adjust contrast +- vars: + question: | + Use the auto_contrast function to automatically adjust the contrast of the loaded layer. + Respond with <1> if the contrast was successfully auto-adjusted, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: set_contrast_limits - Set specific contrast limits +- vars: + question: | + Use the set_contrast_limits function to set the contrast limits of the loaded layer to min=0.1 and max=0.9. 
+ Respond with <1> if the contrast limits were successfully set, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 8: set_gamma - Adjust gamma correction +- vars: + question: | + Use the set_gamma function to set the gamma correction of the loaded layer to 1.5. + Respond with <1> if the gamma was successfully adjusted, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 9: set_interpolation - Change interpolation mode +- vars: + question: | + Use the set_interpolation function to change the interpolation mode of the loaded layer to 'linear'. + Respond with <1> if the interpolation mode was successfully changed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 10: toggle_view - Switch between 2D and 3D view +- vars: + question: | + Use the toggle_view function to switch the view to 3D mode. + Respond with <1> if the view was successfully switched to 3D, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 11: get_dims_info - Get dimension information +- vars: + question: | + Use the get_dims_info function to get information about the viewer's dimensions. + Respond with <1> if you successfully retrieved dimension information, or <0> if it failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 12: get_camera - Get current camera settings +- vars: + question: | + Use the get_camera function to get the current camera settings. + Respond with <1> if you successfully retrieved camera settings, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 13: reset_camera - Reset camera to default view +- vars: + question: | + Use the reset_camera function to reset the camera to the default view. + Respond with <1> if the camera was successfully reset, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 14: set_camera - Adjust camera settings +- vars: + question: | + Use the set_camera function to set the zoom to 2.0. + Respond with <1> if the camera zoom was successfully set, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 15: set_layer_visibility - Toggle layer visibility +- vars: + question: | + Use the set_layer_visibility function to hide the loaded layer (set visible to false). + Respond with <1> if the layer visibility was successfully changed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 16: set_layer_visibility - Show layer again +- vars: + question: | + Use the set_layer_visibility function to show the loaded layer again (set visible to true). + Respond with <1> if the layer visibility was successfully changed, or <0> if it failed. 
Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 17: screenshot - Take a screenshot +- vars: + question: | + Use the screenshot function to take a screenshot of the current view. + Respond with <1> if the screenshot was successfully taken, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 18: get_layer_statistics - Get layer statistics +- vars: + question: | + Use the get_layer_statistics function to get basic statistics (min, max, mean, std) for the loaded layer. + Respond with <1> if you successfully retrieved layer statistics, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 19: get_layer_data - Extract layer data +- vars: + question: | + Use the get_layer_data function to extract the raw data from the loaded layer. + Respond with <1> if you successfully extracted layer data, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 20: add_points - Add point annotations +- vars: + question: | + Use the add_points function to add two point markers at coordinates [[100, 100], [200, 200]] with the name "test_points". + Respond with <1> if the points were successfully added, or <0> if it failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 21: add_shapes - Add shape annotations +- vars: + question: | + Use the add_shapes function to add a rectangle shape with coordinates [[[50, 50], [150, 50], [150, 150], [50, 150]]] and name "test_rectangle". + Respond with <1> if the shape was successfully added, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 22: measure_distance - Measure distance between points +- vars: + question: | + Use the measure_distance function to measure the distance between point [100, 100] and point [200, 200]. + Respond with <1> if the distance was successfully measured, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 23: set_scale_bar - Show scale bar +- vars: + question: | + Use the set_scale_bar function to show the scale bar with unit 'um'. + Respond with <1> if the scale bar was successfully shown, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 24: set_axis_labels - Set axis labels +- vars: + question: | + Use the set_axis_labels function to set axis labels to ['y', 'x'] for the 2D data. + Respond with <1> if the axis labels were successfully set, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 25: export_screenshot - Save screenshot to file +- vars: + question: | + Use the export_screenshot function to save a screenshot to "test_screenshot.jpg". 
+ Respond with <1> if the screenshot was successfully exported, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 26: save_layers - Save layer to file +- vars: + question: | + Use the save_layers function to save the loaded layer to "test_layer.tif". + Respond with <1> if the layer was successfully saved, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 27: remove_layer - Remove a layer +- vars: + question: | + Use the remove_layer function to remove the "test_points" layer. + Respond with <1> if the layer was successfully removed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 28: Error handling - Try to load non-existent file +- vars: + question: | + Use the open_file function to try to load a non-existent file "nonexistent.tif". + Respond with <1> if the error was handled gracefully (no crash), or <0> if it crashed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 29: Error handling - Try to remove non-existent layer +- vars: + question: | + Use the remove_layer function to try to remove a layer that doesn't exist "nonexistent_layer". + Respond with <1> if the error was handled gracefully (no crash), or <0> if it crashed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 30: Cleanup - Remove remaining test layers +- vars: + question: | + Use the remove_layer function to remove the "test_rectangle" layer to clean up test annotations. + Respond with <1> if the layer was successfully removed, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true \ No newline at end of file diff --git a/napari_mcp_evals/tasks/1_workflows/eval_analysis_workflows.yaml b/napari_mcp_evals/tasks/1_workflows/eval_analysis_workflows.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ebfe3216a2e749119f7ab191a2e0d75322028874 --- /dev/null +++ b/napari_mcp_evals/tasks/1_workflows/eval_analysis_workflows.yaml @@ -0,0 +1,118 @@ +# Analysis Workflow Tests for napari-mcp +# These tests evaluate complex analysis workflows that combine multiple napari functions +# Each test focuses on performing specific analysis tasks + +# Test 1: Cell Counting and Measurement Analysis +- vars: + question: | + Load the image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch1_t14.tif" and set it to the magenta colormap. + Take a screenshot and analyze it to count how many complete cells are visible (not cut off by edges). + Add point annotations to mark the center of each counted cell. + Measure the distance between the two most distant cells. + Respond with the number of complete cells you counted, for example "5" if you see 5 complete cells. + assert: + - type: llm-rubric + value: It counted 2 complete cells + options: + cache: false + runSerially: true + +# Test 2: Multi-dimensional Data Exploration +- vars: + question: | + Load the multi-dimensional image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif".
+ Get dimension information to understand the data structure. + Navigate through different z-slices to examine structures at different depths. + Take screenshots at 3 different z-slices to show structural changes. + Respond with <1> if you successfully explored the multi-dimensional data and observed structural changes, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: Statistical Analysis and Data Export +- vars: + question: | + Get basic statistics (min, max, mean, std) for the loaded layer. + Extract the raw layer data and examine its properties. + Save the current layer to a file for further analysis. + Export a screenshot of the current view for documentation. + Respond with <1> if the statistical analysis and data export were successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: Annotation and Measurement Workflow +- vars: + question: | + Add point annotations to mark specific features of interest in the image. + Add shape annotations (rectangles or circles) to highlight regions of interest. + Measure distances between multiple pairs of points. + Take a screenshot showing all annotations and measurements. + Respond with <1> if the annotation and measurement workflow was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: Time Series Analysis (if applicable) +- vars: + question: | + If the data has time dimensions, navigate through different time points. + Compare cellular structures between different time points. + Take screenshots at different time points to show temporal changes. 
+ If no time dimension exists, simulate time series analysis by adjusting the current view and taking multiple screenshots. + Respond with <1> if the time series analysis was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: Data Cropping and Region of Interest Analysis +- vars: + question: | + Define a region of interest by cropping the layer to a specific area. + Analyze the cropped region separately from the full dataset. + Compare statistics between the full dataset and the cropped region. + Take screenshots of both the full view and the cropped region. + Respond with <1> if the cropping and region analysis was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: Cleanup - Reset for next test run +- vars: + question: | + Delete all loaded layers and remove any annotations to prepare for the next test run. + Respond with <1> if all layers and annotations were successfully removed, or <0> if it failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/napari_mcp_evals/tasks/1_workflows/eval_camera_control_workflows.yaml b/napari_mcp_evals/tasks/1_workflows/eval_camera_control_workflows.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b503704a49c896cfd49d01c97ac9bd71c23fde51 --- /dev/null +++ b/napari_mcp_evals/tasks/1_workflows/eval_camera_control_workflows.yaml @@ -0,0 +1,220 @@ +# Camera Control Workflow Tests for napari-mcp +# These tests evaluate camera control and navigation workflows in napari +# Each test focuses on specific camera operations and view control scenarios + +# Test 1: Basic Camera Operations - Reset, Get, and Set +- vars: + question: | + Load the image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif". + Get the current camera settings to understand the initial state. + Reset the camera to the default view. + Get the camera settings again to verify the reset. + Take a screenshot to verify the default view. + Respond with <1> if the basic camera operations were successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: Zoom Control Workflow +- vars: + question: | + Start with the default camera view and take an initial screenshot. + Set the camera zoom to 2.0x magnification. + Take a screenshot to verify the zoom in. + Set the camera zoom to 0.5x magnification. + Take a screenshot to verify the zoom out. + Set the camera zoom back to 1.0x. + Take a final screenshot to verify the zoom reset. + Respond with <1> if all zoom operations were successful, or <0> if any failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: Camera Center Positioning +- vars: + question: | + Get the current camera center position. + Set the camera center to a new position [100, 100]. + Take a screenshot to verify the center change. + Set the camera center to another position [200, 200]. + Take a screenshot to verify the second center change. + Reset the camera to restore the original center. + Take a final screenshot to verify the reset. + Respond with <1> if all camera center operations were successful, or <0> if any failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: 3D Camera Rotation and Angles +- vars: + question: | + Switch to 3D view mode. + Get the current camera settings to see the 3D parameters. + Set camera angles to [30, 45, 0] degrees for x, y, z rotation. + Take a screenshot to verify the 3D rotation. + Set camera angles to [60, 90, 15] degrees for a different view. + Take a screenshot to verify the second rotation. + Reset the camera to restore the default 3D view. + Take a final screenshot to verify the 3D reset. + Respond with <1> if all 3D camera operations were successful, or <0> if any failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: Combined Camera Transformations +- vars: + question: | + Apply multiple camera transformations simultaneously: set center to [150, 150], zoom to 1.5x, and in 3D mode set angles to [45, 30, 0]. + Take a screenshot to verify the combined transformation. + Apply a different combination: center [250, 250], zoom 0.8x, angles [60, 45, 10]. + Take a screenshot to verify the second combination. + Reset the camera to restore all default settings. 
+ Take a final screenshot to verify the complete reset. + Respond with <1> if all combined camera transformations were successful, or <0> if any failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: Camera Navigation Sequence +- vars: + question: | + Perform a camera navigation sequence: start with default view, zoom in to 2.5x, move center to [100, 100], then switch to 3D and rotate to [30, 60, 0]. + Take a screenshot at each step to document the navigation. + Continue the sequence: rotate to [60, 90, 15], zoom out to 1.2x, move center to [200, 200]. + Take screenshots to document these steps. + End with a camera reset to default view. + Take a final screenshot to verify the complete navigation sequence. + Respond with <1> if the complete camera navigation sequence was successful, or <0> if any step failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: View Mode Switching with Camera +- vars: + question: | + Start in 2D view mode and set camera zoom to 2.0x. + Take a screenshot to verify 2D view with zoom. + Switch to 3D view mode and set camera angles to [45, 45, 0]. + Take a screenshot to verify 3D view with rotation. + Switch back to 2D view mode. + Take a screenshot to verify return to 2D. + Reset the camera to default settings. + Take a final screenshot to verify the complete reset. + Respond with <1> if all view mode switching with camera control was successful, or <0> if any step failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 8: Camera State Persistence +- vars: + question: | + Set up a specific camera configuration: center [175, 175], zoom 1.8x, 3D angles [35, 55, 5]. 
+ Get the camera settings to verify the configuration. + Take a screenshot to document the setup. + Perform some other operations (like changing layer opacity or colormap). + Get the camera settings again to verify they haven't changed. + Take another screenshot to verify the camera state persisted. + Respond with <1> if the camera state persisted through other operations, or <0> if it changed unexpectedly. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 9: Camera Boundary Testing +- vars: + question: | + Test camera with extreme values: set zoom to 10.0x (very high zoom). + Take a screenshot to verify extreme zoom. + Set zoom to 0.1x (very low zoom). + Take a screenshot to verify extreme zoom out. + Set camera center to [0, 0] (corner position). + Take a screenshot to verify corner positioning. + Set camera center to [500, 500] (far from origin). + Take a screenshot to verify far positioning. + Reset camera to restore normal settings. + Take a final screenshot to verify recovery from extreme values. + Respond with <1> if the camera handled extreme values gracefully, or <0> if it failed or crashed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 10: Multi-layer Camera Control +- vars: + question: | + Load a second image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch1_t14.tif". + Set up a camera view that shows both layers effectively: center [150, 150], zoom 1.5x. + Take a screenshot to verify the multi-layer view. + Adjust camera to focus on layer 1: center [100, 100], zoom 2.0x. + Take a screenshot to verify layer 1 focus. + Adjust camera to focus on layer 2: center [200, 200], zoom 2.0x. + Take a screenshot to verify layer 2 focus. + Reset camera to show both layers again. 
+ Take a final screenshot to verify the balanced multi-layer view. + Respond with <1> if the multi-layer camera control was successful, or <0> if any step failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 11: Cleanup - Reset for next test run +- vars: + question: | + Delete all loaded layers and reset the camera to default settings. + Switch back to 2D view mode. + Respond with <1> if the cleanup was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/napari_mcp_evals/tasks/1_workflows/eval_complex_visualization.yaml b/napari_mcp_evals/tasks/1_workflows/eval_complex_visualization.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a8e92018e52eea20a7fd32262e07af3b3abe4ba1 --- /dev/null +++ b/napari_mcp_evals/tasks/1_workflows/eval_complex_visualization.yaml @@ -0,0 +1,162 @@ +# Advanced Visualization and Rendering Tests for napari-mcp +# These tests evaluate advanced rendering techniques and complex visualization setups +# Focus: 3D rendering, iso-surfaces, volume rendering, MIPs, and sophisticated visualization scenarios + +# Test 1: 3D View and Iso-surface Rendering +- vars: + question: | + Load the image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif". + Switch to 3D view mode. + Enable iso-surface rendering for the loaded layer. + Take a screenshot to verify the 3D iso-surface rendering. + Respond with <1> if the 3D iso-surface rendering was successfully set up, or <0> if it failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: MIP (Maximum Intensity Projection) with Multi-channel +- vars: + question: | + Load the second channel "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch1_t14.tif". + Switch to 3D view and create Maximum Intensity Projections (MIPs) for both channels. + Set channel 0 to green colormap and channel 1 to magenta colormap. + Use additive blending to combine the channels in MIP mode. + Take a screenshot of the MIP result. + Analyze the screenshot and respond with <1> if both colors are present in the MIP, or <0> otherwise. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: Volume Rendering with Surface Conversion +- vars: + question: | + Switch to volume rendering mode for the 3D data. + Take a screenshot to verify the volume rendering is active. + Convert the volume data to surface rendering using iso-surface. + Take another screenshot to verify the surface rendering. + Respond with <1> if both volume and surface rendering were successfully created, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: Complex Multi-layer 3D Visualization +- vars: + question: | + Create a complex 3D visualization setup: set different blending modes for each layer (translucent for layer 1, additive for layer 2). + Adjust opacities to 0.6 for layer 1 and 0.8 for layer 2. + Apply different colormaps: 'hot' for layer 1 and 'cool' for layer 2. + Set different interpolation modes: 'linear' for layer 1 and 'cubic' for layer 2. + Enable iso-surface rendering for both layers. + Take a screenshot showing the complex multi-layer 3D setup. 
+ Respond with <1> if the complex 3D visualization setup was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: Advanced 3D Camera Control and Navigation +- vars: + question: | + Start with the default 3D view and take an initial screenshot. + Rotate the camera to show the 3D data from a different perspective (side view). + Take a screenshot to verify the 3D camera rotation. + Zoom in on the 3D structures so they appear larger in the viewport. + Take a screenshot to verify the 3D zoom. + Pan the camera to move the 3D view to show a different region. + Take a screenshot to verify the 3D pan. + Reset the camera to the default 3D view. + Take a final screenshot to verify the 3D reset. + Respond with <1> if all 3D camera operations were successful, or <0> if any failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: Iso-surface Threshold Adjustment +- vars: + question: | + Adjust the iso-surface threshold to different values to explore different surface levels. + Start with a low threshold (e.g., 0.1) and take a screenshot. + Increase the threshold to a medium value (e.g., 0.5) and take a screenshot. + Increase the threshold to a high value (e.g., 0.9) and take a screenshot. + Respond with <1> if you successfully adjusted iso-surface thresholds and could see different surface levels, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: Performance Test with 3D Rendering +- vars: + question: | + Load a large 3D image file and measure the time it takes to load. + Switch to 3D view and enable iso-surface rendering. 
+ Apply various 3D visualization settings (colormap, contrast, blending, iso-surface threshold). + Take a screenshot to verify the 3D rendering performance. + Respond with <1> if the large 3D file loaded successfully and 3D rendering was responsive, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 8: Multi-channel 3D Overlay with Different Rendering Modes +- vars: + question: | + Set up a multi-channel 3D visualization: channel 0 with iso-surface rendering, channel 1 with volume rendering. + Apply different colormaps and blending modes to each channel. + Adjust the camera to show both rendering modes effectively. + Take a screenshot showing the combined 3D rendering modes. + Respond with <1> if the multi-channel 3D overlay with different rendering modes was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 9: Cleanup - Reset for next test run +- vars: + question: | + Delete all loaded layers and reset the view to 2D mode. + Remove any annotations or measurements. + Respond with <1> if the cleanup was successful, or <0> if it failed. Only respond with <1> or <0>. 
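The MIP and additive green/magenta blending exercised by the 3D-rendering tests above reduce to simple array operations. A minimal numpy sketch, using synthetic data as a stand-in for the real TIFF stacks (the shapes and values here are hypothetical, not taken from the dataset):

```python
import numpy as np

# Synthetic two-channel z-stack standing in for the real TIFF data (hypothetical values).
rng = np.random.default_rng(0)
ch0 = rng.random((8, 64, 64))  # (z, y, x)
ch1 = rng.random((8, 64, 64))

# Maximum Intensity Projection: collapse the z axis by taking the per-pixel maximum.
mip0 = ch0.max(axis=0)
mip1 = ch1.max(axis=0)

# Green/magenta additive blend: green contributes (0, g, 0), magenta (m, 0, m),
# so regions where both channels are bright drift toward white/yellow-ish hues.
rgb = np.zeros(mip0.shape + (3,))
rgb[..., 1] += mip0           # green channel
rgb[..., 0] += mip1           # magenta -> red
rgb[..., 2] += mip1           # magenta -> blue
rgb = np.clip(rgb, 0.0, 1.0)  # additive blending saturates at full intensity
```

This is why Test 2's assertion can check for both colors in one screenshot: each source channel survives the projection into a distinct RGB component.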
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/napari_mcp_evals/tasks/1_workflows/eval_multi_dim_viewing.yaml b/napari_mcp_evals/tasks/1_workflows/eval_multi_dim_viewing.yaml new file mode 100644 index 0000000000000000000000000000000000000000..50dc666b90764783b707201e32841871a5f5cfd8 --- /dev/null +++ b/napari_mcp_evals/tasks/1_workflows/eval_multi_dim_viewing.yaml @@ -0,0 +1,124 @@ +# Multi-dimensional Data Navigation Tests for napari-mcp +# These tests evaluate navigation and exploration of multi-dimensional data (z-stack, time, channels) +# Focus: Data exploration, dimension navigation, and multi-dimensional data handling + +# Test 1: Setup - Load multi-dimensional data +- vars: + question: | + Load the "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif" file in Napari. + This is a multi-dimensional image with z-stack, time, and channel dimensions. + Respond with <1> if the file was successfully loaded or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: Dimension Information Retrieval +- vars: + question: | + Get dimension information to understand the data structure (z-stack, time, channels). + Examine the number of steps in each dimension and current positions. + Respond with <1> if you successfully retrieved dimension information, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: Z-stack Navigation - Scroll through different depths +- vars: + question: | + Navigate through the z-stack of the loaded image. Use set_z_slice to jump to at least 3 different z-slices to examine structures at different depths. 
+ Take a screenshot at each z-slice to verify navigation. + Respond with <1> if you successfully navigated through different z-slices and could see structural changes, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: Channel Navigation - Switch between channels +- vars: + question: | + Load the second channel "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch1_t14.tif". + Use set_channel to switch between channel 0 and channel 1. + Take screenshots showing each channel separately. + Respond with <1> if you successfully navigated between channels, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: Time Series Navigation - Navigate through time points +- vars: + question: | + If the data has time dimensions, use set_timestep to navigate through different time points. + Take screenshots at different time points to show temporal changes. + If no time dimension exists, simulate time navigation by adjusting the current view and taking multiple screenshots. + Respond with <1> if you successfully navigated through time points or simulated time navigation, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: Multi-dimensional Data Exploration +- vars: + question: | + Combine navigation across multiple dimensions: switch to a specific z-slice, then to a specific channel, then to a specific time point. + Take screenshots to document the multi-dimensional exploration. + Respond with <1> if you successfully explored the data across multiple dimensions, or <0> if it failed. Only respond with <1> or <0>. 
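Dimension navigation like the `set_z_slice` / `set_channel` / `set_timestep` calls above ultimately selects a 2D plane out of a multi-dimensional array. A sketch with a hypothetical stack ordered (t, z, c, y, x), matching the axis labels suggested in Test 7 (the shape is illustrative, not the real data's):

```python
import numpy as np

# Hypothetical 5D stack ordered (t, z, c, y, x).
data = np.zeros((5, 10, 2, 64, 64), dtype=np.uint16)

t, z, c = 2, 7, 1       # the timestep, z-slice, and channel the viewer is "parked" on
plane = data[t, z, c]   # the 2D (y, x) plane that would be displayed

# Scrolling through the z-stack, as in Test 3, just varies one index.
z_planes = [data[t, zi, c] for zi in (0, 4, 9)]
```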
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: Dimension Labeling and Organization +- vars: + question: | + Set appropriate axis labels for the multi-dimensional data (e.g., ['t', 'z', 'c', 'y', 'x'] for time, z-stack, channel, y, x). + Take a screenshot showing the labeled dimensions. + Respond with <1> if you successfully set dimension labels, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 8: Cleanup - Reset for next test run +- vars: + question: | + Delete all loaded layers and reset the view to prepare for the next test run. + Respond with <1> if all layers were successfully deleted and the view was reset, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true \ No newline at end of file diff --git a/napari_mcp_evals/tasks/1_workflows/eval_visualization_workflows.yaml b/napari_mcp_evals/tasks/1_workflows/eval_visualization_workflows.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c4f326c6ab198725c11ab56bcc6c288b4229da4b --- /dev/null +++ b/napari_mcp_evals/tasks/1_workflows/eval_visualization_workflows.yaml @@ -0,0 +1,135 @@ +# Basic Visualization Workflow Tests for napari-mcp +# These tests evaluate basic visualization techniques and layer management +# Focus: Colormaps, blending modes, opacity, contrast, and basic layer operations + +# Test 1: Multi-channel Overlay with Colormaps +- vars: + question: | + Load the first image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif" and then load the second image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch1_t14.tif". 
+ Set the first channel to green colormap and the second channel to magenta colormap. + Use additive blending for both channels to create an overlay visualization. + Take a screenshot of the result. + Respond with <1> if both channels are visible with their respective colors in the overlay, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: Colormap Variety and Selection +- vars: + question: | + Apply different colormaps to each channel: set channel 0 to 'viridis' and channel 1 to 'plasma'. + Then change channel 0 to 'hot' and channel 1 to 'cool'. + Take screenshots showing both colormap combinations. + Respond with <1> if you successfully applied different colormaps to each channel, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: Contrast and Gamma Adjustment +- vars: + question: | + Auto-adjust contrast for both channels using auto_contrast. + Then manually fine-tune the contrast limits: set channel 0 to [0.1, 0.9] and channel 1 to [0.2, 0.8]. + Adjust gamma correction to 1.2 for channel 0 and 0.8 for channel 1. + Take a screenshot showing the enhanced visualization. + Respond with <1> if the contrast and gamma adjustments were successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: Blending Mode Variations +- vars: + question: | + Test different blending modes: set channel 0 to 'opaque', channel 1 to 'translucent'. + Take a screenshot to show this combination. + Change to: channel 0 to 'additive', channel 1 to 'minimum'. + Take a screenshot to show this combination. 
+ Respond with <1> if you successfully tested different blending modes, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: Layer Opacity and Visibility Management +- vars: + question: | + Set the opacity of channel 0 to 0.3 and channel 1 to 0.7. + Take a screenshot showing the different opacities. + Toggle the visibility of channel 0 to hide it, then show it again. + Take screenshots showing the visibility changes. + Respond with <1> if the opacity and visibility management was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: Interpolation and Rendering Quality +- vars: + question: | + Set interpolation to 'nearest' for channel 0 and 'linear' for channel 1. + Take a screenshot showing the different interpolation modes. + Change channel 0 to 'cubic' interpolation. + Take a screenshot showing the cubic interpolation. + Respond with <1> if you successfully tested different interpolation modes, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: Scale Bar and Display Elements +- vars: + question: | + Show the scale bar with unit 'um' for the visualization. + Take a screenshot showing the scale bar. + Hide the scale bar, then show it again with unit 'nm'. + Take a screenshot showing the scale bar with different units. + Respond with <1> if the scale bar management was successful, or <0> if it failed. Only respond with <1> or <0>. 
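The contrast-limit, gamma, opacity, and blending settings these workflow tests adjust can be modeled as per-pixel arithmetic. A toy sketch of the conventions involved (illustrative only, not napari's exact shader math):

```python
import numpy as np

def apply_contrast_gamma(img, lo, hi, gamma):
    """Rescale intensities into the [lo, hi] contrast window, then gamma-correct."""
    scaled = np.clip((np.asarray(img, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    return scaled ** gamma

def composite(bottom, top, mode, opacity=1.0):
    """Toy compositor for the blending modes named in Tests 4-5."""
    top = top * opacity
    if mode == "opaque":
        return top
    if mode == "translucent":
        return top + bottom * (1.0 - opacity)
    if mode == "additive":
        return np.clip(bottom + top, 0.0, 1.0)
    if mode == "minimum":
        return np.minimum(bottom, top)
    raise ValueError(f"unknown blending mode: {mode}")

a = np.full((4, 4), 0.6)  # bottom layer
b = np.full((4, 4), 0.7)  # top layer
```

For example, `composite(a, b, "translucent", 0.7)` mixes 70% top with 30% bottom, while `"additive"` sums and saturates, which is why the overlay tests use it to keep both channels visible.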
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 8: Cleanup - Reset for next test run +- vars: + question: | + Delete all loaded layers and reset the view to 2D mode to prepare for the next test run. + Respond with <1> if all layers were successfully deleted and the view was reset, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/napari_mcp_evals/tasks/2_scientific_tasks/eval_figure_recreation.yaml b/napari_mcp_evals/tasks/2_scientific_tasks/eval_figure_recreation.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5f810fcfaa7324a8ca921342435cb3f18fe57f93 --- /dev/null +++ b/napari_mcp_evals/tasks/2_scientific_tasks/eval_figure_recreation.yaml @@ -0,0 +1,32 @@ +# Figure Recreation Tests for napari-mcp +# These tests evaluate the agent's ability to recreate scientific figures from papers +# Focus: Loading data, applying appropriate visualization settings, and matching target figures + +# Test 1: Recreate C. elegans Infection Figure from BBBC012 Dataset +- vars: + question: | + Load the three input images from the BBBC012_v1_images dataset: + - "D:/Development/napari-mcp/eval/data/BBBC012_v1_images/input/101210OranePlt2_N13_w1_[723EE621-1D8A-4ED8-9349-884F1342561E].tif" + - "D:/Development/napari-mcp/eval/data/BBBC012_v1_images/input/101210OranePlt2_N13_w2_[6C35EF3F-DA59-4416-8D54-435D195DB70E].tif" + - "D:/Development/napari-mcp/eval/data/BBBC012_v1_images/input/101210OranePlt2_N13_w3_[F8D79BE3-2A04-4775-858C-AF8326D2CC1A].tif" + + Apply appropriate colormaps to each channel to recreate the C. elegans infection figure: + - Channel 1 (w1): Use a colormap that shows the C. elegans worms clearly
+ - Channel 2 (w2): Use a colormap that shows the infection markers + - Channel 3 (w3): Use a colormap that shows additional cellular structures + + Set appropriate blending modes and opacities to combine the channels effectively. + Adjust contrast and gamma as needed to match the target figure. + Take a screenshot of your recreation. + + Compare your screenshot with the target figure "D:/Development/napari-mcp/eval/data/BBBC012_v1_images/assertions/target_celegans_infection_figure.png". + + Respond with <1> if you successfully recreated the figure and it closely matches the target, or <0> if it failed or doesn't match well. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/napari_mcp_evals/tasks/2_scientific_tasks/eval_iso_surface_extraction.yaml b/napari_mcp_evals/tasks/2_scientific_tasks/eval_iso_surface_extraction.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8fdcbf2c7879e4e8bf06acbccb3319ecdcbcb088 --- /dev/null +++ b/napari_mcp_evals/tasks/2_scientific_tasks/eval_iso_surface_extraction.yaml @@ -0,0 +1,211 @@ +# Iso-surface Extraction Tests for napari-mcp +# These tests evaluate iso-surface extraction with adaptive threshold adjustment +# Focus: Visual feedback-based iso-surface threshold optimization and cell segmentation + +# Test 1: Setup - Load cell data and switch to 3D view +- vars: + question: | + Load the image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif". + Switch to 3D view mode to enable iso-surface rendering. + Take a screenshot of the initial 3D view. + Respond with <1> if the 3D view was successfully set up, or <0> if it failed. Only respond with <1> or <0>.
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: Initial Iso-surface with Low Threshold +- vars: + question: | + Enable iso-surface rendering with a low threshold value of 0.1. + Take a screenshot to see the initial iso-surface. + Analyze the screenshot and describe what you see - are the cells visible as surfaces? + Respond with <1> if the low threshold iso-surface was applied and you can see surface structures, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 3: Medium Threshold Adjustment +- vars: + question: | + Adjust the iso-surface threshold to a medium value of 0.3. + Take a screenshot to see how the iso-surface changes. + Compare this with the previous screenshot - are the cell surfaces more defined now? + Respond with <1> if the medium threshold was applied and the cell surfaces appear more defined, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 4: High Threshold for Cell Boundaries +- vars: + question: | + Increase the iso-surface threshold to a high value of 0.6. + Take a screenshot to see the high threshold iso-surface. + Analyze the screenshot - do you see clear cell boundaries and distinct cellular structures? + Respond with <1> if the high threshold shows clear cell boundaries and distinct structures, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 5: Very High Threshold for Cell Centers +- vars: + question: | + Set the iso-surface threshold to a very high value of 0.8. 
+ Take a screenshot to see the very high threshold iso-surface. + Analyze the screenshot - do you see only the brightest parts of the cells (likely cell centers)? + Respond with <1> if the very high threshold shows only the brightest cell regions, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 6: Optimal Threshold Selection +- vars: + question: | + Based on the previous screenshots, select what you think is the optimal iso-surface threshold for clearly visualizing the cell structures. + Apply this optimal threshold and take a screenshot. + Describe why you chose this threshold value based on the visual results. + Respond with <1> if you successfully applied an optimal threshold that clearly shows cell structures, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 7: Iso-surface with Different Colormaps +- vars: + question: | + Keep the optimal iso-surface threshold and try different colormaps to enhance the visualization. + Test colormaps like 'viridis', 'plasma', and 'hot' on the iso-surface. + Take screenshots showing the iso-surface with different colormaps. + Respond with <1> if you successfully applied different colormaps to the iso-surface, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 8: Camera Navigation for Iso-surface Inspection +- vars: + question: | + Rotate the camera to view the iso-surface from different angles (front, side, top). + Take screenshots from at least 3 different viewing angles. + Analyze the screenshots to see if the cell structures are visible from all angles. 
+ Respond with <1> if you successfully inspected the iso-surface from multiple angles and could see cell structures, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 9: Iso-surface Threshold Fine-tuning +- vars: + question: | + Fine-tune the iso-surface threshold by making small adjustments (e.g., 0.05 increments). + Test thresholds around your optimal value (e.g., optimal-0.05, optimal, optimal+0.05). + Take screenshots to compare the fine-tuned results. + Respond with <1> if you successfully fine-tuned the iso-surface threshold and could see subtle differences, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 10: Iso-surface with Opacity Adjustment +- vars: + question: | + Adjust the opacity of the iso-surface to make it semi-transparent (e.g., 0.7). + Take a screenshot to see the semi-transparent iso-surface. + Try different opacity values (0.5, 0.8, 1.0) and take screenshots. + Respond with <1> if you successfully adjusted the iso-surface opacity and could see the transparency effects, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 11: Cell Counting from Iso-surface +- vars: + question: | + Use the optimal iso-surface threshold to count the number of distinct cell structures visible. + Take a screenshot and analyze it to count the cells. + Add point annotations to mark the center of each counted cell. + Respond with the number of cells you can count from the iso-surface, for example "5" if you see 5 distinct cell structures. 
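The threshold sweep in Tests 2-5 above relies on a simple invariant: raising the iso-surface threshold can only shrink the enclosed region, so low thresholds show whole cells and very high thresholds show only the brightest centers. A numpy sketch on synthetic normalized intensities (the volume here is random stand-in data, not the cell stack):

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((32, 32, 32))  # synthetic normalized intensities in [0, 1)

# Fraction of voxels at or above each threshold; the iso-surface encloses
# exactly this region, so it shrinks monotonically as the threshold rises.
fractions = {t: float((volume >= t).mean()) for t in (0.1, 0.3, 0.6, 0.8)}
```

An agent fine-tuning the threshold (Test 9) is effectively searching this monotone curve for the value where the enclosed region matches the visible cell boundaries.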
+ assert: + - type: llm-rubric + value: + - It counted the visible cell structures from the iso-surface + - The count is reasonable based on the cell data + options: + cache: false + runSerially: true + +# Test 12: Iso-surface Quality Assessment +- vars: + question: | + Assess the quality of the iso-surface extraction by examining the smoothness and completeness of the cell surfaces. + Take a screenshot and describe the quality of the iso-surface (smooth, jagged, complete, incomplete). + Try adjusting the interpolation mode to 'linear' or 'cubic' to improve surface quality. + Take another screenshot to compare the surface quality. + Respond with <1> if you successfully assessed and improved the iso-surface quality, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 13: Cleanup - Reset for next test run +- vars: + question: | + Delete the loaded layer and reset the view to 2D mode. + Remove any annotations or measurements. + Respond with <1> if the cleanup was successful, or <0> if it failed. Only respond with <1> or <0>. 
+ assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/napari_mcp_evals/tasks/2_scientific_tasks/eval_scene_understanding.yaml b/napari_mcp_evals/tasks/2_scientific_tasks/eval_scene_understanding.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0b5aaf048972cfb1d415915bd2d55f1606aed0e8 --- /dev/null +++ b/napari_mcp_evals/tasks/2_scientific_tasks/eval_scene_understanding.yaml @@ -0,0 +1,226 @@ +# Scene Understanding Tests for napari-mcp +# These tests evaluate the agent's ability to analyze and interpret visual content from screenshots +# Focus: Visual analysis, object counting, color identification, and scene interpretation + +# Test 1: Setup - Load multi-channel cell data +- vars: + question: | + Load the first image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch0_t14.tif" and then load the second image "D:/Development/napari-mcp/eval/data/SNAP_IgM_BCR_Cell_1/input/SNAP_IgM_BCR_Cell_1_ch1_t14.tif". + Set the first channel to green colormap and the second channel to magenta colormap. + Use additive blending to combine the channels. + Take a screenshot of the combined view. + Respond with <1> if the multi-channel setup was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true + +# Test 2: Count Green Cells +- vars: + question: | + Take a screenshot of the current view and analyze it. + Count how many green-colored cells you can see in the screenshot. + Only count cells that are completely visible (not cut off by the edges of the view). + Respond with the number of green cells you can count, for example "3" if you see 3 green cells. 
+ assert: + - type: llm-rubric + value: + - It counted the green cells visible in the screenshot + - The count is reasonable based on the cell data + - Only counted complete cells, not partial ones + options: + cache: false + runSerially: true + +# Test 3: Count Magenta Cells +- vars: + question: | + Take a screenshot of the current view and analyze it. + Count how many magenta-colored cells you can see in the screenshot. + Only count cells that are completely visible (not cut off by the edges of the view). + Respond with the number of magenta cells you can count, for example "2" if you see 2 magenta cells. + assert: + - type: llm-rubric + value: + - It counted the magenta cells visible in the screenshot + - The count is reasonable based on the cell data + - Only counted complete cells, not partial ones + options: + cache: false + runSerially: true + +# Test 4: Count Yellow/Overlapping Cells +- vars: + question: | + Take a screenshot of the current view and analyze it. + Look for cells that appear yellow or have a mixed color (where green and magenta channels overlap). + Count how many yellow or mixed-color cells you can see. + Respond with the number of yellow/mixed-color cells you can count, for example "1" if you see 1 yellow cell. + assert: + - type: llm-rubric + value: + - It identified and counted yellow/mixed-color cells + - The count is reasonable based on the overlapping channels + - Only counted complete cells, not partial ones + options: + cache: false + runSerially: true + +# Test 5: Total Cell Count +- vars: + question: | + Take a screenshot of the current view and analyze it. + Count the total number of distinct cells you can see, regardless of color. + Include green cells, magenta cells, and yellow/mixed-color cells. + Only count cells that are completely visible (not cut off by the edges). + Respond with the total number of cells you can count, for example "6" if you see 6 total cells. 
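The green/magenta/yellow counting tests depend on the fact that, under additive blending, pixels where both colormaps are bright render yellow-ish. A threshold-based pixel-classification sketch (the cutoff value and random data are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
green = rng.random((64, 64))    # stand-in for channel 0 intensity
magenta = rng.random((64, 64))  # stand-in for channel 1 intensity

thr = 0.8  # hypothetical "signal present" cutoff
green_only   = (green >= thr) & (magenta <  thr)
magenta_only = (magenta >= thr) & (green <  thr)
overlap      = (green >= thr) & (magenta >= thr)  # renders yellow-ish under additive blending
```

The three masks partition the above-threshold pixels, which is why the per-color counts in Tests 2-4 should be consistent with the total in Test 5.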
+ assert: + - type: llm-rubric + value: + - It counted all distinct cells regardless of color + - The total count is reasonable and consistent with previous counts + - Only counted complete cells, not partial ones + options: + cache: false + runSerially: true + +# Test 6: Cell Size Analysis +- vars: + question: | + Take a screenshot of the current view and analyze it. + Identify the largest cell and the smallest cell you can see. + Describe the relative sizes of the cells (e.g., "most cells are similar in size" or "there is one very large cell"). + Respond with a description of the cell size distribution you observe. + assert: + - type: llm-rubric + value: + - It identified size differences between cells + - The description is reasonable based on the cell data + - It provided meaningful observations about cell size distribution + options: + cache: false + runSerially: true + +# Test 7: Cell Distribution Analysis +- vars: + question: | + Take a screenshot of the current view and analyze it. + Describe how the cells are distributed across the field of view. + Are they clustered together, evenly distributed, or randomly scattered? + Respond with a description of the spatial distribution pattern you observe. + assert: + - type: llm-rubric + value: + - It analyzed the spatial distribution of cells + - The description is reasonable based on the cell data + - It provided meaningful observations about cell arrangement + options: + cache: false + runSerially: true + +# Test 8: Color Intensity Analysis +- vars: + question: | + Take a screenshot of the current view and analyze it. + Compare the intensity/brightness of the green cells versus the magenta cells. + Are the green cells brighter, dimmer, or similar in intensity to the magenta cells? + Respond with a comparison of the color intensities you observe. 
+ assert: + - type: llm-rubric + value: + - It compared the intensity of different colored cells + - The comparison is reasonable based on the cell data + - It provided meaningful observations about color intensity + options: + cache: false + runSerially: true + +# Test 9: Cell Shape Analysis +- vars: + question: | + Take a screenshot of the current view and analyze it. + Describe the shapes of the cells you can see. + Are they round, oval, irregular, or do they have other shapes? + Respond with a description of the cell shapes you observe. + assert: + - type: llm-rubric + value: + - It analyzed the shapes of the cells + - The description is reasonable based on the cell data + - It provided meaningful observations about cell morphology + options: + cache: false + runSerially: true + +# Test 10: Scene Summary +- vars: + question: | + Take a screenshot of the current view and analyze it. + Provide a comprehensive summary of what you see in the scene. + Include: total cell count, color distribution, size distribution, spatial arrangement, and any other notable features. + Respond with a detailed summary of the scene analysis. + assert: + - type: llm-rubric + value: + - It provided a comprehensive scene summary + - The summary includes multiple aspects of the analysis + - The observations are consistent with previous individual analyses + - It demonstrated good scene understanding capabilities + options: + cache: false + runSerially: true + +# Test 11: Change View and Re-analyze +- vars: + question: | + Zoom in on the cells so they appear larger in the viewport. + Take a screenshot of the zoomed-in view. + Count the cells again in this new view and compare with your previous count. + Respond with the new cell count and whether it matches your previous count. 
+ assert: + - type: llm-rubric + value: + - It successfully zoomed in and took a new screenshot + - It counted cells in the new view + - It compared the new count with the previous count + - The analysis is consistent with the zoom operation + options: + cache: false + runSerially: true + +# Test 12: Switch to 3D and Analyze +- vars: + question: | + Switch to 3D view mode. + Take a screenshot of the 3D view. + Analyze the 3D scene and count the cells you can see from this perspective. + Compare the 3D cell count with your previous 2D counts. + Respond with the 3D cell count and your comparison. + assert: + - type: llm-rubric + value: + - It successfully switched to 3D view and took a screenshot + - It counted cells in the 3D view + - It compared the 3D count with previous 2D counts + - The analysis is reasonable for the 3D perspective + options: + cache: false + runSerially: true + +# Test 13: Cleanup - Reset for next test run +- vars: + question: | + Delete all loaded layers and reset the view to 2D mode. + Respond with <1> if the cleanup was successful, or <0> if it failed. Only respond with <1> or <0>. + assert: + - type: contains-all + value: "<1>" + - type: not-contains + value: "<0>" + options: + cache: false + runSerially: true diff --git a/raw_to_tif.py b/raw_to_tif.py new file mode 100644 index 0000000000000000000000000000000000000000..64a22019cf1f024a7286e7cd3fa6b429a6bdc05f --- /dev/null +++ b/raw_to_tif.py @@ -0,0 +1,380 @@ +import os +import re +import numpy as np +from PIL import Image +import glob +from pathlib import Path + + +def parse_txt_file(txt_file_path): + """ + Parse the accompanying txt file to extract metadata. 
+ Expected format: + - Name (Scalar/Vector) + - Data Scalar Type: unsigned char/unsigned short/float + - Data Byte Order: little Endian/big Endian + - Data Spacing: 1x1x1 (optional) + - Data Extent: 256x256x256 + - Number of Scalar Components: 1/3 (for vector data) + """ + txt_file_path = Path(txt_file_path) + + if not txt_file_path.exists(): + raise FileNotFoundError(f"Text file not found: {txt_file_path}") + + metadata = {} + + with open(txt_file_path, 'r') as f: + for line in f: + line = line.strip() + if not line: + continue + + if line.endswith('(Scalar)') or line.endswith('(Vector)'): + metadata['name'] = line.split(' (')[0] + metadata['data_type'] = line.split(' (')[1].rstrip(')') + elif not metadata.get('name') and not line.startswith('Description:') and not line.startswith('Data '): + # Handle case where first line is just the name without (Scalar)/(Vector) + metadata['name'] = line + metadata['data_type'] = 'Scalar' # Default assumption + elif line.startswith('Data Scalar Type:'): + scalar_type = line.split(': ')[1] + metadata['scalar_type'] = scalar_type + elif line.startswith('Data Type:'): + # Handle format like "Data Type: uint8" + dtype_str = line.split(': ')[1] + # Map numpy dtype strings to scalar type strings + dtype_to_scalar_mapping = { + 'uint8': 'unsigned char', + 'uint16': 'unsigned short', + 'uint32': 'unsigned int', + 'int8': 'char', + 'int16': 'short', + 'int32': 'int', + 'float32': 'float', + 'float64': 'double', + } + scalar_type = dtype_to_scalar_mapping.get(dtype_str, dtype_str) + metadata['scalar_type'] = scalar_type + elif line.startswith('Data Byte Order:'): + byte_order = line.split(': ')[1] + metadata['byte_order'] = byte_order + elif line.startswith('Data Spacing:'): + spacing = line.split(': ')[1] + metadata['spacing'] = spacing + elif line.startswith('Data Extent:'): + extent = line.split(': ')[1] + # Parse dimensions from extent (e.g., "256x256x256") + dimensions = [int(x) for x in extent.split('x')] + metadata['width'] = dimensions[0]
+ metadata['height'] = dimensions[1] + metadata['depth'] = dimensions[2] + elif line.startswith('Number of Scalar Components:'): + components = int(line.split(': ')[1]) + metadata['scalar_components'] = components + + return metadata + + +def get_numpy_dtype(scalar_type, byte_order='little Endian'): + """Convert string scalar type to numpy dtype with endianness.""" + # Map scalar types to numpy dtypes + dtype_mapping = { + 'unsigned char': np.uint8, + 'unsigned short': np.uint16, + 'unsigned int': np.uint32, + 'char': np.int8, + 'short': np.int16, + 'int': np.int32, + 'float': np.float32, + 'double': np.float64, + } + + if scalar_type not in dtype_mapping: + raise ValueError(f"Unsupported scalar type: {scalar_type}") + + base_dtype = dtype_mapping[scalar_type] + + # Handle endianness - create a dtype object first, then set byte order + if byte_order.lower() == 'little endian': + return np.dtype(base_dtype).newbyteorder('<') + elif byte_order.lower() == 'big endian': + return np.dtype(base_dtype).newbyteorder('>') + else: + # Default to little endian + return np.dtype(base_dtype).newbyteorder('<') + + +def parse_filename_fallback(filename): + """ + Parse filename to extract dimensions and channel information as fallback.
+    Expected format: name_widthxheightxdepth_datatype[_scalarN].raw
+    Examples:
+    - bonsai_256x256x256_uint8.raw (1 channel)
+    - tornado_64x64x64_float32_scalar3.raw (3 channels)
+    """
+    # Remove .raw extension
+    name_without_ext = filename.replace('.raw', '')
+
+    # Pattern to match: name_widthxheightxdepth_datatype[_scalarN]
+    pattern = r'(.+)_(\d+)x(\d+)x(\d+)_(.+?)(?:_scalar(\d+))?$'
+    match = re.match(pattern, name_without_ext)
+
+    if not match:
+        raise ValueError(f"Filename {filename} doesn't match expected pattern")
+
+    name, width, height, depth, dtype, scalar_components = match.groups()
+
+    # Default to 1 component if not specified
+    if scalar_components is None:
+        scalar_components = 1
+    else:
+        scalar_components = int(scalar_components)
+
+    return {
+        'name': name,
+        'width': int(width),
+        'height': int(height),
+        'depth': int(depth),
+        'dtype': dtype,
+        'scalar_components': scalar_components
+    }
+
+
+def convert_raw_to_tif(raw_file_path, output_dir=None):
+    """
+    Convert a raw file to TIFF format.
+
+    Args:
+        raw_file_path (str): Path to the raw file
+        output_dir (str): Directory to save the TIFF file. If None, saves in same directory as raw file.
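+
+    Example (the path is illustrative):
+
+        convert_raw_to_tif("bonsai/data/bonsai_256x256x256_uint8.raw")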
+ + Returns: + str: Path to the created TIFF file + """ + raw_file_path = Path(raw_file_path) + + if not raw_file_path.exists(): + raise FileNotFoundError(f"Raw file not found: {raw_file_path}") + + # Find the accompanying txt file + # The txt file is named after the parent's parent folder (e.g., bonsai/data/bonsai.txt) + txt_file_path = raw_file_path.parent / f"{raw_file_path.parent.parent.name}.txt" + + # Try to parse txt file first, but fall back to filename if needed + metadata = {} + use_filename_fallback = False + + if txt_file_path.exists(): + try: + metadata = parse_txt_file(txt_file_path) + except Exception as e: + print(f"Warning: Could not parse txt file {txt_file_path}: {e}") + use_filename_fallback = True + else: + print(f"Warning: Text file not found: {txt_file_path}") + use_filename_fallback = True + + # Read raw file + with open(raw_file_path, 'rb') as f: + raw_data = f.read() + + # Get dimensions and dtype - try txt file first, fall back to filename + if use_filename_fallback: + print(f"Using filename fallback for {raw_file_path.name}") + file_info = parse_filename_fallback(raw_file_path.name) + width, height, depth = file_info['width'], file_info['height'], file_info['depth'] + scalar_components = file_info['scalar_components'] + + # Map filename dtype to scalar type for consistency + dtype_mapping = { + 'uint8': 'unsigned char', + 'uint16': 'unsigned short', + 'uint32': 'unsigned int', + 'int8': 'char', + 'int16': 'short', + 'int32': 'int', + 'float32': 'float', + 'float64': 'double', + } + scalar_type = dtype_mapping.get(file_info['dtype'], 'float') + byte_order = 'little Endian' # Default assumption + else: + width, height, depth = metadata['width'], metadata['height'], metadata['depth'] + scalar_components = metadata.get('scalar_components', 1) + scalar_type = metadata['scalar_type'] + byte_order = metadata.get('byte_order', 'little Endian') + + # Convert to numpy array with proper dtype and endianness + numpy_dtype = 
get_numpy_dtype(scalar_type, byte_order) + array = np.frombuffer(raw_data, dtype=numpy_dtype) + + # Calculate expected array size + expected_size = width * height * depth * scalar_components + + # Check if file size matches expected dimensions + if len(array) != expected_size: + if not use_filename_fallback: + print(f"File size mismatch with txt file dimensions. Expected {expected_size}, got {len(array)}") + print(f"Falling back to filename parsing for {raw_file_path.name}") + file_info = parse_filename_fallback(raw_file_path.name) + width, height, depth = file_info['width'], file_info['height'], file_info['depth'] + scalar_components = file_info['scalar_components'] + + # Recalculate expected size + expected_size = width * height * depth * scalar_components + + if len(array) != expected_size: + raise ValueError(f"File size doesn't match filename dimensions either. " + f"Expected {expected_size} elements, got {len(array)}") + else: + raise ValueError(f"File size doesn't match expected dimensions. 
" + f"Expected {expected_size} elements, got {len(array)}") + + # Reshape array based on dimensions and components + if scalar_components == 1: + # Scalar data: reshape to 3D + volume = array.reshape((depth, height, width)) + volumes = [volume] # Single volume + else: + # Vector data: reshape to 4D (depth, height, width, components) + volume_4d = array.reshape((depth, height, width, scalar_components)) + # Split into separate channels + volumes = [volume_4d[:, :, :, ch] for ch in range(scalar_components)] + + # Determine output directory + if output_dir is None: + output_dir = raw_file_path.parent + else: + output_dir = Path(output_dir) + output_dir.mkdir(parents=True, exist_ok=True) + + # Create output filenames + base_filename = raw_file_path.stem + output_paths = [] + + # Process each volume (channel) + for ch_idx, volume in enumerate(volumes): + if scalar_components > 1: + output_filename = f"{base_filename}_ch{ch_idx}.tif" + else: + output_filename = f"{base_filename}.tif" + + output_path = output_dir / output_filename + + # Convert to PIL Image and save as TIFF + # For 3D data, we'll save as a multi-page TIFF + images = [] + + for i in range(depth): + # Extract 2D slice + slice_2d = volume[i, :, :] + + # Normalize data to 0-255 range for display + if use_filename_fallback: + scalar_type = scalar_type # Already set from filename parsing + else: + scalar_type = metadata['scalar_type'] + if scalar_type == 'float': + # Normalize float data + if slice_2d.max() > slice_2d.min(): + slice_normalized = ((slice_2d - slice_2d.min()) / + (slice_2d.max() - slice_2d.min()) * 255).astype(np.uint8) + else: + slice_normalized = np.zeros_like(slice_2d, dtype=np.uint8) + elif scalar_type == 'unsigned short': + # Scale uint16 to uint8 + slice_normalized = (slice_2d / 256).astype(np.uint8) + elif scalar_type == 'unsigned char': + # uint8 data, use as is + slice_normalized = slice_2d.astype(np.uint8) + else: + # For other types, normalize to 0-255 + if slice_2d.max() > 
slice_2d.min(): + slice_normalized = ((slice_2d - slice_2d.min()) / + (slice_2d.max() - slice_2d.min()) * 255).astype(np.uint8) + else: + slice_normalized = np.zeros_like(slice_2d, dtype=np.uint8) + + # Convert to PIL Image + img = Image.fromarray(slice_normalized, mode='L') + images.append(img) + + # Save as multi-page TIFF + if images: + images[0].save( + output_path, + save_all=True, + append_images=images[1:], + compression='tiff_deflate' # Use deflate compression instead of LZW for better napari compatibility + ) + + output_paths.append(str(output_path)) + + # Print conversion info for this channel + if scalar_components > 1: + print(f"Converted {raw_file_path.name} -> {output_path.name} (Channel {ch_idx})") + else: + print(f"Converted {raw_file_path.name} -> {output_path.name}") + + # Print summary info + if use_filename_fallback: + print(f" Name: {file_info['name']}") + print(f" Data Type: {'Vector' if scalar_components > 1 else 'Scalar'}") + print(f" Scalar Type: {scalar_type}") + print(f" Byte Order: {byte_order}") + else: + print(f" Name: {metadata['name']}") + print(f" Data Type: {metadata['data_type']}") + print(f" Scalar Type: {metadata['scalar_type']}") + print(f" Byte Order: {metadata.get('byte_order', 'little Endian')}") + + print(f" Dimensions: {width}x{height}x{depth}") + if scalar_components > 1: + print(f" Scalar Components: {scalar_components}") + print(f" Output files: {len(output_paths)} channels") + print(f" Output: {output_paths}") + + return output_paths + + +def main(): + """ + Main function to scan a folder and convert all raw files to TIFF. 
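+
+    Typical use, from the repository root (note that the input folder
+    scanned below is hard-coded and may need adjusting):
+
+        python raw_to_tif.py
+
+    Each .raw file found is converted to one multi-page .tif per channel,
+    written next to the source file.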
+    """
+    # Root folder to scan for .raw files (hard-coded; adjust to your checkout location)
+    input_folder = r"D:\Development\SciVisAgentBench-tasks"
+
+    print(f"Scanning directory: {input_folder}")
+
+    # Find all raw files recursively
+    raw_files = list(Path(input_folder).rglob("*.raw"))
+
+    if not raw_files:
+        print("No raw files found in the directory tree.")
+        return
+
+    print(f"Found {len(raw_files)} raw files:")
+    for raw_file in raw_files:
+        print(f"  - {raw_file}")
+
+    print("\nStarting conversion...")
+
+    converted_count = 0
+    failed_count = 0
+
+    for raw_file in raw_files:
+        try:
+            convert_raw_to_tif(raw_file)
+            converted_count += 1
+        except Exception as e:
+            print(f"Error converting {raw_file}: {e}")
+            failed_count += 1
+
+    print("\nConversion complete!")
+    print(f"Successfully converted: {converted_count} files")
+    print(f"Failed conversions: {failed_count} files")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/sci_volume_data/aneurism/data/aneurism.txt b/sci_volume_data/aneurism/data/aneurism.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9d64b7a2db337785b001c8d14a503f372429cbc4
--- /dev/null
+++ b/sci_volume_data/aneurism/data/aneurism.txt
@@ -0,0 +1,6 @@
+Aneurism
+Description: Rotational C-arm x-ray scan of the arteries of the right half of a human head. A contrast agent was injected into the blood and an aneurism is present.
+Data Type: uint8
+Data Byte Order: little Endian
+Data Spacing: 1x1x1
+Data Extent: 256x256x256
diff --git a/sci_volume_data/aneurism/task_description.txt b/sci_volume_data/aneurism/task_description.txt
new file mode 100644
index 0000000000000000000000000000000000000000..fd7a35a55cfdba9b5258dd13ed1288a126d859c2
--- /dev/null
+++ b/sci_volume_data/aneurism/task_description.txt
@@ -0,0 +1,16 @@
+Task:
+
+Load the Aneurism dataset from "aneurism/data/aneurism_256x256x256_uint8.raw", the information about this dataset:
+Aneurism
+Description: Rotational C-arm x-ray scan of the arteries of the right half of a human head. A contrast agent was injected into the blood and an aneurism is present.
+Data Type: uint8
+Data Byte Order: little Endian
+Data Spacing: 1x1x1
+Data Extent: 256x256x256
+Data loading is very important, make sure you correctly load the dataset according to their features.
+
+Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8).
+
+Please think step by step and make sure to fulfill all the visualization goals mentioned above.
+
+Finally, save the paraview state as "aneurism/results/aneurism.pvsm"
\ No newline at end of file
diff --git a/sci_volume_data/aneurism/visualization_goals.txt b/sci_volume_data/aneurism/visualization_goals.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4bcc41de89a475c9660beb0dca095b44dfc60668
--- /dev/null
+++ b/sci_volume_data/aneurism/visualization_goals.txt
@@ -0,0 +1,7 @@
+1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Aneurism dataset?
+
+2. Does the visualization clearly distinguish between different tissue types or density regions?
+
+3. Are the isosurfaces positioned at appropriate values to highlight key features?
+
+4. Is the color scheme and opacity appropriate for medical visualization?
\ No newline at end of file
diff --git a/sci_volume_data/backpack/data/backpack.txt b/sci_volume_data/backpack/data/backpack.txt
new file mode 100644
index 0000000000000000000000000000000000000000..cbda1eb8e22f19460197d9339029004039ebaee6
--- /dev/null
+++ b/sci_volume_data/backpack/data/backpack.txt
@@ -0,0 +1,6 @@
+Backpack Scan
+Description: CT scan of a backpack filled with items.
+Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.9766x0.9766x1.25 +Data Extent: 512x512x373 diff --git a/sci_volume_data/backpack/task_description.txt b/sci_volume_data/backpack/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3196ab00f53a8d491ea55e4e092ebf8dec2c93d --- /dev/null +++ b/sci_volume_data/backpack/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Backpack Scan dataset from "backpack/data/backpack_512x512x373_uint16.raw", the information about this dataset: +Backpack Scan +Description: CT scan of a backpack filled with items. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.9766x0.9766x1.25 +Data Extent: 512x512x373 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "backpack/results/backpack.pvsm" \ No newline at end of file diff --git a/sci_volume_data/backpack/visualization_goals.txt b/sci_volume_data/backpack/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..c35ae4a9004ce864eacd32d53e919e3f21e600d3 --- /dev/null +++ b/sci_volume_data/backpack/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Backpack Scan dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? 
\ No newline at end of file diff --git a/sci_volume_data/blunt_fin/data/blunt_fin.txt b/sci_volume_data/blunt_fin/data/blunt_fin.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6e09948d838ece0965e9930dbb16dd23ad4e170 --- /dev/null +++ b/sci_volume_data/blunt_fin/data/blunt_fin.txt @@ -0,0 +1,6 @@ +Blunt Fin +Description: +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x0.75x1 +Data Extent: 256x128x64 diff --git a/sci_volume_data/blunt_fin/task_description.txt b/sci_volume_data/blunt_fin/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..2ac46c0b2223542db397791772da8e4e693e6d12 --- /dev/null +++ b/sci_volume_data/blunt_fin/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the Blunt Fin dataset from "blunt_fin/data/blunt_fin_256x128x64_uint8.raw", the information about this dataset: +Blunt Fin +Description: +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x0.75x1 +Data Extent: 256x128x64 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it using appropriate techniques: +1. Apply volume rendering with a suitable transfer function to reveal internal structures +2. Extract at least one meaningful isosurface +3. Choose appropriate colors and opacity values for clarity + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "blunt_fin/results/blunt_fin.pvsm" \ No newline at end of file diff --git a/sci_volume_data/blunt_fin/visualization_goals.txt b/sci_volume_data/blunt_fin/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..c87638909ef073f8bff10d97109135220be41fcd --- /dev/null +++ b/sci_volume_data/blunt_fin/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the key features of the Blunt Fin dataset? + +2. 
Does the volume rendering provide good insight into the internal structure? + +3. Are the isosurfaces placed at meaningful values? + +4. Is the overall visualization clear and informative? \ No newline at end of file diff --git a/sci_volume_data/bonsai/.DS_Store b/sci_volume_data/bonsai/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..1d791381ef90611a5ace63ad0a08663d99a9721e Binary files /dev/null and b/sci_volume_data/bonsai/.DS_Store differ diff --git a/sci_volume_data/bonsai/GS/.DS_Store b/sci_volume_data/bonsai/GS/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..5008ddfcf53c02e82d7eee2e57c38e5672ef89f6 Binary files /dev/null and b/sci_volume_data/bonsai/GS/.DS_Store differ diff --git a/sci_volume_data/bonsai/GS/bonsai_gs.pvsm b/sci_volume_data/bonsai/GS/bonsai_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..ac26f52d295649eb9853ed29fb0e6e5155b249f0 --- /dev/null +++ b/sci_volume_data/bonsai/GS/bonsai_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa03b91b2e71ffbfef15c26dce85b93887ee4ffd1461759c11a46f640f604ac0 +size 235372 diff --git a/sci_volume_data/bonsai/GS/bonsai_gs.py b/sci_volume_data/bonsai/GS/bonsai_gs.py new file mode 100644 index 0000000000000000000000000000000000000000..077ec38b1350b01dbe70b07c9af99963765cc12e --- /dev/null +++ b/sci_volume_data/bonsai/GS/bonsai_gs.py @@ -0,0 +1,87 @@ +#!/usr/bin/env pvpython + +import os +from paraview.simple import * + +def create_bonsai_visualization(): + # — Paths & setup — + base = os.path.abspath(os.path.join(__file__, '..', '..')) + raw_file = os.path.join(base, 'data', 'bonsai_256x256x256_uint8.raw') + state_dir = os.path.join(base, 'results', 'pvpython_state') + state = os.path.join(state_dir, 'bonsai.pvsm') + os.makedirs(state_dir, exist_ok=True) + if not os.path.isfile(raw_file): + raise FileNotFoundError(f"Missing raw: {raw_file}") + + # — 1) Load the RAW image — + reader = 
ImageReader(FileNames=[raw_file]) + reader.DataScalarType = 'unsigned char' + reader.DataByteOrder = 'LittleEndian' + reader.DataExtent = [0, 255, 0, 255, 0, 255] + reader.DataSpacing = [1.0, 1.0, 1.0] + reader.FileDimensionality = 3 + reader.UpdatePipeline() + + # — 2) Volume render setup — + view = GetActiveViewOrCreate('RenderView') + view.BackgroundColorMode = 'Single Color' + view.Background = [1, 1, 1] + + disp = Show(reader, view) + disp.SetRepresentationType('Volume') + disp.ColorArrayName = ['POINTS', 'ImageFile'] + view.ResetCamera() + + # — 3) Transfer functions from extracted GS state — + ctf = GetColorTransferFunction('ImageFile') + ctf.ColorSpace = 'RGB' + ctf.NumberOfTableValues = 1024 + ctf.RGBPoints = [ + 0.000, 0.780, 0.522, 0.000, + 37.564, 0.847, 0.565, 0.000, + 61.402, 0.796, 0.757, 0.722, + 88.853, 0.753, 0.753, 0.753, + 118.470, 0.804, 0.737, 0.694, + 129.306, 0.686, 0.357, 0.047, + 156.756, 0.678, 0.345, 0.024, + 239.108, 0.667, 0.333, 0.000, + 255.000, 0.706, 0.016, 0.149 + ] + + otf = GetOpacityTransferFunction('ImageFile') + otf.Points = [ + 0.000, 0.000, 0.5, 0.0, + 32.507, 0.000, 0.5, 0.0, + 32.507, 0.360, 0.5, 0.0, + 39.731, 0.455, 0.5, 0.0, + 41.176, 0.000, 0.5, 0.0, + 63.569, 0.000, 0.5, 0.0, + 63.569, 0.511, 0.5, 0.0, + 89.575, 0.412, 0.5, 0.0, + 100.411, 0.000, 0.5, 0.0, + 163.980, 0.002, 0.5, 0.0, + 163.980, 0.567, 0.5, 0.0, + 231.161, 0.649, 0.5, 0.0, + 241.275, 0.433, 0.5, 0.0, + 255.000, 1.000, 0.5, 0.0 + ] + + disp.LookupTable = ctf + disp.ScalarOpacityFunction = otf + + # — 4) Camera & save — + cam = view.GetActiveCamera() + cam.SetPosition(400, 350, 450) + cam.SetFocalPoint(128, 128, 128) + cam.SetViewUp(1, 0, 0) + view.ResetCamera() + cam.Elevation(15) + cam.Azimuth(30) + cam.Zoom(1.0) + + view.StillRender() + SaveState(state) + print(f"[✔] Saved gold‑standard PVSM:\n {state}") + +if __name__ == '__main__': + create_bonsai_visualization() diff --git a/sci_volume_data/bonsai/GS/gs_diagonal_view.png 
b/sci_volume_data/bonsai/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..f1d944ae5214517a458ecb2fab9ac68a379a27c0 --- /dev/null +++ b/sci_volume_data/bonsai/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5deab399f55098d4f79d2ca799513e730e0d80cd5954c6d4b36bcdb1553fd3a +size 589653 diff --git a/sci_volume_data/bonsai/GS/gs_front_view.png b/sci_volume_data/bonsai/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..f118f43bd6943bfb55c2146b8d19e96b28cdcc3c --- /dev/null +++ b/sci_volume_data/bonsai/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:205de6ffecfe0ac97b9a05cb4e0e48b542e05a622d8e31766fe929617ad39097 +size 350780 diff --git a/sci_volume_data/bonsai/GS/gs_side_view.png b/sci_volume_data/bonsai/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..1f13b925b7fce310f9a887690a832baae1c82ecb --- /dev/null +++ b/sci_volume_data/bonsai/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba458a49d0e1bba5fa6267b5582e314be755a633892a18d9d61bcec781c4dbaf +size 477066 diff --git a/sci_volume_data/bonsai/data/bonsai.txt b/sci_volume_data/bonsai/data/bonsai.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ef31f8c878dc6e236961ed31adf054b06567c5e --- /dev/null +++ b/sci_volume_data/bonsai/data/bonsai.txt @@ -0,0 +1,6 @@ +Bonsai +Description: CT scan of a bonsai tree. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 diff --git a/sci_volume_data/bonsai/task_description.txt b/sci_volume_data/bonsai/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..8bc02ce68c9db6da2fd913d07d3a9704053cc5c1 --- /dev/null +++ b/sci_volume_data/bonsai/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Bonsai dataset from "bonsai/data/bonsai_256x256x256_uint8.raw", the information about this dataset: +Bonsai +Description: CT scan of a bonsai tree. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "bonsai/results/bonsai.pvsm" \ No newline at end of file diff --git a/sci_volume_data/bonsai/visualization_goals.txt b/sci_volume_data/bonsai/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..14bcb32e65230dacfcfa5f6bbc229aea8d60d10c --- /dev/null +++ b/sci_volume_data/bonsai/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Bonsai dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? 
\ No newline at end of file diff --git a/sci_volume_data/boston_teapot/data/boston_teapot.txt b/sci_volume_data/boston_teapot/data/boston_teapot.txt new file mode 100644 index 0000000000000000000000000000000000000000..7a595ea6c0e9c37eee2a9bdac4beb3226d03678d --- /dev/null +++ b/sci_volume_data/boston_teapot/data/boston_teapot.txt @@ -0,0 +1,6 @@ +Boston Teapot +Description: CT scan of the SIGGRAPH 1989 teapot with a small version of the AVS lobster inside. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x178 diff --git a/sci_volume_data/boston_teapot/task_description.txt b/sci_volume_data/boston_teapot/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..11055db9cd52faf6f8dfd27d9e14c34aa885d94c --- /dev/null +++ b/sci_volume_data/boston_teapot/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Boston Teapot dataset from "boston_teapot/data/boston_teapot_256x256x178_uint8.raw", the information about this dataset: +Boston Teapot +Description: CT scan of the SIGGRAPH 1989 teapot with a small version of the AVS lobster inside. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x178 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "boston_teapot/results/boston_teapot.pvsm" \ No newline at end of file diff --git a/sci_volume_data/boston_teapot/visualization_goals.txt b/sci_volume_data/boston_teapot/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..c1494babff4135ef314fb8df896c80133cdf7f65 --- /dev/null +++ b/sci_volume_data/boston_teapot/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Boston Teapot dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/bunny/GS/bunny_gs.pvsm b/sci_volume_data/bunny/GS/bunny_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..8982c7969c1351183084c739b9a7a2ff241defee --- /dev/null +++ b/sci_volume_data/bunny/GS/bunny_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc66c089856340ceebd85d5ff48f60e5ae1a57ad1fb5ef1e75f11614d38dafe0 +size 213948 diff --git a/sci_volume_data/bunny/data/bunny.txt b/sci_volume_data/bunny/data/bunny.txt new file mode 100644 index 0000000000000000000000000000000000000000..01bf14f2c154a8dd99655628b5f6917f84cd347d --- /dev/null +++ b/sci_volume_data/bunny/data/bunny.txt @@ -0,0 +1,6 @@ +Bunny +Description: A CT scan of the Stanford Bunny. The greyscale units are Hounsfield units, denoting electron-density of the subject; the scale units are in millimeters. The scan was completed 28 January 2000. 
+Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.337891x0.337891x0.5 +Data Extent: 512x512x361 diff --git a/sci_volume_data/bunny/task_description.txt b/sci_volume_data/bunny/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..c4dfb810965c1ef3e1de5a8bcd8f55de9436a4d0 --- /dev/null +++ b/sci_volume_data/bunny/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Bunny dataset from "bunny/data/bunny_512x512x361_uint16.raw", the information about this dataset: +Bunny +Description: A CT scan of the Stanford Bunny. The greyscale units are Hounsfield units, denoting electron-density of the subject; the scale units are in millimeters. The scan was completed 28 January 2000. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.337891x0.337891x0.5 +Data Extent: 512x512x361 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "bunny/results/bunny.pvsm" \ No newline at end of file diff --git a/sci_volume_data/bunny/visualization_goals.txt b/sci_volume_data/bunny/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..42a5e743e92807ef69da146812c990591e7f7eeb --- /dev/null +++ b/sci_volume_data/bunny/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Bunny dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. 
Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/carp/GS/carp_gs.pvsm b/sci_volume_data/carp/GS/carp_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..4c079fbbfc8752f71469aa976af86db07a6d4c2f --- /dev/null +++ b/sci_volume_data/carp/GS/carp_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e9a7688df6a761a4e96d796eaf1afacf9bfc08c97d0897eace22139a516d1ea +size 227345 diff --git a/sci_volume_data/carp/data/carp.txt b/sci_volume_data/carp/data/carp.txt new file mode 100644 index 0000000000000000000000000000000000000000..b4b1ea152dd44d89e49fcdb79c5a9c25bc6ae624 --- /dev/null +++ b/sci_volume_data/carp/data/carp.txt @@ -0,0 +1,6 @@ +Carp +Description: CT scan of a carp fish +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.78125x0.390625x1 +Data Extent: 256x256x512 diff --git a/sci_volume_data/carp/task_description.txt b/sci_volume_data/carp/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..24a494a1e87dc7ca78ba1bf9354d73cc6ba31aa1 --- /dev/null +++ b/sci_volume_data/carp/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Carp dataset from "carp/data/carp_256x256x512_uint16.raw", the information about this dataset: +Carp +Description: CT scan of a carp fish +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.78125x0.390625x1 +Data Extent: 256x256x512 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "carp/results/carp.pvsm" \ No newline at end of file diff --git a/sci_volume_data/carp/visualization_goals.txt b/sci_volume_data/carp/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb7e9fb7ff5e9af41b2a3b70b04ec5114b7ab912 --- /dev/null +++ b/sci_volume_data/carp/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Carp dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/christmas_tree/data/christmas_tree.txt b/sci_volume_data/christmas_tree/data/christmas_tree.txt new file mode 100644 index 0000000000000000000000000000000000000000..74537df97b34c17cda4cd5980c53b9dce2de585e --- /dev/null +++ b/sci_volume_data/christmas_tree/data/christmas_tree.txt @@ -0,0 +1,6 @@ +Christmas Tree +Description: The Christmas tree model was scanned with a Siemens Somatom Plus 4 Volume Zoom Multislice-CT scanner at the general hospital in Vienna. 
+Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 512x499x512 diff --git a/sci_volume_data/christmas_tree/task_description.txt b/sci_volume_data/christmas_tree/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..c9e2704b3d19eead47fbc2a7b5459dbd1c0fa9db --- /dev/null +++ b/sci_volume_data/christmas_tree/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Christmas Tree dataset from "christmas_tree/data/christmas_tree_512x499x512_uint16.raw", the information about this dataset: +Christmas Tree +Description: The Christmas tree model was scanned with a Siemens Somatom Plus 4 Volume Zoom Multislice-CT scanner at the general hospital in Vienna. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 512x499x512 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "christmas_tree/results/christmas_tree.pvsm" \ No newline at end of file diff --git a/sci_volume_data/christmas_tree/visualization_goals.txt b/sci_volume_data/christmas_tree/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..deaf42c27890ffdfab86c1dcab1b5a9a51ea3952 --- /dev/null +++ b/sci_volume_data/christmas_tree/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Christmas Tree dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. 
Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/csafe_heptane/data/csafe_heptane.txt b/sci_volume_data/csafe_heptane/data/csafe_heptane.txt new file mode 100644 index 0000000000000000000000000000000000000000..49f0cbc243c6701ff11fd9a562b01eabb8c1e651 --- /dev/null +++ b/sci_volume_data/csafe_heptane/data/csafe_heptane.txt @@ -0,0 +1,6 @@ +CSAFE Heptane Gas +Description: A single time step from a computational simulation of a jet of heptane gas undergoing combustion. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 302x302x302 diff --git a/sci_volume_data/csafe_heptane/task_description.txt b/sci_volume_data/csafe_heptane/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..dd2b2c1f8f9d3bf9ac75941a7adf4b831ee67fc0 --- /dev/null +++ b/sci_volume_data/csafe_heptane/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the CSAFE Heptane Gas dataset from "csafe_heptane/data/csafe_heptane_302x302x302_uint8.raw", the information about this dataset: +CSAFE Heptane Gas +Description: A single time step from a computational simulation of a jet of heptane gas undergoing combustion. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 302x302x302 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize the flow field. Apply volume rendering with a suitable color map to show the data distribution. Add streamlines or vectors if vector data is available. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "csafe_heptane/results/csafe_heptane.pvsm" \ No newline at end of file diff --git a/sci_volume_data/csafe_heptane/visualization_goals.txt b/sci_volume_data/csafe_heptane/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..4cf522b2a80fd2dd7624bdb1732777d7d78dd6bf --- /dev/null +++ b/sci_volume_data/csafe_heptane/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result show the flow patterns or simulation dynamics in the CSAFE Heptane Gas dataset? + +2. Does the volume rendering effectively show the data distribution and gradients? + +3. Are flow features (vortices, boundaries, etc.) clearly visible? + +4. Is the color map appropriate for the physical quantity being visualized? \ No newline at end of file diff --git a/sci_volume_data/duct/data/duct.txt b/sci_volume_data/duct/data/duct.txt new file mode 100644 index 0000000000000000000000000000000000000000..1c70d9092ec2c91dc5b05d6337adfc6d0b483946 --- /dev/null +++ b/sci_volume_data/duct/data/duct.txt @@ -0,0 +1,6 @@ +Duct Flow +Description: A wall-bounded flow in a duct. +Data Type: float32 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 193x194x1000 diff --git a/sci_volume_data/duct/task_description.txt b/sci_volume_data/duct/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..379a85ad8d2729c5010eeff0615d265a03336dfb --- /dev/null +++ b/sci_volume_data/duct/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Duct Flow dataset from "duct/data/duct_193x194x1000_float32.raw", the information about this dataset: +Duct Flow +Description: A wall-bounded flow in a duct. +Data Type: float32 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 193x194x1000 +Data loading is very important, make sure you correctly load the dataset according to their features. 
+ +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "duct/results/duct.pvsm" \ No newline at end of file diff --git a/sci_volume_data/duct/visualization_goals.txt b/sci_volume_data/duct/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c6a0902c40d1d16311667838ae76293812a0316 --- /dev/null +++ b/sci_volume_data/duct/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Duct Flow dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/engine/data/engine.txt b/sci_volume_data/engine/data/engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..dfbab36397f969c83158779f470b1ad23177da0e --- /dev/null +++ b/sci_volume_data/engine/data/engine.txt @@ -0,0 +1,6 @@ +Engine +Description: CT scan of two cylinders of an engine block. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x128 diff --git a/sci_volume_data/engine/task_description.txt b/sci_volume_data/engine/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..c460c8f614789d69fb498ecacb0dd1e1baed0e20 --- /dev/null +++ b/sci_volume_data/engine/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Engine dataset from "engine/data/engine_256x256x128_uint8.raw", the information about this dataset: +Engine +Description: CT scan of two cylinders of an engine block. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x128 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "engine/results/engine.pvsm" \ No newline at end of file diff --git a/sci_volume_data/engine/visualization_goals.txt b/sci_volume_data/engine/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..695dee084ba4cb478a5b5736c03e54e1bfa83bf3 --- /dev/null +++ b/sci_volume_data/engine/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Engine dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? 
\ No newline at end of file diff --git a/sci_volume_data/foot/GS/foot_gs.pvsm b/sci_volume_data/foot/GS/foot_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..fad94d27ba83ba6ac677a7b39b5223dcdb70c555 --- /dev/null +++ b/sci_volume_data/foot/GS/foot_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88050ba5e71246a18bb57b1b918d85a674cdfc0f5fcc221d60c9d4a727e1885d +size 212930 diff --git a/sci_volume_data/foot/data/foot.txt b/sci_volume_data/foot/data/foot.txt new file mode 100644 index 0000000000000000000000000000000000000000..73e39fe1e35957500020ce860fea749f75ffe193 --- /dev/null +++ b/sci_volume_data/foot/data/foot.txt @@ -0,0 +1,6 @@ +Foot +Description: Rotational C-arm x-ray scan of a human foot. Tissue and bone are present in the dataset. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 diff --git a/sci_volume_data/foot/task_description.txt b/sci_volume_data/foot/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..76a0a96f881308a948d6cd9debbb7da9a6bc37c8 --- /dev/null +++ b/sci_volume_data/foot/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Foot dataset from "foot/data/foot_256x256x256_uint8.raw", the information about this dataset: +Foot +Description: Rotational C-arm x-ray scan of a human foot. Tissue and bone are present in the dataset. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "foot/results/foot.pvsm" \ No newline at end of file diff --git a/sci_volume_data/foot/visualization_goals.txt b/sci_volume_data/foot/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..9eff7e89aabb74534582b7724e670dbead6e5070 --- /dev/null +++ b/sci_volume_data/foot/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Foot dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/frog/data/frog.txt b/sci_volume_data/frog/data/frog.txt new file mode 100644 index 0000000000000000000000000000000000000000..e16558a68c12dec3529fcdddfed88a2866e5b469 --- /dev/null +++ b/sci_volume_data/frog/data/frog.txt @@ -0,0 +1,6 @@ +Frog +Description: MRI scan of a frog as part of the Whole Frog Project. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 0.5x0.5x1 +Data Extent: 256x256x44 diff --git a/sci_volume_data/frog/task_description.txt b/sci_volume_data/frog/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..56dc6e42b15978bc07f9c2c297e6586bbe3310c4 --- /dev/null +++ b/sci_volume_data/frog/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Frog dataset from "frog/data/frog_256x256x44_uint8.raw", the information about this dataset: +Frog +Description: MRI scan of a frog as part of the Whole Frog Project. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 0.5x0.5x1 +Data Extent: 256x256x44 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. 
Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "frog/results/frog.pvsm" \ No newline at end of file diff --git a/sci_volume_data/frog/visualization_goals.txt b/sci_volume_data/frog/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82a4edf097f148f683f89cc594f7a84da3fdf05 --- /dev/null +++ b/sci_volume_data/frog/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Frog dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/fuel/data/fuel.txt b/sci_volume_data/fuel/data/fuel.txt new file mode 100644 index 0000000000000000000000000000000000000000..874a29e580f071bae3e8c8c34a763735dc4dd125 --- /dev/null +++ b/sci_volume_data/fuel/data/fuel.txt @@ -0,0 +1,6 @@ +Fuel +Description: Simulation of fuel injection into a combustion chamber. The higher the density value, the less presence of air. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 64x64x64 diff --git a/sci_volume_data/fuel/task_description.txt b/sci_volume_data/fuel/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..7fd706fad9f413a1089bd6691a461e3eaba95be1 --- /dev/null +++ b/sci_volume_data/fuel/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Fuel dataset from "fuel/data/fuel_64x64x64_uint8.raw", the information about this dataset: +Fuel +Description: Simulation of fuel injection into a combustion chamber. The higher the density value, the less presence of air. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 64x64x64 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "fuel/results/fuel.pvsm" \ No newline at end of file diff --git a/sci_volume_data/fuel/visualization_goals.txt b/sci_volume_data/fuel/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..747670d2988a1110f06a73e1455b8e19e5f1a409 --- /dev/null +++ b/sci_volume_data/fuel/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Fuel dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? 
\ No newline at end of file diff --git a/sci_volume_data/hydrogen_atom/data/hydrogen_atom.txt b/sci_volume_data/hydrogen_atom/data/hydrogen_atom.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ab1c0a471b6b1d9a0eef41d2932d2b9a5fa2ced --- /dev/null +++ b/sci_volume_data/hydrogen_atom/data/hydrogen_atom.txt @@ -0,0 +1,6 @@ +Hydrogen Atom +Description: Simulation of the spatial probability distribution of the electron in a hydrogen atom, residing in a strong magnetic field. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 128x128x128 diff --git a/sci_volume_data/hydrogen_atom/task_description.txt b/sci_volume_data/hydrogen_atom/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..f0e94535bbc03d234ef5e142d2357e9377be58ce --- /dev/null +++ b/sci_volume_data/hydrogen_atom/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Hydrogen Atom dataset from "hydrogen_atom/data/hydrogen_atom_128x128x128_uint8.raw", the information about this dataset: +Hydrogen Atom +Description: Simulation of the spatial probability distribution of the electron in a hydrogen atom, residing in a strong magnetic field. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 128x128x128 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "hydrogen_atom/results/hydrogen_atom.pvsm" \ No newline at end of file diff --git a/sci_volume_data/hydrogen_atom/visualization_goals.txt b/sci_volume_data/hydrogen_atom/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..a241a1fa559cf3bdb6e63bc8d4398bde9f6b192b --- /dev/null +++ b/sci_volume_data/hydrogen_atom/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Hydrogen Atom dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/lobster/GS/lobster_gs.pvsm b/sci_volume_data/lobster/GS/lobster_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..91d7f6a004da21bdb4233367fda3648437e38edf --- /dev/null +++ b/sci_volume_data/lobster/GS/lobster_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b8b412fed393fdc742b93f3683a831c57ca4cbd21fdbd45a1d0c54930a4140c +size 227008 diff --git a/sci_volume_data/lobster/data/lobster.txt b/sci_volume_data/lobster/data/lobster.txt new file mode 100644 index 0000000000000000000000000000000000000000..a6c6fecfd29fc795bb3974fd97d3e58dd74db614 --- /dev/null +++ b/sci_volume_data/lobster/data/lobster.txt @@ -0,0 +1,6 @@ +Lobster +Description: CT scan of a lobster contained in a block of resin. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1.4 +Data Extent: 301x324x56 diff --git a/sci_volume_data/lobster/task_description.txt b/sci_volume_data/lobster/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..3d94f5451e3c286430bd31ee42461ceedb748408 --- /dev/null +++ b/sci_volume_data/lobster/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Lobster dataset from "lobster/data/lobster_301x324x56_uint8.raw", the information about this dataset: +Lobster +Description: CT scan of a lobster contained in a block of resin. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1.4 +Data Extent: 301x324x56 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "lobster/results/lobster.pvsm" \ No newline at end of file diff --git a/sci_volume_data/lobster/visualization_goals.txt b/sci_volume_data/lobster/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..5d61b9e0874a67ad98fd489dd4ff370f73180a2e --- /dev/null +++ b/sci_volume_data/lobster/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Lobster dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? 
\ No newline at end of file diff --git a/sci_volume_data/marmoset_neurons/data/marmoset_neurons.txt b/sci_volume_data/marmoset_neurons/data/marmoset_neurons.txt new file mode 100644 index 0000000000000000000000000000000000000000..6a88ba615c65d6bdceba2fb99b0644f83b54f349 --- /dev/null +++ b/sci_volume_data/marmoset_neurons/data/marmoset_neurons.txt @@ -0,0 +1,6 @@ +Neurons in Marmoset Visual Cortex +Description: Pyramidal neurons in the marmoset primary visual cortex (V1) labeled with green fluorescent protein (GFP) after injection of a pseudotyped G-deleted rabies virus in area V2. The tissue was cleared using the Sca/e technique and imaged on an Olympus 2-photon microscope at 20x magnification. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 0.497x0.497x1.5 +Data Extent: 1024x1024x314 diff --git a/sci_volume_data/marmoset_neurons/task_description.txt b/sci_volume_data/marmoset_neurons/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..914e60c93d93a4b10eec68fbd52f9e05a61e0af9 --- /dev/null +++ b/sci_volume_data/marmoset_neurons/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Neurons in Marmoset Visual Cortex dataset from "marmoset_neurons/data/marmoset_neurons_1024x1024x314_uint8.raw", the information about this dataset: +Neurons in Marmoset Visual Cortex +Description: Pyramidal neurons in the marmoset primary visual cortex (V1) labeled with green fluorescent protein (GFP) after injection of a pseudotyped G-deleted rabies virus in area V2. The tissue was cleared using the Sca/e technique and imaged on an Olympus 2-photon microscope at 20x magnification. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 0.497x0.497x1.5 +Data Extent: 1024x1024x314 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. 
Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "marmoset_neurons/results/marmoset_neurons.pvsm" \ No newline at end of file diff --git a/sci_volume_data/marmoset_neurons/visualization_goals.txt b/sci_volume_data/marmoset_neurons/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..cf5342b0c4031ff9c2339279093f56a184cb55fd --- /dev/null +++ b/sci_volume_data/marmoset_neurons/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Neurons in Marmoset Visual Cortex dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/marschner_lobb/data/marschner_lobb.txt b/sci_volume_data/marschner_lobb/data/marschner_lobb.txt new file mode 100644 index 0000000000000000000000000000000000000000..8347f0e8277ef6cfc192b289ac00798ee1b7f982 --- /dev/null +++ b/sci_volume_data/marschner_lobb/data/marschner_lobb.txt @@ -0,0 +1,6 @@ +Marschner-Lobb +Description: High frequencies where 99% of the sinusoids are right below the Nyquist frequency. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 41x41x41 diff --git a/sci_volume_data/marschner_lobb/task_description.txt b/sci_volume_data/marschner_lobb/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..f6347d610d825672b227d7e023b166dd44f2689f --- /dev/null +++ b/sci_volume_data/marschner_lobb/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the Marschner-Lobb dataset from "marschner_lobb/data/marschner_lobb_41x41x41_uint8.raw", the information about this dataset: +Marschner-Lobb +Description: High frequencies where 99% of the sinusoids are right below the Nyquist frequency. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 41x41x41 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it using appropriate techniques: +1. Apply volume rendering with a suitable transfer function to reveal internal structures +2. Extract at least one meaningful isosurface +3. Choose appropriate colors and opacity values for clarity + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "marschner_lobb/results/marschner_lobb.pvsm" \ No newline at end of file diff --git a/sci_volume_data/marschner_lobb/visualization_goals.txt b/sci_volume_data/marschner_lobb/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b30ed745d83e1b68b46296fcf9349cb0cd0824c --- /dev/null +++ b/sci_volume_data/marschner_lobb/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the key features of the Marschner-Lobb dataset? + +2. Does the volume rendering provide good insight into the internal structure? + +3. Are the isosurfaces placed at meaningful values? + +4. Is the overall visualization clear and informative? 
\ No newline at end of file diff --git a/sci_volume_data/mri_ventricles/data/mri_ventricles.txt b/sci_volume_data/mri_ventricles/data/mri_ventricles.txt new file mode 100644 index 0000000000000000000000000000000000000000..44f2cf0d81136d79b425d26f0b9d1096d2dde2c3 --- /dev/null +++ b/sci_volume_data/mri_ventricles/data/mri_ventricles.txt @@ -0,0 +1,6 @@ +Head MRI CISS +Description: 1.5T MRT 3D CISS dataset of a human head that highlights the CSF (Cerebro-Spinal-Fluid) filled cavities of the head. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 0.9x0.9x0.9 +Data Extent: 256x256x124 diff --git a/sci_volume_data/mri_ventricles/task_description.txt b/sci_volume_data/mri_ventricles/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..af22729a70523faa6395507febc9e8da712365eb --- /dev/null +++ b/sci_volume_data/mri_ventricles/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Head MRI CISS dataset from "mri_ventricles/data/mri_ventricles_256x256x124_uint8.raw", the information about this dataset: +Head MRI CISS +Description: 1.5T MRT 3D CISS dataset of a human head that highlights the CSF (Cerebro-Spinal-Fluid) filled cavities of the head. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 0.9x0.9x0.9 +Data Extent: 256x256x124 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize the flow field. Apply volume rendering with a suitable color map to show the data distribution. Add streamlines or vectors if vector data is available. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "mri_ventricles/results/mri_ventricles.pvsm" \ No newline at end of file diff --git a/sci_volume_data/mri_ventricles/visualization_goals.txt b/sci_volume_data/mri_ventricles/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f59ff12b1edd8719d0d8f780783deba14ed1d9f --- /dev/null +++ b/sci_volume_data/mri_ventricles/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result show the flow patterns or simulation dynamics in the Head MRI CISS dataset? + +2. Does the volume rendering effectively show the data distribution and gradients? + +3. Are flow features (vortices, boundaries, etc.) clearly visible? + +4. Is the color map appropriate for the physical quantity being visualized? \ No newline at end of file diff --git a/sci_volume_data/mri_woman/data/mri_woman.txt b/sci_volume_data/mri_woman/data/mri_woman.txt new file mode 100644 index 0000000000000000000000000000000000000000..efcd7a9afcaca4b2229737b67860297a98411f3e --- /dev/null +++ b/sci_volume_data/mri_woman/data/mri_woman.txt @@ -0,0 +1,6 @@ +MRI Woman +Description: MRI scan of a woman's head +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 1x1x1.5 +Data Extent: 256x256x109 diff --git a/sci_volume_data/mri_woman/task_description.txt b/sci_volume_data/mri_woman/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..013397462455eda4d818ee4f09a318d519b317a1 --- /dev/null +++ b/sci_volume_data/mri_woman/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the MRI Woman dataset from "mri_woman/data/mri_woman_256x256x109_uint16.raw", the information about this dataset: +MRI Woman +Description: MRI scan of a woman's head +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 1x1x1.5 +Data Extent: 256x256x109 +Data loading is very important, make sure you correctly load the dataset according to their features. 
+ +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "mri_woman/results/mri_woman.pvsm" \ No newline at end of file diff --git a/sci_volume_data/mri_woman/visualization_goals.txt b/sci_volume_data/mri_woman/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..4224403a468db58cfe77ea970055bc6f162ad9dc --- /dev/null +++ b/sci_volume_data/mri_woman/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the MRI Woman dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/mrt_angio/data/mrt_angio.txt b/sci_volume_data/mrt_angio/data/mrt_angio.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4a3d3370c0275a463efd730271cbfc411ab3c3e --- /dev/null +++ b/sci_volume_data/mrt_angio/data/mrt_angio.txt @@ -0,0 +1,6 @@ +Head MRT Angiography +Description: 3T MRT Time-of-Flight Angiography dataset of a human head. The dataset has been resampled into an isotropic voxel grid (hence the peculiar slice size). 
+Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.412x0.412x0.412 +Data Extent: 416x512x112 diff --git a/sci_volume_data/mrt_angio/task_description.txt b/sci_volume_data/mrt_angio/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..68bdc8fbd3a51c6fe3501dedd85143adb047f87d --- /dev/null +++ b/sci_volume_data/mrt_angio/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the Head MRT Angiography dataset from "mrt_angio/data/mrt_angio_416x512x112_uint16.raw", the information about this dataset: +Head MRT Angiography +Description: 3T MRT Time-of-Flight Angiography dataset of a human head. The dataset has been resampled into an isotropic voxel grid (hence the peculiar slice size). +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.412x0.412x0.412 +Data Extent: 416x512x112 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it using appropriate techniques: +1. Apply volume rendering with a suitable transfer function to reveal internal structures +2. Extract at least one meaningful isosurface +3. Choose appropriate colors and opacity values for clarity + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "mrt_angio/results/mrt_angio.pvsm" \ No newline at end of file diff --git a/sci_volume_data/mrt_angio/visualization_goals.txt b/sci_volume_data/mrt_angio/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e48ca05de8af4f04f065e3622973731cbe0a2d6 --- /dev/null +++ b/sci_volume_data/mrt_angio/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the key features of the Head MRT Angiography dataset? + +2. Does the volume rendering provide good insight into the internal structure? + +3. Are the isosurfaces placed at meaningful values? + +4. 
Is the overall visualization clear and informative? \ No newline at end of file diff --git a/sci_volume_data/neghip/data/neghip.txt b/sci_volume_data/neghip/data/neghip.txt new file mode 100644 index 0000000000000000000000000000000000000000..3d199ca4994235ec3f2b7263a8da7c871ee6b799 --- /dev/null +++ b/sci_volume_data/neghip/data/neghip.txt @@ -0,0 +1,6 @@ +Neghip +Description: Simulation of the spatial probability distribution of the electrons in a high potential protein molecule. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 64x64x64 diff --git a/sci_volume_data/neghip/task_description.txt b/sci_volume_data/neghip/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..5880ccc6c9b320a768a1c8bd9b6c88d58901cdfd --- /dev/null +++ b/sci_volume_data/neghip/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Neghip dataset from "neghip/data/neghip_64x64x64_uint8.raw", the information about this dataset: +Neghip +Description: Simulation of the spatial probability distribution of the electrons in a high potential protein molecule. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 64x64x64 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for low electron probability (lower isovalue, color: red, opacity: 0.3) and another for high electron probability (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "neghip/results/neghip.pvsm" \ No newline at end of file diff --git a/sci_volume_data/neghip/visualization_goals.txt b/sci_volume_data/neghip/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..074195efde3fdd93fab03dbc9fa1fee594c7a54c --- /dev/null +++ b/sci_volume_data/neghip/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the electron probability structures in the Neghip dataset? + +2. Does the visualization clearly distinguish between different probability or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for this scientific dataset? \ No newline at end of file diff --git a/sci_volume_data/neocortical_layer_1_axons/data/neocortical_layer_1_axons.txt b/sci_volume_data/neocortical_layer_1_axons/data/neocortical_layer_1_axons.txt new file mode 100644 index 0000000000000000000000000000000000000000..fd1f3d09715218488004cd47db4ba85f6948563b --- /dev/null +++ b/sci_volume_data/neocortical_layer_1_axons/data/neocortical_layer_1_axons.txt @@ -0,0 +1,6 @@ +Neocortical Layer 1 Axons +Description: Axons in layer 1 of the mouse barrel cortex imaged in vivo. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x3.4 +Data Extent: 1464x1033x76 diff --git a/sci_volume_data/neocortical_layer_1_axons/task_description.txt b/sci_volume_data/neocortical_layer_1_axons/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3e2834258318d9b82439efb0a5a4b52e5df7a6d --- /dev/null +++ b/sci_volume_data/neocortical_layer_1_axons/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the Neocortical Layer 1 Axons dataset from "neocortical_layer_1_axons/data/neocortical_layer_1_axons_1464x1033x76_uint8.raw", the information about this dataset: +Neocortical Layer 1 Axons +Description: Axons in layer 1 of the mouse barrel cortex imaged in vivo. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x3.4 +Data Extent: 1464x1033x76 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it using appropriate techniques: +1. Apply volume rendering with a suitable transfer function to reveal internal structures +2. Extract at least one meaningful isosurface +3. Choose appropriate colors and opacity values for clarity + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "neocortical_layer_1_axons/results/neocortical_layer_1_axons.pvsm" \ No newline at end of file diff --git a/sci_volume_data/neocortical_layer_1_axons/visualization_goals.txt b/sci_volume_data/neocortical_layer_1_axons/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae2dc6438e7ce44a93151aeed2bc292128152fad --- /dev/null +++ b/sci_volume_data/neocortical_layer_1_axons/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the key features of the Neocortical Layer 1 Axons dataset? + +2. Does the volume rendering provide good insight into the internal structure? + +3. 
Are the isosurfaces placed at meaningful values? + +4. Is the overall visualization clear and informative? \ No newline at end of file diff --git a/sci_volume_data/nucleon/data/nucleon.txt b/sci_volume_data/nucleon/data/nucleon.txt new file mode 100644 index 0000000000000000000000000000000000000000..a6609f024cb70dcecfcda48299462b7f1dbb712c --- /dev/null +++ b/sci_volume_data/nucleon/data/nucleon.txt @@ -0,0 +1,6 @@ +Nucleon +Description: Simulation of the two-body distribution probability of a nucleon in the atomic nucleus 16O if a second nucleon is known to be positioned at r'=(2 fm,0,0). +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 41x41x41 diff --git a/sci_volume_data/nucleon/task_description.txt b/sci_volume_data/nucleon/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..60b92deb79a90317d1a3cc6a013e9b576688ec7b --- /dev/null +++ b/sci_volume_data/nucleon/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Nucleon dataset from "nucleon/data/nucleon_41x41x41_uint8.raw", the information about this dataset: +Nucleon +Description: Simulation of the two-body distribution probability of a nucleon in the atomic nucleus 16O if a second nucleon is known to be positioned at r'=(2 fm,0,0). +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 41x41x41 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize the flow field. Apply volume rendering with a suitable color map to show the data distribution. Add streamlines or vectors if vector data is available. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "nucleon/results/nucleon.pvsm" \ No newline at end of file diff --git a/sci_volume_data/nucleon/visualization_goals.txt b/sci_volume_data/nucleon/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..02f91f978d7fd4d764f6484ed2822db3f38c83da --- /dev/null +++ b/sci_volume_data/nucleon/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result show the flow patterns or simulation dynamics in the Nucleon dataset? + +2. Does the volume rendering effectively show the data distribution and gradients? + +3. Are flow features (vortices, boundaries, etc.) clearly visible? + +4. Is the color map appropriate for the physical quantity being visualized? \ No newline at end of file diff --git a/sci_volume_data/pancreas/data/pancreas.txt b/sci_volume_data/pancreas/data/pancreas.txt new file mode 100644 index 0000000000000000000000000000000000000000..8cdca818ca990681a5ff1d349cd0ffcbf84ced60 --- /dev/null +++ b/sci_volume_data/pancreas/data/pancreas.txt @@ -0,0 +1,6 @@ +Pancreas +Description: First scan. The National Institutes of Health Clinical Center performed 82 abdominal contrast enhanced 3D CT scans (~70 seconds after intravenous contrast injection in portal-venous) from 53 male and 27 female subjects. Seventeen of the subjects are healthy kidney donors scanned prior to nephrectomy. The remaining 65 patients were selected by a radiologist from patients who neither had major abdominal pathologies nor pancreatic cancer lesions. Subjects' ages range from 18 to 76 years with a mean age of 46.8 ± 16.7. The CT scans have resolutions of 512x512 pixels with varying pixel sizes and slice thickness between 1.5 - 2.5 mm, acquired on Philips and Siemens MDCT scanners (120 kVp tube voltage). A medical student manually performed slice-by-slice segmentations of the pancreas as ground-truth and these were verified/modified by an experienced radiologist. 
+Data Type: int16 +Data Byte Order: little Endian +Data Spacing: 1.16x1.0x1.0 +Data Extent: 240x512x512 diff --git a/sci_volume_data/pancreas/task_description.txt b/sci_volume_data/pancreas/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..1417310a9a0fda4923c1d349a4f9e3ad98bc28c8 --- /dev/null +++ b/sci_volume_data/pancreas/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Pancreas dataset from "pancreas/data/pancreas_240x512x512_int16.raw", the information about this dataset: +Pancreas +Description: First scan. The National Institutes of Health Clinical Center performed 82 abdominal contrast enhanced 3D CT scans (~70 seconds after intravenous contrast injection in portal-venous) from 53 male and 27 female subjects. Seventeen of the subjects are healthy kidney donors scanned prior to nephrectomy. The remaining 65 patients were selected by a radiologist from patients who neither had major abdominal pathologies nor pancreatic cancer lesions. Subjects' ages range from 18 to 76 years with a mean age of 46.8 ± 16.7. The CT scans have resolutions of 512x512 pixels with varying pixel sizes and slice thickness between 1.5 - 2.5 mm, acquired on Philips and Siemens MDCT scanners (120 kVp tube voltage). A medical student manually performed slice-by-slice segmentations of the pancreas as ground-truth and these were verified/modified by an experienced radiologist. +Data Type: int16 +Data Byte Order: little Endian +Data Spacing: 1.16x1.0x1.0 +Data Extent: 240x512x512 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
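Several of these volumes have anisotropic spacing (1.16x1.0x1.0 here, 1x1x3.4 or 1x1x4 elsewhere), so setting only the extent is not enough: without the spacing, the rendered geometry is squashed along the coarse axis. The physical length of each axis of a regular grid is `(samples - 1) * spacing`. A small illustrative helper (not part of the benchmark):

```python
# Physical extent of a regular grid along each axis: (samples - 1) * spacing.
def physical_size(extent, spacing):
    return tuple((n - 1) * s for n, s in zip(extent, spacing))

# Pancreas: 240x512x512 samples at 1.16x1.0x1.0 spacing -> ~277 x 511 x 511 units
x_len, y_len, z_len = physical_size((240, 512, 512), (1.16, 1.0, 1.0))
assert round(x_len, 2) == 277.24 and y_len == 511.0 and z_len == 511.0
```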
+ +Finally, save the paraview state as "pancreas/results/pancreas.pvsm" \ No newline at end of file diff --git a/sci_volume_data/pancreas/visualization_goals.txt b/sci_volume_data/pancreas/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7f73abf580ccc5c27effb168cfac23cb60f2b5b --- /dev/null +++ b/sci_volume_data/pancreas/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Pancreas dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/present/data/present.txt b/sci_volume_data/present/data/present.txt new file mode 100644 index 0000000000000000000000000000000000000000..357dcb1512e64381dfe1f15a1673cfd1a1c98b20 --- /dev/null +++ b/sci_volume_data/present/data/present.txt @@ -0,0 +1,6 @@ +Christmas Present +Description: An industrial CT scan of a Christmas present. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 492x492x442 diff --git a/sci_volume_data/present/task_description.txt b/sci_volume_data/present/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..2da0555d8d9c31ff8da4848ecdda7d93715ac63a --- /dev/null +++ b/sci_volume_data/present/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Christmas Present dataset from "present/data/present_492x492x442_uint16.raw", the information about this dataset: +Christmas Present +Description: An industrial CT scan of a Christmas present. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 492x492x442 +Data loading is very important, make sure you correctly load the dataset according to their features. 
+ +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for low-density materials (lower isovalue, color: red, opacity: 0.3) and another for dense materials (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "present/results/present.pvsm" \ No newline at end of file diff --git a/sci_volume_data/present/visualization_goals.txt b/sci_volume_data/present/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..f37c1bc1894df8cc341eab6a648256c8d7c6a1a2 --- /dev/null +++ b/sci_volume_data/present/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the internal objects in the Christmas Present dataset? + +2. Does the visualization clearly distinguish between different materials or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for industrial CT visualization? \ No newline at end of file diff --git a/sci_volume_data/prone/data/prone.txt b/sci_volume_data/prone/data/prone.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad62135f4a58efecab1b32fdeb913631c840e8d7 --- /dev/null +++ b/sci_volume_data/prone/data/prone.txt @@ -0,0 +1,6 @@ +Colon Prone +Description: CT scan of abdomen in prone orientation (back faces ceiling, belly faces table). 
+Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.625x0.625x1.0 +Data Extent: 512x512x463 diff --git a/sci_volume_data/prone/task_description.txt b/sci_volume_data/prone/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..b12ec409702f7c3c117fbcf1cdb37bee0a0944f9 --- /dev/null +++ b/sci_volume_data/prone/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Colon Prone dataset from "prone/data/prone_512x512x463_uint16.raw", the information about this dataset: +Colon Prone +Description: CT scan of abdomen in prone orientation (back faces ceiling, belly faces table). +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.625x0.625x1.0 +Data Extent: 512x512x463 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "prone/results/prone.pvsm" \ No newline at end of file diff --git a/sci_volume_data/prone/visualization_goals.txt b/sci_volume_data/prone/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..b41b93b43159b7078fe5a41700da39bb7aafce39 --- /dev/null +++ b/sci_volume_data/prone/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Colon Prone dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? 
\ No newline at end of file diff --git a/sci_volume_data/shockwave/data/shockwave.txt b/sci_volume_data/shockwave/data/shockwave.txt new file mode 100644 index 0000000000000000000000000000000000000000..f26af8c469750101a8689b0ddc44a69dd1c1441d --- /dev/null +++ b/sci_volume_data/shockwave/data/shockwave.txt @@ -0,0 +1,6 @@ +Shockwave +Description: Simulation of an unsteady interaction of a planar shockwave with a randomly-perturbed contact discontinuity. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 64x64x512 diff --git a/sci_volume_data/shockwave/task_description.txt b/sci_volume_data/shockwave/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..a46545bd13655c07b527ba858594b116e8697f0d --- /dev/null +++ b/sci_volume_data/shockwave/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Shockwave dataset from "shockwave/data/shockwave_64x64x512_uint8.raw", the information about this dataset: +Shockwave +Description: Simulation of an unsteady interaction of a planar shockwave with a randomly-perturbed contact discontinuity. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 64x64x512 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for the low-density region (lower isovalue, color: red, opacity: 0.3) and another for the high-density region (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "shockwave/results/shockwave.pvsm" \ No newline at end of file diff --git a/sci_volume_data/shockwave/visualization_goals.txt b/sci_volume_data/shockwave/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..62267ccb150ce62a7a6b99d9fc1d465a4e394ee8 --- /dev/null +++ b/sci_volume_data/shockwave/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the shock and interface structures in the Shockwave dataset? + +2. Does the visualization clearly distinguish between different density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for this simulation data? \ No newline at end of file diff --git a/sci_volume_data/silicium/data/silicium.txt b/sci_volume_data/silicium/data/silicium.txt new file mode 100644 index 0000000000000000000000000000000000000000..775cea1ec7f175287c2d60d5a050f16e0a514d47 --- /dev/null +++ b/sci_volume_data/silicium/data/silicium.txt @@ -0,0 +1,6 @@ +Silicium +Description: Simulation of a silicium grid. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 98x34x34 diff --git a/sci_volume_data/silicium/task_description.txt b/sci_volume_data/silicium/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..d821af2480ed64b0cc1abd3020aa0a1b7586dd93 --- /dev/null +++ b/sci_volume_data/silicium/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Silicium dataset from "silicium/data/silicium_98x34x34_uint8.raw", the information about this dataset: +Silicium +Description: Simulation of a silicium grid. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 98x34x34 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize the flow field. 
Apply volume rendering with a suitable color map to show the data distribution. Add streamlines or vectors if vector data is available. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "silicium/results/silicium.pvsm" \ No newline at end of file diff --git a/sci_volume_data/silicium/visualization_goals.txt b/sci_volume_data/silicium/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..8219bd260cb062b6042254b95841312f3d57f1a5 --- /dev/null +++ b/sci_volume_data/silicium/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result show the flow patterns or simulation dynamics in the Silicium dataset? + +2. Does the volume rendering effectively show the data distribution and gradients? + +3. Are flow features (vortices, boundaries, etc.) clearly visible? + +4. Is the color map appropriate for the physical quantity being visualized? \ No newline at end of file diff --git a/sci_volume_data/skull/data/skull.txt b/sci_volume_data/skull/data/skull.txt new file mode 100644 index 0000000000000000000000000000000000000000..974a93e3f2534c4a9d51c885e8e70d84b8d841d3 --- /dev/null +++ b/sci_volume_data/skull/data/skull.txt @@ -0,0 +1,6 @@ +Skull +Description: Rotational C-arm x-ray scan of phantom of a human skull. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 diff --git a/sci_volume_data/skull/task_description.txt b/sci_volume_data/skull/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..8565bcf4f7390e5413f845dfa832a7443d3bc8b9 --- /dev/null +++ b/sci_volume_data/skull/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Skull dataset from "skull/data/skull_256x256x256_uint8.raw", the information about this dataset: +Skull +Description: Rotational C-arm x-ray scan of phantom of a human skull. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "skull/results/skull.pvsm" \ No newline at end of file diff --git a/sci_volume_data/skull/visualization_goals.txt b/sci_volume_data/skull/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..3a03db23bd158fdef97588f69c6e739fbdb1606c --- /dev/null +++ b/sci_volume_data/skull/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Skull dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/statue_leg/data/statue_leg.txt b/sci_volume_data/statue_leg/data/statue_leg.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef1b05fe518e8f6364dfb25fd93d11201435a559 --- /dev/null +++ b/sci_volume_data/statue_leg/data/statue_leg.txt @@ -0,0 +1,6 @@ +Leg of Statue +Description: CT scan of a leg of a bronze statue. 
+Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x4 +Data Extent: 341x341x93 diff --git a/sci_volume_data/statue_leg/task_description.txt b/sci_volume_data/statue_leg/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..a06d2a01bf22e0c5429d4d5cc4d296408a5bff97 --- /dev/null +++ b/sci_volume_data/statue_leg/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Leg of Statue dataset from "statue_leg/data/statue_leg_341x341x93_uint8.raw", the information about this dataset: +Leg of Statue +Description: CT scan of a leg of a bronze statue. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x4 +Data Extent: 341x341x93 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for low-density material (lower isovalue, color: red, opacity: 0.3) and another for the dense bronze (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "statue_leg/results/statue_leg.pvsm" \ No newline at end of file diff --git a/sci_volume_data/statue_leg/visualization_goals.txt b/sci_volume_data/statue_leg/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..729bef9d086aff5d0a3a973d94d8e618ef02bf5e --- /dev/null +++ b/sci_volume_data/statue_leg/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the structural features in the Leg of Statue dataset? + +2. Does the visualization clearly distinguish between different materials or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for this CT visualization? 
\ No newline at end of file diff --git a/sci_volume_data/stent/data/stent.txt b/sci_volume_data/stent/data/stent.txt new file mode 100644 index 0000000000000000000000000000000000000000..1996cf3a985ab88772e2795a99efdd7d76d5c299 --- /dev/null +++ b/sci_volume_data/stent/data/stent.txt @@ -0,0 +1,6 @@ +Stented Abdominal Aorta +Description: CT Scan of the abdomen and pelvis. The dataset contains also a stent in the abdominal aorta. No contrast agent was used to enhance the blood vessels. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.8398x0.8398x3.2 +Data Extent: 512x512x174 diff --git a/sci_volume_data/stent/task_description.txt b/sci_volume_data/stent/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..7a71d5120e80df58f4449affc8311a106c0b1546 --- /dev/null +++ b/sci_volume_data/stent/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Stented Abdominal Aorta dataset from "stent/data/stent_512x512x174_uint16.raw", the information about this dataset: +Stented Abdominal Aorta +Description: CT Scan of the abdomen and pelvis. The dataset contains also a stent in the abdominal aorta. No contrast agent was used to enhance the blood vessels. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.8398x0.8398x3.2 +Data Extent: 512x512x174 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "stent/results/stent.pvsm" \ No newline at end of file diff --git a/sci_volume_data/stent/visualization_goals.txt b/sci_volume_data/stent/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..7577e368ac64ffb1f734c5f708aa722dd2262161 --- /dev/null +++ b/sci_volume_data/stent/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Stented Abdominal Aorta dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/supernova/.DS_Store b/sci_volume_data/supernova/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..fa801660695b481f75f1e5c148cf62f7e970b921 Binary files /dev/null and b/sci_volume_data/supernova/.DS_Store differ diff --git a/sci_volume_data/supernova/GS/gs_diagonal_view.png b/sci_volume_data/supernova/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..ff1bc21fd5de24cf82bddd16abd5a7d61e35caf9 --- /dev/null +++ b/sci_volume_data/supernova/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c09397358f712ef4da2a8ba31f3a6d474df7869c256fcc5d1231222338b1ef98 +size 278103 diff --git a/sci_volume_data/supernova/GS/gs_front_view.png b/sci_volume_data/supernova/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..5a6b3d0f57a415d9813416be17a3850f07cb9504 --- /dev/null +++ b/sci_volume_data/supernova/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4f79f852d76d0267e09517687c86b7ce4892b632ccdd078f524fd10bda8d796 +size 274309 diff --git 
a/sci_volume_data/supernova/GS/gs_side_view.png b/sci_volume_data/supernova/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..440622d048185791b043fd0fc9eff02d19a13b28 --- /dev/null +++ b/sci_volume_data/supernova/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73d10622c11fa07dc2c30a9215e04cd0f7f32588fbbd3596baac96b0b44da3f1 +size 319932 diff --git a/sci_volume_data/supernova/GS/supernova_gs.pvsm b/sci_volume_data/supernova/GS/supernova_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..65805d0a47d99bcf7765134ede3d2cfbf9dce0a7 --- /dev/null +++ b/sci_volume_data/supernova/GS/supernova_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22c09e249d30b845f08876fc48c05f0f338929e7c8f7a8e438f142900fa0dc11 +size 475414 diff --git a/sci_volume_data/supernova/data/supernova.txt b/sci_volume_data/supernova/data/supernova.txt new file mode 100644 index 0000000000000000000000000000000000000000..d991143c70fde3810e58221ec6daac3dfded6aed --- /dev/null +++ b/sci_volume_data/supernova/data/supernova.txt @@ -0,0 +1,5 @@ +Supernova (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 \ No newline at end of file diff --git a/sci_volume_data/supernova/task_description.txt b/sci_volume_data/supernova/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1be0883c589229fdd424525d154eb03118913c2 --- /dev/null +++ b/sci_volume_data/supernova/task_description.txt @@ -0,0 +1,15 @@ +Task: + +Load the supernova dataset from "supernova/data/supernova_256x256x256_float32.raw", the information about this dataset: +Supernova (Scalar) +Data Scalar Type: float +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. 
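The `.raw` files carry no header: voxels are stored in the listed byte order with x varying fastest and z slowest. This stdlib-only round trip (a toy 4x3x2 extent with synthetic values, purely illustrative) shows how a flat file index maps back to an (x, y, z) voxel:

```python
# Write and re-read a tiny little-endian float32 volume to show the
# x-fastest / z-slowest layout used by headerless .raw files.
import array
import os
import sys
import tempfile

nx, ny, nz = 4, 3, 2                          # toy stand-in for 256x256x256
vox = array.array("f", range(nx * ny * nz))   # "f" = float32, value == index
if sys.byteorder == "big":                    # the files are little endian
    vox.byteswap()

with tempfile.NamedTemporaryFile(suffix=".raw", delete=False) as f:
    vox.tofile(f)
    path = f.name

data = array.array("f")
with open(path, "rb") as f:
    data.fromfile(f, nx * ny * nz)
if sys.byteorder == "big":
    data.byteswap()
os.unlink(path)

def voxel(x, y, z):
    # flat index: x varies fastest, then y, then z
    return data[x + nx * (y + ny * z)]

assert voxel(1, 2, 1) == 1 + nx * (2 + ny * 1)  # == 21.0
```

If a loaded volume looks like noise or diagonal streaks, the extent order in the reader usually disagrees with this layout.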
+ +Then visualize it and extract two isosurfaces. One of them uses the color red, showing areas with low density (isovalue 40 and opacity 0.4), while the other uses the color blue, showing areas with high density (isovalue 150 and opacity 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. Only make the two isosurfaces visible. + +Finally, save the paraview state as "supernova/results/supernova.pvsm" \ No newline at end of file diff --git a/sci_volume_data/supernova/visualization_goals.txt b/sci_volume_data/supernova/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1a31d9c761c2d9c3b8edd1e431bc869d5752a59 --- /dev/null +++ b/sci_volume_data/supernova/visualization_goals.txt @@ -0,0 +1,5 @@ +1. Overall Visualization Goal: How well does the result achieve the overall goal of showing the supernova structure with two distinct isosurfaces representing different density regions? + +2. Does the red isosurface show low density areas (outside regions) with lower opacity? + +3. Does the blue isosurface show high density areas (inside regions) with higher opacity? \ No newline at end of file diff --git a/sci_volume_data/tacc_turbulence/data/tacc_turbulence.txt b/sci_volume_data/tacc_turbulence/data/tacc_turbulence.txt new file mode 100644 index 0000000000000000000000000000000000000000..920a08434839a22dbc4f26a407f3c25344b60d66 --- /dev/null +++ b/sci_volume_data/tacc_turbulence/data/tacc_turbulence.txt @@ -0,0 +1,6 @@ +Isotropic Turbulence +Description: The dataset represents a time step from an isotropic turbulence simulation. A single variable, enstrophy, is represented on a Cartesian grid.
+Data Type: float32 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 diff --git a/sci_volume_data/tacc_turbulence/task_description.txt b/sci_volume_data/tacc_turbulence/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..4562250083d4b226e04aadbbf5332a2c78057b61 --- /dev/null +++ b/sci_volume_data/tacc_turbulence/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Isotropic Turbulence dataset from "tacc_turbulence/data/tacc_turbulence_256x256x256_float32.raw", the information about this dataset: +Isotropic Turbulence +Description: The dataset represents a time step from an isotropic turbulence simulation. A single variable, enstrophy, is represented on a Cartesian grid. +Data Type: float32 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 256x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize the flow field. Apply volume rendering with a suitable color map to show the data distribution. Add streamlines or vectors if vector data is available. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "tacc_turbulence/results/tacc_turbulence.pvsm" \ No newline at end of file diff --git a/sci_volume_data/tacc_turbulence/visualization_goals.txt b/sci_volume_data/tacc_turbulence/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..e6716ba654c42326743ba42711f8267603d19564 --- /dev/null +++ b/sci_volume_data/tacc_turbulence/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result show the flow patterns or simulation dynamics in the Isotropic Turbulence dataset? + +2. Does the volume rendering effectively show the data distribution and gradients? + +3. Are flow features (vortices, boundaries, etc.) clearly visible? + +4. 
Is the color map appropriate for the physical quantity being visualized? \ No newline at end of file diff --git a/sci_volume_data/tooth/data/tooth.txt b/sci_volume_data/tooth/data/tooth.txt new file mode 100644 index 0000000000000000000000000000000000000000..37e7ff77cbe05b2dbcea438bdffff96f2aca12f3 --- /dev/null +++ b/sci_volume_data/tooth/data/tooth.txt @@ -0,0 +1,6 @@ +Tooth +Description: +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 103x94x161 diff --git a/sci_volume_data/tooth/task_description.txt b/sci_volume_data/tooth/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..6aaa3787b7cf4afbed22cbea98819b35336452e7 --- /dev/null +++ b/sci_volume_data/tooth/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the Tooth dataset from "tooth/data/tooth_103x94x161_uint8.raw", the information about this dataset: +Tooth +Description: +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 103x94x161 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it using appropriate techniques: +1. Apply volume rendering with a suitable transfer function to reveal internal structures +2. Extract at least one meaningful isosurface +3. Choose appropriate colors and opacity values for clarity + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "tooth/results/tooth.pvsm" \ No newline at end of file diff --git a/sci_volume_data/tooth/visualization_goals.txt b/sci_volume_data/tooth/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..f71d2b1008d0515906736d8388c095338511065b --- /dev/null +++ b/sci_volume_data/tooth/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the key features of the Tooth dataset? + +2. 
Does the volume rendering provide good insight into the internal structure? + +3. Are the isosurfaces placed at meaningful values? + +4. Is the overall visualization clear and informative? \ No newline at end of file diff --git a/sci_volume_data/tornado/.DS_Store b/sci_volume_data/tornado/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..220b84e2d787de93f54ba6e2fa0463bc561e7a53 Binary files /dev/null and b/sci_volume_data/tornado/.DS_Store differ diff --git a/sci_volume_data/tornado/GS/.DS_Store b/sci_volume_data/tornado/GS/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..83c588db58fbdbfa77ff940ca6a9ae4c88711de5 Binary files /dev/null and b/sci_volume_data/tornado/GS/.DS_Store differ diff --git a/sci_volume_data/tornado/GS/gs_diagonal_view.png b/sci_volume_data/tornado/GS/gs_diagonal_view.png new file mode 100644 index 0000000000000000000000000000000000000000..5888077d9d77a267624064323d19a5c97ea4a979 --- /dev/null +++ b/sci_volume_data/tornado/GS/gs_diagonal_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49af2e632fd82221e826b74f9b88bfd8afb19d655d813b664c5f2da304a60a26 +size 484144 diff --git a/sci_volume_data/tornado/GS/gs_front_view.png b/sci_volume_data/tornado/GS/gs_front_view.png new file mode 100644 index 0000000000000000000000000000000000000000..b2646b3720bf044090480339dd6db565a9749f1d --- /dev/null +++ b/sci_volume_data/tornado/GS/gs_front_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:812375bd3867ebfa42188641ed2d94408d64d6be93387dd7e345a90a54dd8232 +size 597258 diff --git a/sci_volume_data/tornado/GS/gs_side_view.png b/sci_volume_data/tornado/GS/gs_side_view.png new file mode 100644 index 0000000000000000000000000000000000000000..28a8f007c5fe310ee66a3961280387f36d8caa87 --- /dev/null +++ b/sci_volume_data/tornado/GS/gs_side_view.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:16904f34f00c8d366ea2ba16e6fa1576b65fdd9b249f2dca75af7875b10af050 +size 311675 diff --git a/sci_volume_data/tornado/GS/tornado_gs.pvsm b/sci_volume_data/tornado/GS/tornado_gs.pvsm new file mode 100644 index 0000000000000000000000000000000000000000..16201657bf71265f2e338f29988bbaa0aa7cf6cc --- /dev/null +++ b/sci_volume_data/tornado/GS/tornado_gs.pvsm @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0925273b0be89973865699b20ca9bc92214a10c0d69aa785bc7366ac96ec3e3 +size 597880 diff --git a/sci_volume_data/tornado/data/tornado.txt b/sci_volume_data/tornado/data/tornado.txt new file mode 100755 index 0000000000000000000000000000000000000000..38f9d225af8dec7692aafe5a3628aa1cfd86f270 --- /dev/null +++ b/sci_volume_data/tornado/data/tornado.txt @@ -0,0 +1,5 @@ +Tornado (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 64x64x64 +Number of Scalar Components: 3 \ No newline at end of file diff --git a/sci_volume_data/tornado/task_description.txt b/sci_volume_data/tornado/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..84d74157ead30e5cb0a9119988f1df56caba090e --- /dev/null +++ b/sci_volume_data/tornado/task_description.txt @@ -0,0 +1,19 @@ +Task: + +Load the tornado dataset from "tornado/data/tornado_64x64x64_float32_scalar3.raw", the information about this dataset: +Tornado (Vector) +Data Scalar Type: float +Data Byte Order: little Endian +Data Extent: 64x64x64 +Number of Scalar Components: 3 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Add a “glyph” filter under the tornado data to display velocity glyph, set an appropriate “Scale Factor” so the glyphs are visible. + +Then add a “stream tracer” filter under the tornado data to generate streamlines. Choose “Point Cloud” as “Seed Type”, and do not show sphere. 
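Under the hood, a stream tracer advances each seed point through the vector field with a numerical integrator; a small sketch of that idea using classic 4th-order Runge-Kutta on a synthetic solid-body-rotation field (standing in for the tornado data, which ParaView integrates for you):

```python
import math

def velocity(x, y):
    # Synthetic 2D rotational field; real stream tracing samples the loaded volume.
    return -y, x

def integrate_streamline(x0, y0, step=0.01, n_steps=1000):
    """Trace one streamline with 4th-order Runge-Kutta steps."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = velocity(x, y)
        k2 = velocity(x + 0.5 * step * k1[0], y + 0.5 * step * k1[1])
        k3 = velocity(x + 0.5 * step * k2[0], y + 0.5 * step * k2[1])
        k4 = velocity(x + step * k3[0], y + step * k3[1])
        x += step * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += step * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        pts.append((x, y))
    return pts
```

In a pure rotation, a seed placed on the unit circle should stay (approximately) at unit radius; the tube filter then sweeps a circular cross-section along exactly such a traced polyline.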
+ +Add a “tube” filter under the stream tracer you just created to generate tubes for visualizing the streamlines. Set an appropriate radius. Make the stream tracer invisible and the tube filter visible, so that the streamlines are rendered as tubes. + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "tornado/results/tornado.pvsm" \ No newline at end of file diff --git a/sci_volume_data/tornado/visualization_goals.txt b/sci_volume_data/tornado/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..9383d8f9b6518512955ac3c9476b19c3ac75c574 --- /dev/null +++ b/sci_volume_data/tornado/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result achieve the overall goal of showing tornado flow patterns with glyphs and streamlines? + +2. Glyph Visualization: Does the result show velocity glyphs that are appropriately sized and visible? + +3. Streamline Visualization: Does the result show streamlines that follow the flow patterns effectively? + +4. Tube Rendering: Are the streamlines rendered as tubes with appropriate thickness? \ No newline at end of file diff --git a/sci_volume_data/vertebra/data/vertebra.txt b/sci_volume_data/vertebra/data/vertebra.txt new file mode 100644 index 0000000000000000000000000000000000000000..73ad90c2d9fcf4507c49d53383034fe41716edde --- /dev/null +++ b/sci_volume_data/vertebra/data/vertebra.txt @@ -0,0 +1,6 @@ +Head Aneurism +Description: Rotational angiography scan of a head with an aneurysm. Only contrasted blood vessels are visible.
+Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.1953x0.1953x0.1953 +Data Extent: 512x512x512 diff --git a/sci_volume_data/vertebra/task_description.txt b/sci_volume_data/vertebra/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..56e0fee88cd7c88499cbf39bf17b83e57817737e --- /dev/null +++ b/sci_volume_data/vertebra/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Head Aneurism dataset from "vertebra/data/vertebra_512x512x512_uint16.raw", the information about this dataset: +Head Aneurism +Description: Rotational angiography scan of a head with an aneurysm. Only contrasted blood vessels are visible. +Data Type: uint16 +Data Byte Order: little Endian +Data Spacing: 0.1953x0.1953x0.1953 +Data Extent: 512x512x512 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "vertebra/results/vertebra.pvsm" \ No newline at end of file diff --git a/sci_volume_data/vertebra/visualization_goals.txt b/sci_volume_data/vertebra/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..94eb39a88077aa6fb757314d4a46a2ce37c5eef9 --- /dev/null +++ b/sci_volume_data/vertebra/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Head Aneurism dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. 
Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/vis_male/data/vis_male.txt b/sci_volume_data/vis_male/data/vis_male.txt new file mode 100644 index 0000000000000000000000000000000000000000..4589b46e147a793f3df0f9416ae90e54bb545fb3 --- /dev/null +++ b/sci_volume_data/vis_male/data/vis_male.txt @@ -0,0 +1,6 @@ +Head (Visible Male) +Description: Male head scan +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1.57774x0.995861x1.00797 +Data Extent: 128x256x256 diff --git a/sci_volume_data/vis_male/task_description.txt b/sci_volume_data/vis_male/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..3638147a1e9fb39a638bd6835668d0534841aa91 --- /dev/null +++ b/sci_volume_data/vis_male/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Head (Visible Male) dataset from "vis_male/data/vis_male_128x256x256_uint8.raw", the information about this dataset: +Head (Visible Male) +Description: Male head scan +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1.57774x0.995861x1.00797 +Data Extent: 128x256x256 +Data loading is very important, make sure you correctly load the dataset according to their features. + +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for soft tissue (lower isovalue, color: red, opacity: 0.3) and another for bone/dense structures (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. 
+ +Finally, save the paraview state as "vis_male/results/vis_male.pvsm" \ No newline at end of file diff --git a/sci_volume_data/vis_male/visualization_goals.txt b/sci_volume_data/vis_male/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..e761554c5ffe515aa8f39741970f216e9adab0e9 --- /dev/null +++ b/sci_volume_data/vis_male/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the anatomical structures in the Head (Visible Male) dataset? + +2. Does the visualization clearly distinguish between different tissue types or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for medical visualization? \ No newline at end of file diff --git a/sci_volume_data/zeiss/data/zeiss.txt b/sci_volume_data/zeiss/data/zeiss.txt new file mode 100644 index 0000000000000000000000000000000000000000..37f9ff138bd136fe46bd47338da6a01a763ca9a4 --- /dev/null +++ b/sci_volume_data/zeiss/data/zeiss.txt @@ -0,0 +1,6 @@ +Zeiss +Description: Car part reconstructed from projections. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 680x680x680 diff --git a/sci_volume_data/zeiss/task_description.txt b/sci_volume_data/zeiss/task_description.txt new file mode 100644 index 0000000000000000000000000000000000000000..3c9403dcb20d8f2a3288df92dda2f19ffb226564 --- /dev/null +++ b/sci_volume_data/zeiss/task_description.txt @@ -0,0 +1,16 @@ +Task: + +Load the Zeiss dataset from "zeiss/data/zeiss_680x680x680_uint8.raw", the information about this dataset: +Zeiss +Description: Car part reconstructed from projections. +Data Type: uint8 +Data Byte Order: little Endian +Data Spacing: 1x1x1 +Data Extent: 680x680x680 +Data loading is very important, make sure you correctly load the dataset according to their features. 
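A quick sanity check that the file on disk matches the stated extent and scalar type is to compare sizes; a tiny sketch (names are illustrative):

```python
BYTES_PER_SAMPLE = {"uint8": 1, "uint16": 2, "float32": 4}

def expected_raw_size(extent, scalar_type):
    """Byte size a raw volume should have, given its metadata."""
    nx, ny, nz = extent
    return nx * ny * nz * BYTES_PER_SAMPLE[scalar_type]

# The Zeiss volume: 680 x 680 x 680 uint8 samples.
print(expected_raw_size((680, 680, 680), "uint8"))  # 314432000
```

Comparing this number against `os.path.getsize(...)` before loading catches a mis-specified extent or data type early.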
+ +Then visualize it and extract isosurfaces to reveal the internal structures. Create one isosurface for low-density material (lower isovalue, color: red, opacity: 0.3) and another for dense material (higher isovalue, color: white, opacity: 0.8). + +Please think step by step and make sure to fulfill all the visualization goals mentioned above. + +Finally, save the paraview state as "zeiss/results/zeiss.pvsm" \ No newline at end of file diff --git a/sci_volume_data/zeiss/visualization_goals.txt b/sci_volume_data/zeiss/visualization_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..e5c652d337a350324f8faa0c3a5da11c4519672f --- /dev/null +++ b/sci_volume_data/zeiss/visualization_goals.txt @@ -0,0 +1,7 @@ +1. Overall Visualization Goal: How well does the result reveal the structural features of the Zeiss dataset? + +2. Does the visualization clearly distinguish between different material or density regions? + +3. Are the isosurfaces positioned at appropriate values to highlight key features? + +4. Is the color scheme and opacity appropriate for visualizing a scanned industrial part? \ No newline at end of file diff --git a/scientific_vis_taxonomy.md b/scientific_vis_taxonomy.md new file mode 100644 index 0000000000000000000000000000000000000000..749d831922de12f15a4c3156248e332fb9ac0ec8 --- /dev/null +++ b/scientific_vis_taxonomy.md @@ -0,0 +1,263 @@ +# Scientific Visualization Task Taxonomy for Spatial and Volumetric Data + +## Executive Summary + +Scientific visualization differs fundamentally from information visualization in its focus on **spatial data inherent to physical phenomena**. While information visualization deals with abstract data requiring spatial mappings, scientific visualization works with data that has intrinsic 3D/4D spatial structure - from molecular dynamics simulations to atmospheric models.
This taxonomy organizes visualization tasks specific to scalar fields, vector fields, and tensor fields, emphasizing the unique challenges of extracting meaningful features from continuous volumetric data. + +## I. Data-Centric Task Categories + +### 1. Scalar Field Visualization Tasks + +**Isosurface Extraction** +- *Task*: Extract surfaces of constant value from 3D scalar fields +- *Techniques*: Marching cubes, marching tetrahedra, dual contouring +- *What's extracted*: Material boundaries, pressure fronts, temperature interfaces, density transitions +- *Applications*: Medical imaging (organ boundaries), meteorology (pressure systems), materials science (phase boundaries) + +**Volume Rendering Tasks** +- *Direct Volume Rendering*: Map scalar values to color/opacity without intermediate geometry +- *Transfer Function Design*: Classify tissues, materials, or phenomena by scalar ranges +- *Multi-dimensional Transfer Functions*: Use gradient magnitude for boundary enhancement +- *What's extracted*: Soft tissue boundaries, density variations, concentration distributions + +**Critical Point Analysis** +- *Task*: Identify local extrema and saddle points in scalar fields +- *Types*: Minima, maxima, saddle points (2D), monkey saddles (3D) +- *Applications*: Terrain analysis, pressure system identification, chemical reaction pathways + +### 2. 
Vector Field Visualization Tasks + +**Flow Topology Extraction** +- *Critical Points*: Sources, sinks, saddles, centers, foci (2D); additional types in 3D +- *Separatrices*: Streamlines/surfaces dividing flow into distinct regions +- *Periodic Orbits*: Closed streamlines indicating recirculation +- *What's revealed*: Flow structure, stagnation points, vortex cores, attachment/separation lines + +**Integral Curve Computation** +- *Streamlines*: Instantaneous flow direction (steady fields) +- *Pathlines*: Particle trajectories over time (unsteady fields) +- *Streaklines*: Connection of particles passing through same point +- *Timelines*: Evolution of material lines +- *Applications*: Aerodynamics, blood flow analysis, ocean currents + +**Vortex Detection and Characterization** +- *Lambda2 Method*: Pressure minimum detection +- *Q-Criterion*: Balance of rotation vs. strain +- *Swirling Strength*: Complex eigenvalue analysis +- *What's extracted*: Vortex cores, strength, orientation, evolution + +### 3. Tensor Field Visualization Tasks + +**Diffusion Tensor Imaging (DTI)** +- *Fiber Tracking*: Neural pathway reconstruction +- *Fractional Anisotropy*: Quantify directional dependence +- *Principal Direction Extraction*: Dominant diffusion directions + +**Stress/Strain Tensor Analysis** +- *Principal Stress Directions*: Maximum/minimum stress orientations +- *Von Mises Stress*: Failure prediction in materials +- *Tensor Topology*: Degenerate point classification + +## II. Feature-Based Task Hierarchies + +### Level 1: Detection Tasks +- **Identify** presence of features (vortices, shocks, boundaries) +- **Locate** spatial positions of features +- **Count** number of distinct features +- **Classify** feature types (e.g., vortex vs. 
sink) + +### Level 2: Quantification Tasks +- **Measure** feature properties (size, strength, orientation) +- **Compute** derived quantities (vorticity, divergence, helicity) +- **Evaluate** feature quality metrics (confidence, uncertainty) + +### Level 3: Tracking Tasks +- **Correspond** features across time steps +- **Track** feature evolution (birth, death, splitting, merging) +- **Predict** future feature behavior +- **Analyze** feature lifecycles + +### Level 4: Comparison Tasks +- **Compare** features across datasets +- **Correlate** features with other phenomena +- **Validate** against experimental/observational data + +## III. Spatial Analysis Tasks + +### Region-Based Tasks +**Segmentation** +- Watershed segmentation of scalar fields +- Region growing from seed points +- Level set methods for boundary evolution + +**Clustering** +- Spatial clustering of similar values +- Feature-based clustering (e.g., vortex regions) +- Multi-field clustering + +### Boundary Tasks +**Interface Tracking** +- Material boundary evolution +- Shock front propagation +- Phase transition surfaces + +**Surface Extraction** +- Interval volumes between isosurfaces +- Stream surfaces from vector fields +- Separation surfaces in flows + +## IV. Multi-Field and Multi-Resolution Tasks + +### Correlation Analysis +- **Cross-field relationships**: Temperature-pressure correlations +- **Field alignment**: Vector field alignment with scalar gradients +- **Causal relationships**: Identify driving phenomena + +### Scale-Space Analysis +- **Multi-resolution feature extraction**: Features at different scales +- **Scale-space tracking**: Feature persistence across scales +- **Hierarchical representations**: Coarse-to-fine exploration + +## V. 
Domain-Specific Task Taxonomies + +### Computational Fluid Dynamics (CFD) +**Turbulence Analysis** +- Energy cascade visualization +- Coherent structure identification +- Reynolds stress tensor analysis + +**Boundary Layer Analysis** +- Separation point detection +- Transition zone identification +- Wall shear stress patterns + +### Medical Imaging +**Anatomical Structure Extraction** +- Organ segmentation +- Vessel tree extraction +- Tumor boundary delineation + +**Functional Analysis** +- Blood flow patterns +- Diffusion tensor tractography +- Perfusion analysis + +### Climate and Weather +**Atmospheric Feature Detection** +- Cyclone/anticyclone identification +- Jet stream extraction +- Frontal system analysis + +**Ocean Current Analysis** +- Eddy detection and tracking +- Upwelling/downwelling regions +- Thermocline visualization + +### Molecular Dynamics +**Structural Analysis** +- Secondary structure identification +- Binding site detection +- Conformational changes + +**Interaction Analysis** +- Hydrogen bond networks +- Hydrophobic interactions +- Electrostatic potential surfaces + +## VI. Interaction Tasks for 3D/4D Data + +### Navigation Tasks +- **Fly-through**: Navigate inside volume +- **Orbit**: Examine from outside +- **Slice-based**: 2D cross-sections +- **Time navigation**: Temporal exploration + +### Manipulation Tasks +- **Clipping**: Remove occluding regions +- **Probing**: Query values at points +- **Seeding**: Place streamlines/particles +- **Annotation**: Mark features of interest + +### Selection Tasks +- **Volume selection**: 3D region of interest +- **Feature selection**: Individual structures +- **Threshold selection**: Isosurface values +- **Transfer function editing**: Classification adjustment + +## VII. 
Uncertainty and Validation Tasks + +### Uncertainty Visualization +- **Scalar uncertainty**: Error bars, confidence volumes +- **Vector uncertainty**: Cone glyphs, probability fields +- **Topology uncertainty**: Critical point stability + +### Validation Tasks +- **Ground truth comparison**: Experimental validation +- **Ensemble analysis**: Multiple simulation runs +- **Convergence analysis**: Numerical accuracy + +## VIII. Performance-Critical Tasks + +### Real-Time Requirements +- **Interactive exploration**: >30 fps navigation +- **Progressive refinement**: Coarse-to-fine rendering +- **Level-of-detail**: Adaptive resolution + +### Large Data Handling +- **Out-of-core**: Data larger than memory +- **Parallel processing**: Distributed visualization +- **Data reduction**: Feature-based compression + +## IX. Extraction Outcomes + +### Geometric Primitives +- Points (critical points, voxels) +- Lines (streamlines, vortex cores) +- Surfaces (isosurfaces, stream surfaces) +- Volumes (interval volumes, vortex regions) + +### Quantitative Measures +- Scalar statistics (mean, variance, extrema) +- Vector quantities (flux, circulation, helicity) +- Topological numbers (Euler characteristic, genus) +- Feature attributes (size, strength, lifetime) + +### Structural Information +- Connectivity (skeleton, Reeb graph) +- Hierarchy (merge tree, contour tree) +- Relationships (spatial, temporal, causal) +- Patterns (symmetries, periodicities) + +## X. 
Task Complexity Levels + +### Low-Level Tasks (Milliseconds) +- Voxel sampling +- Gradient computation +- Local neighborhood operations + +### Mid-Level Tasks (Seconds) +- Isosurface extraction +- Streamline integration +- Local feature detection + +### High-Level Tasks (Minutes-Hours) +- Global topology computation +- Feature tracking over time +- Multi-field correlation analysis + +### Meta-Level Tasks (Hours-Days) +- Parameter space exploration +- Ensemble analysis +- Validation studies + +## Conclusion + +Scientific visualization tasks form a rich hierarchy from low-level geometric operations to high-level scientific discovery. Unlike information visualization's focus on abstract data mapping, scientific visualization must preserve and reveal the inherent spatial structure of physical phenomena. The unique challenges include: + +1. **Continuous-to-discrete**: Sampling and reconstruction issues +2. **3D occlusion**: Need for cutting, transparency, and focus+context +3. **Multi-scale phenomena**: Features at vastly different scales +4. **Temporal evolution**: 4D data with complex dynamics +5. **Computational intensity**: Massive datasets requiring parallel processing + +Success in scientific visualization requires careful orchestration of these tasks, from efficient low-level algorithms for volume rendering and isosurface extraction, through robust feature detection methods, to high-level scientific interpretation and validation. The field continues to evolve with advances in GPU computing, machine learning-assisted feature detection, and immersive visualization technologies, but the fundamental task taxonomy remains grounded in extracting meaningful structures from spatial scientific data.
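As a concrete instance of the vortex-detection tasks listed above, the Q-criterion compares rotation against strain in the velocity-gradient tensor, Q = ½(‖Ω‖² − ‖S‖²); a minimal illustrative sketch on analytic 3×3 tensors (in practice the gradient tensor is sampled per voxel from the data):

```python
def frobenius_sq(m):
    """Squared Frobenius norm of a 3x3 matrix."""
    return sum(m[i][j] ** 2 for i in range(3) for j in range(3))

def q_criterion(jacobian):
    """Q = 0.5 * (||Omega||^2 - ||S||^2) for a 3x3 velocity-gradient tensor."""
    s = [[0.5 * (jacobian[i][j] + jacobian[j][i]) for j in range(3)]
         for i in range(3)]
    omega = [[0.5 * (jacobian[i][j] - jacobian[j][i]) for j in range(3)]
             for i in range(3)]
    return 0.5 * (frobenius_sq(omega) - frobenius_sq(s))

# Solid-body rotation u = (-y, x, 0): pure rotation, so Q > 0 (vortex-like).
rotation = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(q_criterion(rotation))  # 1.0

# Pure shear u = (y, 0, 0): rotation and strain balance, so Q = 0.
shear = [[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(q_criterion(shear))  # 0.0
```

Thresholding Q > 0 over the sampled field is what turns this pointwise measure into the extracted vortex regions described in Section IX.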