# Test Cases for SciVisAgentBench Topology Tasks
# This test evaluates the ability to complete specific visualization tasks
# with detailed requirements and evaluation criteria
# 1. QMCPACK
# Quantum Monte Carlo simulation of an unspecified field for an unspecified molecule. The data was taken from the 145th orbital.
# The data was accessed from SDRBench (note: please cite https://sdrbench.github.io/).
# The data is released under the University of Illinois open source license.
- vars:
question: |
1. Please load the dataset from "QMCPACK/data/QMCPACK.vti".
2. Compute the critical points of the scalar field.
3. Save the critical points as "QMCPACK/results/{agent_mode}/QMCPACK.vtk" in legacy VTK format.
- The output should contain the critical points as point data
- Include an array called "CriticalType" that labels each point with its critical point type. Use the following convention:
* 0 for minima
* 1 for 1-saddles
* 2 for 2-saddles
* 3 for maxima
* 4 for degenerate critical points
- The point coordinates should be in index space (grid coordinates), not world coordinates
4. Analyze the visualization and answer the following questions:
Q1: How many index 1 saddles are there:
(A) 248 (B) 274 (C) 299 (D) 344
Q2: What is the type of critical point closest to coordinates (4,58,12):
(A) minimum (B) 1-saddle (C) 2-saddle (D) maximum
Save the answers to the analysis questions in plain text as "QMCPACK/results/{agent_mode}/answers.txt".
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: QMCPACK/GS/QMCPACK_eval.py
eval_function: evaluateQmcpackCriticalPoints
gs_file: QMCPACK/GS/QMCPACK_gs.vtk
rs_file: QMCPACK/results/{agent_mode}/QMCPACK.vtk
- type: llm-rubric
subtype: text
value: |
1. Q1 correct answer: (C)
2. Q2 correct answer: (D)
# 2. Brain
# Symmetric 2D 2x2 tensor field. The (1,1), (1,2) and (2,2) components are given by the arrays A, B, and D respectively (the (2,1) component is equal to the (1,2) component).
# This is a 2D slice of the diffusion tensor of an MRI scan of a brain.
# Specifically, to produce the data, we started by downloading the data from patient 23 from this study: https://www.nature.com/articles/s41597-021-01092-6.
# Then, we extracted the diffusion tensor. We discarded the (2,1) entry that was produced, and set values outside of the brain to 0.
# We then took the slice where Z=50 (zero indexed) and discarded the components of the tensor that involve Z, yielding a 2x2 tensor.
- vars:
question: |
1. Load the file "brain/data/brain.vti". It is a symmetric tensor field, where the (1,1), (1,2) and (2,2) components of the tensor are respectively given by the arrays A, B, and D.
2. Compute degenerate points of the tensor field.
3. Save the degenerate points as "brain/results/{agent_mode}/brain.vtk" in legacy VTK format. Label the type of degenerate point for each point in an array called DegeneracyType. Use a value of 0 for trisectors and 1 for wedges.
4. Analyze the visualization and answer the following questions:
Q1: Are there more trisectors than wedges? (yes/no)
Q2: Out of all degenerate points, the sum of one point's coordinates is the highest. What is this highest sum, rounded to the nearest integer?
(A) 124 (B) 136 (C) 148 (D) 160
Save the answers to the analysis questions in plain text as "brain/results/{agent_mode}/answers.txt".
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: brain/GS/brain_eval.py
eval_function: evaluateDegeneratePoints
gs_file: brain/GS/brain_gs.vtk
rs_file: brain/results/{agent_mode}/brain.vtk
- type: llm-rubric
subtype: text
value: |
1. Q1 correct answer: yes
2. Q2 correct answer: (B)
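# The trisector/wedge classification above can be sketched with the standard
# discriminant test (Delmarcelle-Hesselink style). This is an illustrative
# function, not the evaluator's code; the inputs are partial derivatives of
# (A-D)/2 and B at the degenerate point, e.g. from central differences.

```python
# Hypothetical sketch: classify a degenerate point of a symmetric 2x2 tensor
# field. A degenerate point occurs where A == D and B == 0; its type follows
# the sign of the discriminant delta of the local linearization.

def degeneracy_type(dAmD_dx, dAmD_dy, dB_dx, dB_dy):
    """dAmD_*: partial derivatives of (A - D)/2; dB_*: partial derivatives
    of B. Returns 0 (trisector) or 1 (wedge), matching the task convention."""
    delta = dAmD_dx * dB_dy - dAmD_dy * dB_dx
    return 0 if delta < 0 else 1
```

# A benchmark implementation would additionally locate the degenerate points
# (cell-wise sign tests on A-D and B) before classifying them.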
# 3. Heated Cylinder
# "The dataset is a flow around a heated cylinder.
# The data was taken from the computer graphics lab at ETH Zurich: https://cgl.ethz.ch/research/visualization/data.php.
# We took time step 1000 of the "Heated Cylinder with Bossinesq Approximation" dataset.
# We computed the flow magnitude to produce a scalar field.
- vars:
question: |
1. Please load the file "cylinder/data/cylinder.vti"
2. Apply persistence simplification of 0.01 to the Speed field.
3. Compute the Morse-Smale segmentation of the simplified Speed field.
4. Save the Morse-Smale segmentation as "cylinder/results/{agent_mode}/cylinder.vti". It should have a point array called Partition. For each point x, the array "Partition" should store the id number of the region in the segmentation that x belongs to.
5. Analyze the visualization and answer the following questions:
Q1: How many unique partition regions are there?
(A) 152 (B) 163 (C) 174 (D) 185
Q2: How many points are in the largest partition region?
(A) 6879 (B) 7968 (C) 8796 (D) 9687
Save the answers to the analysis questions in plain text as "cylinder/results/{agent_mode}/answers.txt".
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: cylinder/GS/cylinder_eval.py
eval_function: evaluateMSSEgmentation
gs_file: cylinder/GS/cylinder_gs.vti
rs_file: cylinder/results/{agent_mode}/cylinder.vti
- type: llm-rubric
subtype: text
value: |
1. Q1 correct answer: (A)
2. Q2 correct answer: (D)
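# The "Partition" array above can be illustrated with a toy steepest-ascent
# labeling: assign each grid point to the local maximum it reaches. Real
# Morse-Smale segmentation (e.g. TTK's) is combinatorial and handles saddles
# and simplification properly; this sketch only conveys what the array stores.

```python
import numpy as np

def ascending_manifold_labels(field):
    """Toy sketch: label each point of a 2D scalar field by the local maximum
    reached via repeated steepest-ascent steps over the 4-neighborhood."""
    h, w = field.shape

    def best_neighbor(i, j):
        # Candidate set: the point itself plus its in-bounds 4-neighbors.
        cand = [(i, j)] + [(i + di, j + dj)
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= i + di < h and 0 <= j + dj < w]
        return max(cand, key=lambda p: field[p])

    labels = np.empty((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            p = (i, j)
            while True:
                q = best_neighbor(*p)
                if q == p:
                    break
                p = q
            labels[i, j] = p[0] * w + p[1]  # id of the destination maximum
    return labels
```

# Each distinct label is one region; counting unique labels answers a question
# like Q1, and counting label occurrences answers one like Q2.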
# 4. Hurricane Isabel
# Wind speed at each point in a 3D region for a single time step for hurricane Isabel.
# The data was accessed from the SDR Bench (please cite): https://sdrbench.github.io/.
# It, in turn, came from the IEEE SciVis contest 2004: http://vis.computer.org/vis2004contest/data.html.
# We derived a wind speed scalar field from the three components of the wind velocity.
# We truncated the data from 500x500x100 to 500x500x90 so that the land component would not be present in the field.
- vars:
question: |
1. Load the file "isabel/data/isabel.vti".
2. Apply persistence simplification to the field "sf" with a persistence threshold of 0.04.
3. Compute the merge tree of the simplified field.
4. Save the nodes of the merge tree as "isabel/results/{agent_mode}/isabel_nodes.vtk" in legacy VTK format.
This file should have two point arrays. One should be called "CriticalType" and should store the type of critical point for each node.
Use the following convention: 0: minima. 1: 1-saddles. 2: 2-saddles. 3: maxima. 4: degenerate critical points.
The other point array should be called "Scalar" and should contain the scalar field value at each point in the merge tree.
5. Save the edges of the merge tree as "isabel/results/{agent_mode}/isabel_edges.vtk" in legacy VTK format.
The file should store each edge as a separate cell with type vtkLine.
6. Analyze the visualization and answer the following questions:
Q1: The parent node of the leaf (377, 265, 0) has coordinates (x,y,z). What is x+y+z?
(A) 627 (B) 854 (C) 992 (D) 1039
Q2: How many edges are there in the merge tree?
(A) 154 (B) 195 (C) 204 (D) 254
Q3: What is the highest scalar field value of a minimum, rounded to the nearest whole number?
(A) 12 (B) 26 (C) 31 (D) 58
Save the answers to the analysis questions in plain text as "isabel/results/{agent_mode}/answers.txt".
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: isabel/GS/isabel_eval.py
eval_function: evaluateMergetree
gs_file:
- isabel/GS/isabel_nodes_gs.vtk
- isabel/GS/isabel_edges_gs.vtk
rs_file:
- isabel/results/{agent_mode}/isabel_nodes.vtk
- isabel/results/{agent_mode}/isabel_edges.vtk
- type: llm-rubric
subtype: text
value: |
1. Q1 correct answer: (A)
2. Q2 correct answer: (B)
3. Q3 correct answer: (C)
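# The merge-tree structure above can be illustrated with a toy union-find sweep
# on a 1D field: process vertices from high to low scalar value; a vertex that
# starts a new superlevel-set component is a leaf (maximum), and one that joins
# two components is a saddle node. This is an illustrative simplification; real
# merge-tree codes operate on arbitrary meshes in any dimension.

```python
def merge_tree_events(values):
    """Toy sketch: sweep a 1D scalar field from high to low and track
    superlevel-set components with union-find. Returns (leaves, saddles)
    as lists of vertex indices."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    active = [False] * n
    leaves, saddles = [], []
    for i in sorted(range(n), key=lambda k: -values[k]):
        active[i] = True
        comps = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and active[j]}
        if not comps:
            leaves.append(i)   # new component born: a maximum (leaf)
        elif len(comps) == 2:
            saddles.append(i)  # two components merge: a saddle
        for r in comps:
            parent[r] = i
    return leaves, saddles
```

# Since the merge tree is a tree, its edge count is one less than its node
# count, which is how a question like Q2 can be cross-checked.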
# 5. Ocean Flow
# This is the 2x2 gradient tensor field of a slice of the Indian Ocean.
# This tensor field is derived from the Global Ocean Physics Reanalysis dataset from EU Copernicus Marine.
# The gradient was taken numerically. For exact specifics, see page 12 of https://arxiv.org/pdf/2508.09235 (Appendix A, where the Ocean dataset is described).
- vars:
question: |
1. Please load the asymmetric tensor field from "ocean/data/ocean.vti". The (1,1), (1,2), (2,1) and (2,2) entries are respectively given by the arrays A, B, C, and D.
2. Compute the eigenvector partition of the dataset.
3. Save the degenerate points as "ocean/results/{agent_mode}/ocean_points.vtk" in legacy VTK format.
Include a point array called DegeneracyType which classifies each degenerate point.
It should have a value of 0 for trisectors and 1 for wedges.
4. Save the partition information from the eigenvector partition as "ocean/results/{agent_mode}/ocean_eigenvector.vti" as VTK image data.
It should have a point array called Partition that stores the region identifiers as follows: 0: W_{c,s}. 1: W_{r,s}. 2: W_{r,n}. 3: W_{c,n}
5. Compute the eigenvalue partition of the dataset.
6. Save the partition information from the eigenvalue partition as "ocean/results/{agent_mode}/ocean_eigenvalue.vti" as VTK image data.
It should have a point array called Partition that stores the region identifiers as follows: 0: positive scaling. 1: counterclockwise rotation.
2: negative scaling. 3: clockwise rotation. 4: anisotropic stretching.
7. Analyze the visualization and answer the following questions:
Q1: Are there more trisectors than wedges? (yes/no)
Q2: How many points have the most common classification in the eigenvector partition?
(A) 752342 (B) 802842 (C) 826348 (D) 994682
Q3: Which is the least common classification in the eigenvalue partition?
(A) positive scaling (B) counterclockwise rotation (C) negative scaling (D) clockwise rotation
Save the answers to the analysis questions in plain text as "ocean/results/{agent_mode}/answers.txt".
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: ocean/GS/ocean_eval.py
eval_function: evaluate2DAsymmetricTFTopology
gs_file:
- ocean/GS/ocean_points_gs.vtk
- ocean/GS/ocean_eigenvector_gs.vti
- ocean/GS/ocean_eigenvalue_gs.vti
rs_file:
- ocean/results/{agent_mode}/ocean_points.vtk
- ocean/results/{agent_mode}/ocean_eigenvector.vti
- ocean/results/{agent_mode}/ocean_eigenvalue.vti
- type: llm-rubric
subtype: text
value: |
1. Q1 correct answer: no
2. Q2 correct answer: (C)
3. Q3 correct answer: (C)
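# The eigenvalue partition above can be sketched via the usual decomposition of
# an asymmetric 2x2 tensor into isotropic scaling, rotation, and anisotropic
# stretching parts (Zheng-Pang style). This is an illustrative classifier; the
# benchmark's exact tie-breaking at region boundaries may differ.

```python
import math

def eigenvalue_class(A, B, C, D):
    """Toy sketch: classify [[A, B], [C, D]] by its dominant component,
    using the ids from the task above:
    0: positive scaling, 1: counterclockwise rotation, 2: negative scaling,
    3: clockwise rotation, 4: anisotropic stretching."""
    gamma_d = (A + D) / 2.0                              # isotropic scaling
    gamma_r = (C - B) / 2.0                              # rotation
    gamma_s = math.hypot((A - D) / 2.0, (B + C) / 2.0)   # anisotropic stretching
    m = max(abs(gamma_d), abs(gamma_r), gamma_s)
    if m == gamma_s:
        return 4
    if m == abs(gamma_d):
        return 0 if gamma_d > 0 else 2
    return 1 if gamma_r > 0 else 3
```

# Applying this per grid point yields the "Partition" array for the eigenvalue
# partition; histogramming the labels answers a question like Q3.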
# 6. noisyTerrain
# This dataset is a terrain with random scalar values added to create noise, originally from Julien Tierny. See https://github.com/topology-tool-kit/ttk-data.
- vars:
question: |
1. Load the dataset from "noisyTerrain/data/noisyTerrain.vtu".
2. Compute the persistence diagram on the scalar field named "Blend".
3. Apply a threshold to filter out pairs with persistence value less than 1.
4. Save the persistence diagram as "noisyTerrain/results/{agent_mode}/noisyTerrain.vtk" in legacy VTK format.
- The output should contain the points in the persistence diagram as point data, with each persistence pair represented as a cell.
- Include the following three scalar arrays with the given names and purposes:
* "Birth" array: store the birth value of each pair.
* "Persistence" array: store the persistence value of each pair.
* "IsFinite" array: use 1 to mark finite persistence and 0 to mark infinite persistence.
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: noisyTerrain/GS/noisyTerrain_eval.py
eval_function: evaluateNoisyTerrainPersistenceDiagram
gs_file:
- noisyTerrain/GS/noisyTerrain_gs.vtk
rs_file:
- noisyTerrain/results/{agent_mode}/noisyTerrain.vtk
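# The persistence threshold and the three required arrays can be sketched as
# follows. This is an illustrative helper operating on (birth, death) tuples,
# not the evaluator's code; how infinite pairs are encoded in the input is an
# assumption (here, death = float('inf')).

```python
def filter_pairs(pairs, min_persistence=1.0):
    """Toy sketch: keep persistence pairs with persistence >= min_persistence
    and derive the "Birth", "Persistence", and "IsFinite" arrays the task
    above requires. pairs: iterable of (birth, death); death may be inf."""
    birth, persistence, is_finite = [], [], []
    for b, d in pairs:
        p = d - b
        if p < min_persistence:
            continue  # filtered out by the threshold
        birth.append(b)
        persistence.append(p)
        is_finite.append(0 if p == float("inf") else 1)
    return birth, persistence, is_finite
```

# Note that the essential (infinite-persistence) pair always survives the
# threshold, which is why the IsFinite flag is needed downstream.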
# 7. molecule
# This dataset contains electron density and reduced gradient for a simple ethane-diol molecule, originally from Roberto Alvarez Boto. See https://github.com/topology-tool-kit/ttk-data.
- vars:
question: |
1. Load the data file "molecule/data/molecule.vti".
2. Compute the Morse-Smale segmentation on the scalar field named "log(s)".
3. Save the Morse-Smale segmentation as "molecule/results/{agent_mode}/molecule.vti".
It should have a point array called "Segmentation".
For each point x, the array "Segmentation" should store the id number of the region in the segmentation that x belongs to.
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: molecule/GS/molecule_eval.py
eval_function: evaluateMoleculeSegmentation
gs_file:
- molecule/GS/molecule_gs.vti
rs_file:
- molecule/results/{agent_mode}/molecule.vti
# 8. moons
# This 2D data set is based on the scikit-learn clustering examples (see https://scikit-learn.org/stable/modules/clustering.html); a density field was computed from the original point cloud using Gaussian resampling.
- vars:
question: |
1. Load the data file "moons/data/moons.vti".
2. Apply topological simplification to the field "SplatterValues" with a persistence threshold of 10.
3. Compute the Morse-Smale segmentation on the simplified scalar field.
4. Save only the Ascending Manifold as "moons/results/{agent_mode}/moons.vti".
It should have a point array called "AscendingManifold".
For each point x, the array "AscendingManifold" should store the id number of the region that x belongs to.
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: moons/GS/moons_eval.py
eval_function: evaluateMoonAscendingManifold
gs_file:
- moons/GS/moons_gs.vti
rs_file:
- moons/results/{agent_mode}/moons.vti
# 9. dragon
# The dataset is the scanned dragon model in the ttk-data GitHub repo (https://github.com/topology-tool-kit/ttk-data), originally from VisionAIR (VISION Advanced Infrastructure for Research).
- vars:
question: |
1. Load the dataset from "dragon/data/dragon.vtu".
2. Compute the Morse-Smale complex on the scalar field named "density". Make sure 1-Separatrices are computed.
3. Compute the critical points on the same scalar field.
4. Save the critical points as "dragon/results/{agent_mode}/dragon.vtk" in legacy VTK format.
- The output should contain the critical points as point data
- Include an array called "CriticalType" that labels each point with its critical point type. Use the following convention:
* 0 for minima
* 1 for 1-saddles
* 2 for 2-saddles
* 3 for maxima
- The point coordinates should be in world coordinates
Do not save any files other than the specified result files.
assert:
- type: rule_based
eval_script: dragon/GS/dragon_eval.py
eval_function: evaluateDragonCriticalPoints
gs_file:
- dragon/GS/dragon_gs.vtk
rs_file:
- dragon/results/{agent_mode}/dragon.vtk |
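# A recurring detail across these tasks is the coordinate frame: task 1 asks
# for index-space coordinates, while this task asks for world coordinates. For
# VTK image data the mapping is world = origin + spacing * index, componentwise
# (dragon.vtu is unstructured, so its points are already in world coordinates).
# A minimal sketch of the conversion, with hypothetical default arguments:

```python
def index_to_world(index, origin=(0.0, 0.0, 0.0), spacing=(1.0, 1.0, 1.0)):
    """Convert an (i, j, k) grid index of a VTK image dataset to world
    coordinates using the dataset's origin and spacing."""
    return tuple(o + s * i for o, s, i in zip(origin, spacing, index))
```

# Mixing up the two frames is an easy way to fail the rule-based point-matching
# checks even when the topology computation itself is correct.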