Abstract

We introduce a new approach to automatically extract an idealized logical structure from a parallel execution trace. We use this structure to define intuitive metrics such as the lateness of a process involved in a parallel execution. By analyzing and illustrating traces in terms of logical steps, we leverage a developer's understanding of the happened-before relations in a parallel program. This technique can uncover dependency chains, elucidate communication patterns, and highlight sources and propagation of delays, all of which may be obscured in a traditional trace visualization.

Extracting Logical Structure

The logical structure of a program is the ordering of events implied by that program. We describe the logical structure by assigning a logical step to each event.

Structure extraction occurs in two phases:
1. Partitioning related communication
2. Step assignment

Partitioning

Partitions represent non-overlapping application phases. If not predefined, we derive them from the trace: matching sends and receives, and communication handled by the same MPI call, must be related and thus placed in the same partition. Merging on this basis can create cycles in the partition ordering. Communication partitions forming a cycle do not permit a partial order, so we infer these partitions are related and merge them.

In addition to merging due to ordering constraints, we can optionally merge due to behavioral assumptions. For example, in bulk synchronous codes we expect each process to be active at some distance in the partition graph.

Step Assignment

Each partition is independently assigned steps based on two principles:
1. Happened-before relationships must be maintained
2. Send events have greater impact on structure

Consider a trace segment from an 8-process run of the pF3D stencil communication benchmark [1]. First we determine groups of simultaneous sends (gray), using receives only for ordering. Then we assign the least step possible to each event. Finally, we insert aggregated non-communication events between the sends and receives and determine global steps using the partition ordering.

Temporal Metrics

Having determined a logical structure, we can calculate how late an event was relative to its peers. We define lateness as the excess completion time over the earliest related event at a step.

We visualize a portion of an MG [2] trace using traditional methods as represented by Vampir [3] (left) and using logical structure and lateness (right). In the latter, the communication pattern and the propagation of delays are clear.

We classify four situations contributing to event lateness. Using this classification, we can narrow our focus to events where lateness originates by subtracting out propagated lateness. This differential lateness allows us to pinpoint sources of delays automatically.

Case Study

We analyze a massively parallel algorithm to compute merge trees. The algorithm relies on a global gather-scatter approach where each level requires messages sent both up and down a k-ary gather tree.

Below are the Vampir (left) and logical structure (right) visualizations of a 16-process, 4-ary merge tree calculation. In the logical structure view, lateness reflects data-dependent load imbalance. Logical steps highlight the gather tree structure, revealing that the gather processes send back to the leaves before sending up to the root, missing an opportunity for more aggressive pipelining.

The 1024-process, 8-ary tree shows similar issues. The recurring "panhandle" shape highlights waiting due to sending down before up.

References

1. C. H. Still et al. Filamentation and forward Brillouin scatter of entire smoothed and aberrated laser beams. Physics of Plasmas, 7(5):2023, 2000.
2. D. H. Bailey et al. The NAS Parallel Benchmarks. Int. J. Supercomput. Appl., 5(3):63–73, 1991.
3. W. E. Nagel et al. VAMPIR: Visualization and analysis of MPI resources. Supercomputer, 12(1):69–80, 1996.
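The step-assignment principles above ("happened-before relationships must be maintained" and "assign the least step possible to each event") amount to a longest-path layering over the happened-before graph. The following is a minimal sketch under that reading, not the implementation from the work above; the `assign_steps` helper and the toy event names are hypothetical, and the send-grouping and aggregation passes are omitted.

```python
def assign_steps(events, preds):
    """Assign each event the least logical step consistent with
    happened-before order: step(e) = 1 + max step over predecessors.
    `preds` maps an event to the events that happened before it
    (program order on each process, plus send -> matching receive)."""
    step = {}

    def visit(e):  # memoized longest-path DFS over the happened-before DAG
        if e not in step:
            p = preds.get(e, [])
            step[e] = 1 + max(visit(q) for q in p) if p else 0
        return step[e]

    for e in events:
        visit(e)
    return step

# Toy trace: p0 sends (s0) then computes (c0);
# p1 receives s0's message (r1) and then sends (s1).
hb = {"c0": ["s0"], "r1": ["s0"], "s1": ["r1"]}
steps = assign_steps(["s0", "c0", "r1", "s1"], hb)
# steps == {"s0": 0, "c0": 1, "r1": 1, "s1": 2}
```

Each event lands on the earliest step its predecessors allow, so independent events (here `c0` and `r1`) share a step, which is what lets the visualization align peers for comparison.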
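The lateness metric from Temporal Metrics follows directly from its definition: excess completion time over the earliest related event at a step. The `differential_lateness` helper below is one plausible reading of "subtracting out propagated lateness" (subtracting the worst lateness among happened-before predecessors); both helpers and the toy trace are illustrative assumptions, not the authors' code.

```python
def lateness(events):
    """Lateness: completion time minus the earliest completion time
    among events at the same logical step.
    `events` is a list of (name, step, completion_time) tuples."""
    earliest = {}
    for _, step, t in events:
        earliest[step] = min(t, earliest.get(step, t))
    return {name: t - earliest[step] for name, step, t in events}

def differential_lateness(late, preds):
    """One reading of differential lateness: lateness remaining after
    subtracting the worst lateness propagated from happened-before
    predecessors (clamped at zero), isolating where delay originates."""
    return {e: max(0, l - max((late[p] for p in preds.get(e, [])), default=0))
            for e, l in late.items()}

# Toy trace: two processes over two steps; b1 inherits most of b0's delay.
trace = [("a0", 0, 10), ("b0", 0, 14), ("a1", 1, 20), ("b1", 1, 25)]
late = lateness(trace)   # {"a0": 0, "b0": 4, "a1": 0, "b1": 5}
diff = differential_lateness(late, {"a1": ["a0"], "b1": ["b0"]})
# diff["b1"] == 1: of b1's 5 units of lateness, only 1 originated at b1.
```

The point of the differential view is the last line: `b1` looks 5 units late in absolute terms, but 4 of those units propagated from `b0`, so the source of the delay is attributed upstream.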