On-the-fly clustering for exascale molecular dynamics simulations. - presented by Dr Alizée Dubois and Thierry Carrard

Slide at 20:33
DOMAIN STITCHING: CONNECTING COMPONENTS ACROSS BOUNDARIES
[Diagram: two neighbouring subdomains, MPI m and MPI n, each bordered by a ghost layer. In an "MPI all to one" step, the boundary messages of MPI m and MPI n are gathered to a single rank; an "MPI one to all" step then sends the result back to MPI m and MPI n.]
Summary (AI generated)

Connections between labels on either side of a domain boundary (for example, label 36 and its counterpart) are established through a graph built across the MPI ranks. Each MPI rank performs local connected-component labeling (CCL) at the boundaries of its subdomain, using ghost layers. A ghost layer is the first layer of voxels just outside the MPI domain; it allows a local 2D CCL to be run between the rank's own boundary layer and the ghost layer.
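As a concrete illustration, here is a minimal Python/NumPy sketch (an assumption, not the presenters' implementation) of how a ghost layer exposes cross-boundary connections: the rank compares its last owned layer of voxel labels against the ghost layer received from its neighbour and records every pair of labels that touch.

```python
import numpy as np

def boundary_pairs(owned_row: np.ndarray, ghost_row: np.ndarray) -> set:
    """Scan the last owned voxel row against the neighbouring rank's ghost
    row and collect (local_label, remote_label) pairs that touch across
    the boundary; label 0 means an empty voxel."""
    pairs = set()
    for local, remote in zip(owned_row, ghost_row):
        if local != 0 and remote != 0:
            pairs.add((int(local), int(remote)))
    return pairs

# Example mirroring the talk: labels 36 and 39 touch label 25 across
# the boundary, so the rank records the pairs 36-25 and 39-25.
owned = np.array([0, 36, 36, 0, 39])
ghost = np.array([0, 25, 0, 0, 25])
print(boundary_pairs(owned, ghost))  # {(36, 25), (39, 25)}
```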

For instance, at its boundary, MPI rank m sends a message to the master rank containing the identifiers of the connected cells both inside and outside its spatial zone. If label 36 is connected to label 25, rank m reports this connection as "36-25". It also knows that label 25 is connected to label 3, so it reports this link to the root of the connected component as well. The process is repeated for the other labels, for example 39 reporting "39-25" and then "25-43".
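A minimal mpi4py sketch of this all-to-one reporting step, reusing the labels from the example above; the message layout and variable names are assumptions, not the presented code:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Hypothetical local knowledge on one rank: the boundary scan found the
# pairs 36-25 and 39-25, and the local CCL knows that label 25's root
# inside this subdomain is label 3.
boundary = [(36, 25), (39, 25)]
local_roots = {25: 3}  # label -> root of its local connected component

# The message mirrors the slide's example: "36-25", "39-25", then "25-3".
message = boundary + [(label, root) for label, root in local_roots.items()]

gathered = comm.gather(message, root=0)  # "MPI all to one"
if comm.Get_rank() == 0:
    print("edges received from all ranks:", gathered)
```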

Subsequently, the master rank consolidates this information from all the ranks, reducing it to a single message that reflects the global connectivity. The local graphs of the individual ranks are thus summarized into one global graph, which is then sent back to every rank (the "MPI one to all" step on the slide).
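The reduction itself can be pictured as merging all the reported pairs into one union-find structure. The sketch below (again an assumption about the mechanics, not the presented code) builds the global label-to-root map on rank 0 and broadcasts it back to all ranks:

```python
from mpi4py import MPI

def find(parent: dict, x: int) -> int:
    """Return the root label of x, compressing the path on the way."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent.setdefault(parent[x], parent[x])
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent: dict, a: int, b: int) -> None:
    """Merge the components of labels a and b, keeping the smaller root."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)

comm = MPI.COMM_WORLD
global_map = None
if comm.Get_rank() == 0:
    # Hypothetical edges gathered from all ranks in the previous step.
    edges = [(36, 25), (25, 3), (39, 25), (25, 43)]
    parent = {}
    for a, b in edges:
        union(parent, a, b)
    # Flatten into label -> global root: the single reduced message.
    global_map = {x: find(parent, x) for x in list(parent)}

global_map = comm.bcast(global_map, root=0)  # "MPI one to all"
```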

In summary, the algorithm involves multiple steps in which all workers are engaged. The left side of the slide outlines these steps, while the right side shows the number of active workers over time. Initially, all workers are busy; however, a bottleneck can appear during the reduction of boundary information, potentially causing congestion in the MPI traffic. It is therefore important to assess whether this limits the scalability of the algorithm.