Published May 29, 2022 | Version v2.0-isc22

Efficient Distributed GPU Programming for Exascale

  • 1. Barcelona Supercomputing Center
  • 2. Jülich Supercomputing Centre
  • 3. NVIDIA
  • 4. FernUni Hagen

Description

Over the past years, GPUs have become ubiquitous in HPC installations around the world. Today, they provide the majority of the compute performance of some of the largest supercomputers (e.g., Summit, Sierra, JUWELS Booster). This trend continues in the pre-exascale and exascale systems (LUMI, Leonardo, Perlmutter, Frontier): GPUs are the core computing devices chosen to enter this next era of HPC.

To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need the proper skills and tools to understand, manage, and optimize distributed GPU applications. In this tutorial, participants learn techniques to efficiently program large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained in detail, and advanced tuning techniques and complementary programming models like NCCL and NVSHMEM are presented as well. Analysis tools are shown and used to motivate and implement performance optimizations. The tutorial combines lectures with hands-on exercises on Europe's fastest supercomputer, JUWELS Booster with NVIDIA GPUs, for interactive learning and discovery.
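To illustrate the core pattern behind multi-GPU MPI programming covered in the tutorial, here is a minimal sketch of a halo exchange using CUDA-aware MPI, where device pointers are passed directly to MPI calls. All names and sizes below are illustrative assumptions, not taken from the tutorial materials; running it requires an MPI build with CUDA support and one GPU per rank:

```c
// Sketch: halo exchange with CUDA-aware MPI (assumes a CUDA-aware MPI build).
// Compile with e.g. `mpicc halo.c -lcudart`; names here are illustrative.
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Simplified one-GPU-per-rank mapping; real codes map by node-local rank.
    cudaSetDevice(rank);

    const int N = 1 << 20;              // interior points per rank (assumed)
    double *d_field;                    // device array with one halo cell per side
    cudaMalloc(&d_field, (N + 2) * sizeof(double));

    int up   = (rank + 1) % size;       // periodic neighbors (illustrative)
    int down = (rank - 1 + size) % size;

    // With CUDA-aware MPI, device pointers go straight into MPI calls:
    // send the last interior element up, receive the lower halo from below.
    MPI_Sendrecv(d_field + N, 1, MPI_DOUBLE, up,   0,
                 d_field,     1, MPI_DOUBLE, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // ... symmetric exchange in the other direction, then launch the
    // stencil kernel on the now-updated halos.

    cudaFree(d_field);
    MPI_Finalize();
    return 0;
}
```

Without CUDA-aware MPI, each exchange would have to be staged through host buffers with explicit `cudaMemcpy` calls; removing that extra copy is one of the optimizations this kind of tutorial motivates with profiling tools.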

Notes

Slides and exercises of tutorial presented virtually at ISC22 (ISC High Performance 2022); https://app.swapcard.com/widget/event/isc-high-performance-2022/planning/UGxhbm5pbmdfODYxMTQ2

Files

FZJ-JSC/tutorial-multi-gpu-v2.0-isc22.zip (35.2 MB)
md5:378918ec69e7d079cb7653f86fec771e