The LLNL Near and Long Term Vision for Large-Scale Systems

Lawrence Livermore National Laboratory’s (LLNL’s) supercomputing center, Livermore Computing (LC), is one of the world’s leading large-scale computing sites. This talk will provide a brief overview of LC’s current systems, including details of Sierra, currently the number 2 system on the TOP500 list. It will then detail LC’s anticipated near-term systems, including El Capitan, […]

ORNL’s Frontier Exascale Computer

The U.S. Department of Energy has spent the past decade preparing for the challenges of building an exascale computer in the 2021–2022 timeframe. This talk describes Oak Ridge National Laboratory’s road to exascale and the challenges that face exascale computers. It will describe OLCF’s shift two generations ago to accelerated node computing and the ability […]

Argonne’s Aurora Exascale Computer

Argonne National Laboratory is scheduled to take delivery of an Intel/Cray exascale supercomputer in 2021. This talk describes Argonne’s path through selection and design of the computer, and preparations for early science on the system.

Los Alamos National Laboratory Crossroads Computer

This presentation focuses on the evolution of the Crossroads system design and acquisition over the past several years, along with the direction of the project in the near-term future. The discussion then focuses on how the design principle of application efficiency ties into the overall architectural design and implementation.

Anticipating the European Supercomputing Infrastructure of the Early 2020s

With the creation of the EuroHPC Joint Undertaking (JU), the development of extreme-scale computing systems in Europe is picking up steam. Five petascale and three pre-exascale systems will be deployed in the 2020–2021 timeframe, and two exascale systems are planned by 2023. Additionally, several national systems will be deployed in the […]

Perlmutter – A 2020 Pre-Exascale GPU-Accelerated System for NERSC

The Perlmutter machine will be delivered to NERSC/LBNL in 2020. Perlmutter will be a pre-exascale machine containing a mixture of AMD EPYC CPU-only and NVIDIA Tesla GPU-accelerated nodes. In this talk we will describe the architecture of the machine and the analysis we […]

What The FLOP! Meaningful Metrics for Deep Learning (AI) at Scale

Since Summit, the world’s fastest and smartest supercomputer, came online, there has been a massive push for large-scale AI that can leverage Summit’s unique capabilities. One of the challenges for AI researchers who are relatively new to HPC is how to communicate their research success to the broader HPC community, whose stock-in-trade is FLOPs (floating […]

Bridging the Gap Between Deep Learning Algorithms and Systems

Deep Learning algorithms have enjoyed tremendous success in domains such as Computer Vision and Natural Language Processing. The primary reasons for this success are the availability of large labeled datasets (big data) and the compute capacity provided by large-scale HPC systems (big compute). Yet, there is a widening gap between the requirements of a […]