Edgascale Computing? Why Exascale Needs an Edge

Supercomputing used to be simpler. Input files for extreme-scale jobs evolved slowly, and filesystems were sluggish. Those days are gone. Today’s extreme-scale platforms are rushing toward GPUs and heterogeneous architectures, reduced-precision arithmetic, and data-driven machine learning. Furthermore, instead of a handful of large computing centers linked by high-speed networking, we have nearly ubiquitous fast networking, and urban areas are rolling out 5G. In our new reality, the number of network-connected devices (sensors, actuators, instruments, computers, and data stores) now substantially exceeds the number of humans on Earth. Billions of things that sense, think, and act are connected to a planet-spanning network of cloud and high-performance computing (HPC) centers that contain more computers than the entire Internet did just a few years ago. This exciting new landscape is transforming science and extreme-scale computation. Parallel computation and advanced architectures optimized for machine learning are being pushed to the edge, where massive data streams can be analyzed and reduced in situ before moving to HPC systems.
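To make the in-situ reduction idea concrete, here is a minimal Python sketch of the pattern: an edge node summarizes each window of a synthetic sensor stream and forwards only the windows that look interesting, so the HPC center receives a small fraction of the raw data. The window size, variance threshold, and the forward_to_hpc() transport are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of in-situ edge reduction: summarize raw sensor windows
# locally and forward only high-activity summaries toward an HPC center.
# WINDOW, THRESHOLD, and forward_to_hpc() are hypothetical placeholders.
import random
import statistics

WINDOW = 256       # samples per analysis window (assumed)
THRESHOLD = 1.5    # forward only windows whose variance suggests activity

def sensor_stream(n_windows):
    """Simulate a high-rate sensor: mostly quiet noise, occasional bursts."""
    for _ in range(n_windows):
        scale = 5.0 if random.random() < 0.1 else 1.0
        yield [random.gauss(0.0, scale) for _ in range(WINDOW)]

def reduce_window(window):
    """Summarize a raw window into a few statistics (the 'reduction')."""
    return {
        "mean": statistics.fmean(window),
        "stdev": statistics.stdev(window),
        "peak": max(abs(x) for x in window),
    }

def forward_to_hpc(summary):
    """Placeholder for shipping reduced data to an HPC ingest service."""
    print("forwarding:", summary)

kept = total = 0
for window in sensor_stream(100):
    total += 1
    summary = reduce_window(window)
    if summary["stdev"] > THRESHOLD:   # in-situ triage: drop quiet windows
        kept += 1
        forward_to_hpc(summary)

print(f"kept {kept}/{total} windows; raw samples dropped at the edge: "
      f"{(total - kept) * WINDOW}")
```

In a real deployment the summaries would be batched over a network link to an ingest service, and the triage rule might be a learned model rather than a fixed threshold, in keeping with the talk's theme of machine learning at the edge.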

Likewise, researchers are using simulation and modeling to predict how instruments and sensors should be configured, and how data should be reduced, in order to observe key phenomena and collect the highest-value data. Edge and exascale are now linked in a new computing continuum that involves analyzing data in situ and using HPC to model, predict, and learn. This presentation will explore current successes in edge computing, its enormous potential for the future, and the challenges raised by rapidly emerging edge technology.

Location: Cumberland Amphitheatre
Date: August 29, 2019
Time: 8:45 am - 9:15 am