Challenges Moving Toward High Performance Machine Learning

Machine learning has in recent years gone from being data-intensive to being (also) computation-intensive. This means that performance is increasingly important. However, many of the use cases and demands of machine learning are quite different from those of traditional scientific computing, and understanding these differences is important for understanding how ideas from scientific computing and high performance computing can impact machine learning. We'll provide an overview of these issues, including Alchemist (a system for interfacing between Spark and existing MPI libraries, designed to address the performance gap between the two approaches) and recent results in large-scale neural network training. The latter include novel perspectives on using second order optimization methods, non-obvious challenges in large-batch training of large neural networks and related hardware considerations, as well as promising future directions.

Location: Cumberland Amphitheatre
Date: August 28, 2019
Time: 1:45 pm - 2:15 pm
Speaker: Michael Mahoney (UC Berkeley)