April 22-25, 2024
Theme: HPC in the Future: "Business-as-Usual Will Not Be Adequate"
The HPC community has grown accustomed to climbing the proverbial mountain that lies right before us. We marched in lockstep toward the Terascale Peak, shifting from largely custom-designed chip architectures to a strategy that leveraged the commodity market as computing became commonplace throughout industry and even in many households. Together, we climbed Mounts Petascale and Exascale by adopting multi-core and GPU architectures that allowed us to field more computationally capable systems within seemingly reasonable power budgets, even after the end of Dennard scaling.
But now, as we stand at the peak of Exascale, we find ourselves looking out over a precipice. In the distance, Zettascale is obscured by the Clouds. The commodity chip market that we have leveraged for the last twenty-plus years has pivoted nearly overnight to follow the demand for architectures suitable for artificial intelligence and machine learning. Yet the questions that can only be answered through modeling and simulation become more complex year after year.
It’s clear that “Business-as-usual will not be adequate,” as the recent National Academies report puts it. The report concludes that we need innovations in computer system design, acquisition, and deployment to meet the future needs of the scientific computing community. We need to foster cutting-edge research in mathematics, computer and data science, and computational science to take advantage of expected disruptions in computing technologies. And we need to grow and develop a next-generation workforce that can manage the growing complexity of modern modeling and simulation on advanced computing technologies.
Over the course of the week, we will focus on capabilities that exist today, and whether and how the HPC community can best leverage them. These include growing cloud resources as well as chip and machine architectures increasingly designed for the machine learning and artificial intelligence communities. We will also focus on capabilities that must be further developed to ensure that our community can continue to address challenging scientific questions. These include understanding when and how AI technologies can be employed within scientific applications to supplant humans in the loop, future architectures that may better address the needs of scientific computing, and the broader software engineering skillset required to field complex multiphysics applications.