Salishan 2025 Theme
Exascale: Yesterday, Today, and Tomorrow
The envisioned exascale of yesterday is not the exascale of today, nor will it be the exascale of tomorrow. When we embarked in 2008 on the journey toward achieving exaFLOPS, the anticipated technical challenges included energy and power consumption, on-node memory capacity and bandwidth limitations, massive billion-way thread concurrency, and an environment that would be overly complex and lacking in resilience.
Though today’s exascale systems reflect some of these predicted challenges, they have also been shaped by unforeseen events. FLOPS per watt, or more importantly science per operation, has certainly increased over the past few decades, and yet the power required to field leadership-scale systems remains generally out of reach for all but the largest companies or government entities. As compute nodes have become more powerful, the on-node demand for memory capacity and bandwidth from coupled multiphysics simulations has grown, but memory prices have also increased exponentially. Furthermore, the advent of GPU-accelerated architectures has increased the amount of on-node parallelism that must be exposed to make the most efficient use of that processing power. Fifteen years ago, the HPC community didn’t foresee the explosion of machine learning, large language models, and artificial intelligence, let alone their amalgamation with existing modeling and simulation. Nor did we predict the rise of the hyperscalers and the cloud, and the resulting demand for software as a service. Although these disruptors have concretely shaped today’s exascale systems, their continued impacts on the systems of tomorrow are expected to be profound.
Looking ahead, we need to maximize the impact of tomorrow’s exascale systems, given the increasing and evolving computational demands placed on them. Over the course of this week, we will explore the exascale systems of yesterday, today, and tomorrow. Presentations and discussions will cover application drivers, algorithmic optimizations that can best exploit current and future unique hardware features, the latest scientific accelerations enabled by machine learning and artificial intelligence methods, and operational improvements in the system software and security realms that could improve scientific throughput on these machines. Finally, we will conclude with a discussion of how future system architectures could evolve in the post-exascale era.