Cray, an HPE company, held a panel discussion webinar on October 18 to discuss exascale (10^18 floating-point operations per second, get it?) supercomputing. This is definitely not in my area of expertise, but it is certainly interesting.

Following is information I gleaned from links they sent to me. Basically, it is the case for supercomputing: not only the computers themselves, but also the networking to support them.

Today’s science, technology, and big data questions are bigger, more complex, and more urgent than ever. Answering them demands an entirely new approach to computing. Meet the next era of supercomputing. Code-named Shasta, this system is our most significant technology advancement in decades. With it, we’re introducing revolutionary capabilities for revolutionary questions. Shasta is the next era of supercomputing for your next era of science, discovery, and achievement.

WHY SUPERCOMPUTING IS CHANGING

The kinds of questions being asked today have created a sea change in supercomputing. Increasingly, high-performance computing systems need to be able to handle massive converged modeling, simulation, AI, and analytics workloads.

With these needs driving science and technology, the next generation of supercomputing will be characterized by exascale performance, data-centric workloads, and a diversification of processor architectures.

SUPERCOMPUTING REDESIGNED

Shasta is that entirely new design. We’ve created it from the ground up to address today’s diversifying needs.

Built to be data-centric, it runs diverse workloads all at the same time. Hardware and software innovations tackle system bottlenecks, manageability, and job completion issues that emerge or grow when core counts increase, compute node architectures proliferate, and workflows expand to incorporate AI at scale.

It eliminates the distinction between clusters and supercomputers with a single new system architecture, enabling a choice of computational infrastructure without tradeoffs. And it allows for mixing and matching multiple processor and accelerator architectures, with support for our new Cray-designed and developed interconnect we call Slingshot.

EXASCALE-ERA NETWORKING

Slingshot is our new high-speed, purpose-built supercomputing interconnect, and our eighth generation of scalable HPC network. In earlier Cray designs, we pioneered the use of adaptive routing and the design of high-radix switch architectures, and invented a new low-diameter system topology, the dragonfly.
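
To make the dragonfly idea a little more concrete, here is a minimal sketch of how such a topology scales, using the standard formulation from the published dragonfly work rather than Slingshot's actual parameters; the port counts below are hypothetical.

```python
# Minimal sketch of how a dragonfly topology scales (standard formulation from
# the published dragonfly work, not Slingshot's actual numbers).
# Each switch of radix k splits its ports three ways:
#   p     = ports to compute nodes
#   a - 1 = local ports to the other switches in its group
#   h     = global ports to other groups

def dragonfly_scale(p: int, a: int, h: int) -> dict:
    """Return scale figures for a dragonfly built from switches split as (p, a, h)."""
    radix = p + (a - 1) + h    # ports required per switch
    groups = a * h + 1         # max groups with one global link between every pair
    nodes = p * a * groups     # compute nodes at full scale
    return {"switch_radix": radix, "groups": groups, "max_nodes": nodes}

if __name__ == "__main__":
    # Hypothetical balanced configuration (a = 2p = 2h), purely for illustration.
    print(dragonfly_scale(p=8, a=16, h=8))
    # -> {'switch_radix': 31, 'groups': 129, 'max_nodes': 16512}
```

The payoff is the low diameter: with minimal routing, any two nodes are at most three switch-to-switch hops apart (local, then global, then local), which is where the three-hop figure mentioned below comes from.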

Slingshot breaks new ground again. It features Ethernet capability, advanced adaptive routing, first-of-its-kind congestion control, and sophisticated quality-of-service features. Support for both IP-routed and remote memory operations broadens the range of applications beyond traditional modeling and simulation.

Quality-of-service and novel congestion management features limit the impact on critical workloads from other applications, system services, I/O traffic, or co-tenant workloads. Reducing the network diameter from five hops (in the current Cray XC generation) to three reduces cost, latency, and power while improving sustained bandwidth and reliability.
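
As a rough back-of-the-envelope illustration of the latency part of that claim, here is a toy calculation; the per-hop figure is purely hypothetical and only serves to show the effect of hop count.

```python
# Toy illustration: worst-case switch traversal latency scales with network diameter.
# The per-hop latency below is a made-up placeholder, not a Slingshot specification.
PER_HOP_NS = 350  # hypothetical switch hop latency in nanoseconds

for label, hops in (("XC-era (5-hop diameter)", 5), ("Slingshot (3-hop diameter)", 3)):
    print(f"{label}: ~{hops * PER_HOP_NS} ns worst-case switch latency")
```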

FLEXIBILITY AND TCO

As your workloads rapidly evolve, the ability to choose your architecture becomes critical. With Shasta, you can incorporate any silicon processing choice, or a heterogeneous mix, with a single management and application development infrastructure. Flex from single- to multi-socket nodes, GPUs, FPGAs, and other processing options that may emerge, such as AI-specialized accelerators.

Designed for a decade or more of work, Shasta also eliminates the need for frequent, expensive upgrades, giving you exceptionally low total cost of ownership. With its software architecture, you can deploy a workflow and management environment in a single system, regardless of packaging.

Shasta packaging comes in two options: a standard 19” datacenter rack, air- or liquid-cooled, and a high-density, liquid-cooled rack designed to take 64 compute blades with multiple processors per blade.

Additionally, Shasta supports processors drawing well over 500 watts, eliminating the need for forklift upgrades of system infrastructure to accommodate higher-power processors.
