NVIDIA announces H200 AI accelerator and Jupiter supercomputer

At SC23, NVIDIA announced the H200 accelerator with HBM3E memory and the Jupiter supercomputer, both scheduled for 2024, reports AnandTech. The H200, an updated version of the H100 accelerator, features faster and larger HBM3E memory, which improves its performance in memory-constrained workloads. The upgrade is especially valuable for generative AI, where the largest language models already saturate, and often exceed, the H100's 80 GB of memory.

The H200 is based on the same GH100 GPU as the original H100, but enables the sixth HBM stack that is left disabled on the H100. This change raises the memory capacity from 80 GB to 141 GB and the memory bandwidth from 3.35 TB/s to an expected 4.8 TB/s. The H200's HBM3E will run at approximately 6.5 Gbps per pin, about a 25% increase over the H100's 5.3 Gbps HBM3 memory.
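As a rough sanity check on those figures, the sketch below multiplies an assumed 6144-bit aggregate bus (six 1024-bit HBM3E stacks) by the quoted per-pin rate; the bus width is our assumption, not a detail from the announcement.

```python
# Back-of-envelope check of the H200 memory figures quoted above.
# Assumes six active HBM3E stacks with a 1024-bit interface each
# (6144-bit total bus), which the article does not state explicitly.

stacks = 6
bus_width_bits = stacks * 1024        # 6144-bit aggregate interface (assumed)
pin_speed_gbps = 6.5                  # ~6.5 Gbps per pin, as quoted

bandwidth_gbs = bus_width_bits * pin_speed_gbps / 8   # bits -> bytes
print(f"Implied bandwidth: {bandwidth_gbs / 1000:.2f} TB/s")   # ~4.99 TB/s

# The official 4.8 TB/s figure implies an effective pin speed of roughly
# 4800 * 8 / 6144 ≈ 6.25 Gbps, so "approximately 6.5 Gbps" reads as a
# rounded per-pin rate rather than the exact effective speed.
```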

NVIDIA also announced the HGX H200 platform, an updated version of the HGX H100 carrier board fitted with the new accelerator. This platform serves as the foundation of the NVIDIA H100/H200 family, allowing OEMs to build customized high-performance servers around it. The HGX H200 boards are cross-compatible with HGX H100 boards, giving server builders a straightforward upgrade path.

In addition, NVIDIA introduced the Quad GH200, a board that combines four Grace Hopper GH200 nodes as a building block for larger systems. Each Quad GH200 provides 288 Arm CPU cores and 2.3 TB of high-speed memory in total.
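Those totals break down across the four superchips roughly as in the sketch below; the per-superchip split (72 Grace cores, 480 GB LPDDR5X plus 96 GB HBM3) is assumed from standard GH200 specifications rather than stated in the announcement.

```python
# Approximate breakdown of the Quad GH200 totals quoted above.
# Per-superchip figures are assumptions based on published GH200 specs.

superchips = 4
grace_cores = 72                      # Arm Neoverse V2 cores per Grace CPU
lpddr5x_gb, hbm_gb = 480, 96          # assumed memory per superchip

print(superchips * grace_cores)                       # 288 CPU cores
print(superchips * (lpddr5x_gb + hbm_gb) / 1000)      # ~2.3 TB of memory
```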

The Jupiter supercomputer, contracted to Eviden and ParTec, will be built from 23,762 GH200 nodes and is expected to be the largest Hopper-based supercomputer to date. It is slated to deliver 93 EFLOPS of low-precision performance for AI and more than 1 EFLOPS of high-precision performance for traditional HPC workloads, while drawing 18.2 megawatts of power. Installation is scheduled for 2024 at the Jülich Research Center in Germany.
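The headline AI number lines up with a simple per-node estimate, shown below; the assumed per-GH200 low-precision rate (~3.96 PFLOPS FP8 with sparsity) comes from published H100 specifications, not from the Jupiter announcement itself.

```python
# Rough per-node check of the Jupiter figures quoted above.

nodes = 23_762
fp8_pflops_per_gh200 = 3.96      # assumed FP8-with-sparsity rate per superchip

total_eflops = nodes * fp8_pflops_per_gh200 / 1000
print(f"Estimated AI performance: {total_eflops:.0f} EFLOPS")   # ~94 EFLOPS

power_mw = 18.2
print(f"Nodes per megawatt: {nodes / power_mw:.0f}")            # ~1306
```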