NVIDIA unveils Grace Hopper GH200 AI processor to strengthen its leadership in the AI market

NVIDIA today unveiled the Grace Hopper superchip, an advanced artificial intelligence processor designed to extend the company's lead in the rapidly growing AI market. The announcement was made at the SIGGRAPH conference in Los Angeles, Bloomberg reports.

The superchip, called the GH200, pairs a GPU and a CPU with a new type of memory, HBM3e (High Bandwidth Memory 3e), that can feed data at 5 terabytes per second. NVIDIA expects the GH200 to enter production in the second quarter of 2024. The chip is part of a broader set of hardware and software announcements made at the event, where NVIDIA CEO Jensen Huang was a featured speaker.

NVIDIA's success with AI accelerators, chips specialized for the computations behind artificial intelligence software, has helped push the company's market value past $1 trillion this year, making it the world's most valuable chipmaker. The new GH200 signals the company's intent to hold that lead and make it harder for competitors such as AMD and Intel to close the gap.

AMD plans to release two versions of its MI300 accelerator by the end of the year: one a pure GPU and the other a combined CPU-GPU part similar to NVIDIA's superchip. AMD's chips will use a different variant of HBM3 memory.

Huang sees NVIDIA technology as a replacement for traditional data center hardware. He said that an $8 million investment in the new NVIDIA systems can replace $100 million worth of computing power on old hardware while cutting energy consumption by a factor of 20.

NVIDIA's stock has more than tripled this year, lifting the company's market value to roughly $1.1 trillion, the biggest gain of any company in the Philadelphia Stock Exchange Semiconductor Index.

The GH200 superchip is meant to be the heart of a new server design that can access and process massive amounts of data quickly, which matters given how much information artificial intelligence models churn through. Because the chip can hold an entire AI model in its fast memory and load and update it all at once, without falling back on slower forms of memory, it improves energy efficiency and speeds up the whole process.
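As a rough, hypothetical illustration of why that memory bandwidth matters (this is not an NVIDIA benchmark; the model size and the slower-memory figure below are assumptions, only the 5 TB/s number comes from the announcement), the time to stream a model's weights through the processor once scales with model size divided by memory bandwidth:

```python
# Back-of-envelope sketch: time to read a full set of model weights once.
# Assumed example model: ~70B parameters at 16-bit precision, roughly 140 GB.
ASSUMED_MODEL_SIZE_GB = 140
HBM3E_BANDWIDTH_TB_S = 5.0    # bandwidth cited for the GH200's HBM3e
SLOWER_MEMORY_TB_S = 0.5      # rough assumption for conventional server DRAM

def weight_stream_time_ms(size_gb: float, bandwidth_tb_s: float) -> float:
    """Milliseconds to stream `size_gb` of weights at the given bandwidth."""
    return size_gb / (bandwidth_tb_s * 1000) * 1000

print(f"HBM3e:         {weight_stream_time_ms(ASSUMED_MODEL_SIZE_GB, HBM3E_BANDWIDTH_TB_S):.0f} ms per pass")
print(f"Slower memory: {weight_stream_time_ms(ASSUMED_MODEL_SIZE_GB, SLOWER_MEMORY_TB_S):.0f} ms per pass")
```

Under these assumed numbers, one pass over the weights takes about 28 ms from HBM3e versus roughly 280 ms from slower memory, which is the gap the GH200's design is meant to avoid.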

NVIDIA's latest offerings aim to expand the reach of generative AI across industries by making the technology easier to use. The new AI Enterprise software streamlines the training of AI models that generate text, images, and video from simple prompts.

The company also announced a partnership with Hugging Face, a prominent hub for AI models and datasets. Hugging Face will add a training service to its platform built on NVIDIA DGX Cloud, letting users run their training workloads on NVIDIA's servers.

In addition, NVIDIA is integrating generative artificial intelligence into its Omniverse platform, which is designed for metaverse-style virtual environments. This technology helps enterprise customers create digital copies of real-world objects such as factories and vehicles.

To encourage wider adoption of its technology, NVIDIA has endorsed a standard called Universal Scene Description, originally developed by Walt Disney-owned Pixar. NVIDIA has partnered with Pixar, Autodesk, Adobe, and Apple to accelerate its adoption.

On the product front, NVIDIA released three new RTX workstation graphics cards. The $4,000 RTX 5000 is available now and promises to more than double the speed of generative AI and image rendering. The company also introduced new servers based on the L40S graphics chip and a high-end workstation design that uses four RTX 6000 graphics cards.