NVIDIA H100 up close: Hopper and 80 GB of memory in the palm of your hand

So far we had only seen renderings, but now we can admire the NVIDIA H100 and its GH100 GPU in real close-up photos. Patrick Kennedy of ServeTheHome had the opportunity to hold NVIDIA's new accelerator for HPC and AI in his hands, or rather in one hand, thanks to the compact SXM form factor.

At the heart of the NVIDIA H100 is a GPU called Hopper, built on TSMC's custom 4N process (all the details here): a "monster" of 80 billion transistors paired with 80 GB of HBM3 memory on a 5120-bit bus, integrated on the same package thanks to TSMC's CoWoS technology.


The GPU covers an area of 814 mm² and contains 144 Streaming Multiprocessors for a total of 18432 FP32 CUDA cores, although in the model pictured the number of active units drops, for reasons of production yields, to 16896 (132 SMs). Likewise, the six HBM3 stacks physically amount to 96 GB of memory, not all of which is accessible. The whole package has a TDP of about 700 W.

We must not forget that NVIDIA will also offer the new GPU in the traditional add-in-card format: a solution with a PCI Express 5.0 interface, 14592 CUDA cores (114 SMs) and 80 GB of HBM2E memory, with a TDP of 350 W. The NVIDIA H100 will officially debut in the third quarter in various forms and solutions, appearing for example in the fourth generation of DGX systems, the DGX H100.
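
The core counts quoted above follow from Hopper's layout of 128 FP32 CUDA cores per SM, and the bus width from the 1024-bit interface of each HBM stack (both figures from NVIDIA's public Hopper material). The short sketch below is illustrative Python, not vendor code; it simply reproduces that arithmetic for the full GH100 die, the SXM5 part and the PCIe card.

```python
# Illustrative sketch: reproduce the CUDA core and memory bus arithmetic
# quoted in the article. Assumptions: 128 FP32 cores per Hopper SM and a
# 1024-bit interface per HBM stack.

FP32_CORES_PER_SM = 128
BITS_PER_HBM_STACK = 1024

configs = {
    "GH100 (full die)": {"sms": 144, "hbm_stacks": 6},
    "H100 SXM5":        {"sms": 132, "hbm_stacks": 5},
    "H100 PCIe":        {"sms": 114, "hbm_stacks": 5},
}

for name, cfg in configs.items():
    cores = cfg["sms"] * FP32_CORES_PER_SM          # e.g. 132 * 128 = 16896
    bus_bits = cfg["hbm_stacks"] * BITS_PER_HBM_STACK  # e.g. 5 * 1024 = 5120
    print(f"{name}: {cfg['sms']} SMs -> {cores} FP32 CUDA cores, {bus_bits}-bit memory bus")
```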


