NVIDIA’s New Hopper Architecture with Breakthrough Features Will Power Next-Generation AI Data Centers


At GTC 2022, CEO Jensen Huang finally unveiled the successor to the Ampere architecture. Please welcome: Hopper.

With all the benefits AI computing can deliver, the data center industry is inevitably shifting toward large-scale, machine-learning-powered infrastructure to keep up with modern data processing demands. The Hopper architecture, debuting in its very first product, the NVIDIA H100 GPU, is expected to deliver some of the biggest leaps the industry has seen yet.

For starters, the physical specs are already off the charts: 80 billion transistors in each full GPU, built on TSMC's 4N (4nm-class) process, with nearly 5TB/s of external connectivity and support for PCIe Gen 5. Not only that, it's also the first GPU to use HBM3 memory, good for 3TB/s of memory bandwidth; for scale, NVIDIA claims just 20 H100 GPUs could sustain the equivalent of the entire world's Internet traffic. Raw throughput aside, the H100's new dedicated Transformer Engine will be the frontrunner in pushing the capabilities of natural language processing, accelerating these networks up to 6x over the previous generation.
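To make the reduced-precision idea behind the Transformer Engine concrete, here is a minimal training sketch in stock PyTorch using FP16 autocast as a stand-in. Hopper's actual FP8 path goes through NVIDIA's separate Transformer Engine library, which is not shown here, and the model, tensor shapes, and loss below are hypothetical placeholders.

```python
# Minimal mixed-precision training sketch (assumes a CUDA-capable GPU).
# FP16 autocast illustrates the general technique; FP8 on Hopper is exposed
# via NVIDIA's Transformer Engine library, not plain PyTorch.
import torch
import torch.nn as nn

model = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # loss scaling keeps small gradients alive

x = torch.randn(32, 16, 512, device="cuda")  # dummy (batch, seq, features) input

for _ in range(10):
    optimizer.zero_grad()
    # Matmul-heavy layers run in half precision inside this context.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        out = model(x)
        loss = out.float().pow(2).mean()  # dummy loss for illustration
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```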

In terms of virtualization, the new GPU also does a much better job: thanks to the secure 2nd-generation Multi-Instance GPU (MIG) technology handling the partitioning, it can now be split into 7 fully isolated instances, each handling a completely different workload at once (see the sketch below). And with the world increasingly concerned about cybersecurity, the H100 follows suit while revolutionizing the industry, protecting AI models in addition to the data so that the valuable software behind the calculations isn't easily exposed. Additionally, since the H100 is built for large-scale deployment, 4th Gen NVLink is also part of the equation: combined with NVIDIA HDR Quantum InfiniBand, it can connect up to 256 of these GPUs at 9x higher bandwidth than the previous generation.
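For a sense of how MIG shows up to software, below is a read-only sketch that queries a GPU's MIG state through NVML's Python bindings (pynvml). The device index and the presence of a MIG-capable GPU are assumptions; creating the partitions themselves is an administrative step done elsewhere (e.g. via nvidia-smi).

```python
# Read-only MIG query sketch via pynvml (pip install nvidia-ml-py).
# Assumes GPU 0 is MIG-capable, e.g. an H100; partitioning itself is
# configured out-of-band by an administrator.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

current, pending = pynvml.nvmlDeviceGetMigMode(handle)
print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

if current == pynvml.NVML_DEVICE_MIG_ENABLE:
    # Walk the (up to 7) fully isolated instances exposed as MIG devices.
    count = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
    for i in range(count):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
        except pynvml.NVMLError:
            continue  # this slot holds no MIG instance
        print(i, pynvml.nvmlDeviceGetUUID(mig))

pynvml.nvmlShutdown()
```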

NVIDIA DGX H100 Systems

Let’s take a look at another aspect of operating the H100: the new DPX instruction set, which accelerates dynamic programming, a family of algorithms used for things like route optimization for autonomous robot fleets and the Smith-Waterman algorithm used in genomics, all beneficiaries of NVIDIA’s AI inference powered by the Hopper architecture (a plain-Python version of that recurrence follows this paragraph). Optimized for industry deployment, the 4th-generation DGX system, the DGX H100, features 8 of these GPUs for an astounding 32 petaflops of AI performance at the new FP8 precision. Cloud service providers including Alibaba Cloud, Amazon Web Services, Baidu AI Cloud, Google Cloud, Microsoft Azure, Oracle Cloud, and Tencent Cloud are all ready to offer H100-based instances.
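To show the kind of computation DPX targets, here is the Smith-Waterman local-alignment algorithm mentioned above as a plain-Python reference: a classic dynamic-programming recurrence filling a 2-D score matrix. The scoring constants are illustrative, not NVIDIA's.

```python
# Smith-Waterman local alignment: the dynamic-programming pattern that
# Hopper's DPX instructions accelerate in hardware. Scores are illustrative.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # score matrix, first row/col stay 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores are clamped at 0, so bad regions reset.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best  # score of the best local alignment

print(smith_waterman("GATTACA", "GCATGCU"))
```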

With all that said, the rest of us will just have to wait for the release of the RTX 40 series sporting the rumored “Ada Lovelace” GPU, or, if you’re on a budget, grab an Ampere card while there’s still time.

