At SC23—one of the premier global events for high-performance computing (HPC)—Intel took center stage by highlighting its newest technologies and partnerships to accelerate HPC performance and artificial intelligence (AI) workloads. Intel showcased its progress in expediting scientific research with global supercomputers through various initiatives and collaborations. These updates demonstrated Intel's continued dedication to empowering innovation in the HPC and AI community, where the new wave of supercomputers is delivering unprecedented results in scientific research and beyond.
Some key announcements included performance updates for HPC and AI workloads across Intel Data Center GPU Max Series, Intel Gaudi 2 AI accelerators, and Intel Xeon processors, as well as its work on the Aurora supercomputer at Argonne National Laboratory.
Intel Data Center GPU Max Series
The Intel Data Center GPU Max Series is Intel's highest-density, highest-performance general-purpose discrete GPU, with over 100 compute units and high memory bandwidth to support a wide range of applications. Built on the Xe HPC microarchitecture, it is designed for the highly parallel computing models at the heart of AI and HPC workloads, making it an excellent fit for scientific research and other applications that demand massive computational power.
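To make "highly parallel computing models" concrete: GPUs of this class accelerate workloads where the same operation is applied independently across millions of data elements. The sketch below is our illustration, not Intel code (Intel GPUs are typically programmed through oneAPI toolkits); here NumPy's vectorization stands in for the GPU's hardware parallelism:

```python
import numpy as np

# Data-parallel pattern: one operation applied elementwise to a large array.
# On a GPU this maps onto thousands of hardware threads; NumPy dispatches it
# as a single vectorized kernel on the CPU.
n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
c = 2.0 * a + b  # SAXPY-style elementwise compute: c[i] = 2*a[i] + b[i]
print(c[:3])
```

Each output element depends only on the inputs at the same index, which is why such workloads scale almost linearly with the number of compute units.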
Intel made several announcements about its Data Center GPU Max Series at SC23, including:
- The Max Series 1550 outperforms the Nvidia H100 PCIe card by an average of 36% on diverse HPC workloads
- It delivers improved support for AI models, including large language models (LLMs) such as GPT-J and Llama 2
- Four Max 1550 GPUs delivered 26% higher warm Greeks 10-100k-1260 performance and 4.3X higher space efficiency than eight Nvidia H100 PCIe GPUs
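The "warm Greeks" figure above comes from financial risk analytics: Greeks are option-price sensitivities, typically estimated by Monte Carlo simulation, an embarrassingly parallel workload that maps well onto GPUs. A minimal single-threaded sketch of the idea (ours, not the benchmark's code; all parameter values are illustrative):

```python
import math
import random

def mc_call_price(spot, strike, rate, vol, t, n_paths, rng):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    payoff = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = spot * math.exp((rate - 0.5 * vol**2) * t + vol * math.sqrt(t) * z)
        payoff += max(s_t - strike, 0.0)
    return math.exp(-rate * t) * payoff / n_paths

# Delta (sensitivity to spot) via central finite differences, using the same
# random seed for both bumps ("common random numbers") to reduce noise.
h = 0.01
up = mc_call_price(100 + h, 100, 0.05, 0.2, 1.0, 50_000, random.Random(42))
dn = mc_call_price(100 - h, 100, 0.05, 0.2, 1.0, 50_000, random.Random(42))
delta = (up - dn) / (2 * h)
print(round(delta, 2))
```

Every simulated path is independent, so production Greeks engines distribute millions of paths across GPU compute units rather than looping as above.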
These technical capabilities and specifications highlight how the Intel Data Center GPU Max Series delivers extraordinary performance for HPC and AI workloads.
Intel Gaudi 2 AI Accelerators
The Intel Gaudi 2 AI accelerator is a high-performance deep learning processor for training and inference. Architecturally, it is built on a 7nm process and features heterogeneous compute with 24 tensor processor cores and dual matrix multiplication engines, along with 96 GB of onboard HBM2E memory and 48 MB of SRAM. For scalability, it supports substantial, flexible scale-out through 24 100 Gigabit Ethernet ports integrated into every accelerator. On the MLPerf Training v3.1 GPT-3 benchmark, Gaudi 2 demonstrated a 2X performance leap over its previous submission.
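The scale-out arithmetic behind the networking claim is straightforward: 24 ports at 100 Gb/s each gives 2.4 Tb/s of aggregate Ethernet bandwidth per accelerator. A quick check (our arithmetic, not an Intel figure):

```python
# Aggregate scale-out bandwidth of one Gaudi 2 accelerator:
# 24 integrated 100 Gigabit Ethernet ports.
ports = 24
gbits_per_port = 100
total_gbits = ports * gbits_per_port  # aggregate bandwidth in Gb/s
total_gbytes = total_gbits / 8        # same figure in GB/s
print(total_gbits, "Gb/s =", total_gbytes, "GB/s per accelerator")
```

Integrating the NICs on-die is what lets every accelerator contribute this bandwidth directly to the cluster fabric without separate host adapters.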
At SC23, Intel also previewed its Gaudi 3 AI accelerator, planned for release in 2024. Gaudi 3 is expected to offer 4X the performance of its predecessor when processing bfloat16 data (a compact floating-point format widely used in machine learning), along with twice the networking capacity and 1.5X more onboard memory for storing AI models.
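bfloat16 keeps float32's full 8-bit exponent but truncates the mantissa from 23 bits to 7, trading precision for dynamic range while halving memory traffic. A minimal sketch of the conversion (ours; real hardware typically rounds rather than simply truncating):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE 754 float32 to bfloat16 by keeping the top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    f32_bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return f32_bits >> 16

def bfloat16_bits_to_float(bits: int) -> float:
    """Re-expand bfloat16 bits to a float by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

value = 3.14159
approx = bfloat16_bits_to_float(float_to_bfloat16_bits(value))
# Only ~2-3 decimal digits survive, but the exponent range matches float32,
# which is why bfloat16 is popular for deep-learning training.
print(approx)  # 3.140625
```

Because the exponent field is unchanged, float32-to-bfloat16 conversion never overflows or underflows, unlike conversion to IEEE float16.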
Intel Xeon Scalable Processors
Intel also had much to share at SC23 about its Intel Xeon Scalable processors, which continue to evolve to meet the demands of diverse workloads through enhanced per-core performance and unrivaled AI performance. It showcased the Intel Xeon CPU Max Series, the only x86 processor with high-bandwidth memory, and its ability to deliver an average of 19% more performance than the AMD EPYC "Genoa" processor.
Intel gave glimpses into its next-generation Xeon processors as well. Its 5th Gen Intel Xeon Scalable "Emerald Rapids" processors are set to offer higher performance and performance-per-watt than the previous generation. Intel also provided performance projections for its 2024 "Granite Rapids" Xeon processors, which will offer more cores, higher memory bandwidth, and enhanced AI acceleration. Compared with its predecessor, Granite Rapids is projected to deliver a 2-3X improvement in AI workloads, a 2.8X boost in memory throughput, and a 2.9X improvement in the DeepMD+LAMMPS AI inference workload.
Advancing Scientific Research with the Aurora Supercomputer
Intel is committed to advancing scientific research and performance in supercomputing, particularly through its developments in HPC and AI technologies. In partnership with Argonne National Laboratory, Intel supplies the Max Series GPU for the Aurora supercomputer, an exascale system capable of more than a quintillion calculations per second. The Max Series GPU's unique architecture and the Aurora system's capabilities enable it to run a 1 trillion-parameter GPT-3-class LLM, a foundational AI model for science.
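Some back-of-envelope arithmetic (ours, not a figure from Intel or Argonne) shows why a trillion-parameter model demands a machine of Aurora's scale: storing the weights alone in bfloat16 takes roughly 2 TB, far beyond any single accelerator's onboard memory, so the model must be sharded across many GPUs.

```python
# Rough memory footprint of a 1-trillion-parameter model.
# Assumption (ours): weights stored in bfloat16 (2 bytes per parameter),
# ignoring optimizer state, gradients, and activations, which add much more.
params = 1_000_000_000_000
bytes_per_param = 2  # bfloat16
total_bytes = params * bytes_per_param
total_tib = total_bytes / 2**40
print(f"{total_tib:.2f} TiB just for the weights")
```

Training adds optimizer state and activations on top of this, multiplying the footprint several times over; that is what makes system-level memory capacity and interconnect bandwidth, not just raw FLOPS, the defining constraints.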
Bring your HPC solution to market with Intel and UNICOM Engineering
As an Intel Titanium Level OEM partner, UNICOM Engineering has deep expertise in bringing innovative HPC and AI solutions to market, equipped with the latest technologies for breakthrough performance. Our experienced team is ready to assist with designing your solution and ensuring it is deployed on the optimal hardware to meet your needs.
To learn more about how we can help you bring your HPC or AI solution to market, visit our website to schedule a consultation.