Tips on Choosing InfiniBand Products for HPC Computing

Posted on Dec 30, 2023

During Computex 2023, NVIDIA unveiled a diverse array of cutting-edge products, encompassing advanced chips, supercomputing architectures, and sophisticated switches. Of particular note is the NVIDIA Helios supercomputer: built on Quantum-2 InfiniBand networking, it will interconnect four DGX GH200 systems, significantly improving the efficiency of large-scale model training.

Many indicators suggest a decisive shift in data centers toward accelerated computing, a momentum propelled by AI-generated content (AIGC). In response to the escalating demands of high-performance computing (HPC) and large-scale infrastructure, there is a clear need for faster interconnects and more intelligent network solutions. In this landscape, InfiniBand products have become a focal point of industry attention, as they directly address these requirements.

Basics of InfiniBand

InfiniBand is a high-speed, low-latency interconnect technology primarily used in data centers and high-performance computing (HPC) environments. It provides a high-performance fabric for connecting servers, storage devices, and other network resources within a cluster or data center. The emergence of InfiniBand technology is closely tied to the significant network latency and additional operating system overhead associated with traditional TCP/IP protocols.

The traditional TCP protocol is a widely adopted transport protocol, used in everything from everyday appliances such as refrigerators to supercomputers. However, this ubiquity comes at a substantial cost: TCP is complex, carries an extensive codebase with numerous exception paths, and is difficult to offload to hardware.

In contrast, InfiniBand employs a credit-based flow control mechanism that preserves connection integrity and minimizes packet loss. A sender transmits data only when the receiving buffer has space available; once the receiver drains its buffer, it returns credits signaling that space is free again. This eliminates the retransmission delays caused by packets dropped at a full receive buffer, significantly boosting efficiency and overall performance.
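This credit exchange can be sketched in a few lines of Python. This is a toy model for illustration only; real InfiniBand hardware tracks credits per virtual lane in silicon.

```python
from collections import deque

class CreditLink:
    """Toy model of InfiniBand-style credit-based flow control.

    The receiver grants one credit per free buffer slot; the sender
    transmits only while it holds credits, so packets are never
    dropped for lack of receive buffer space.
    """

    def __init__(self, buffer_slots):
        self.credits = buffer_slots      # credits advertised by receiver
        self.rx_buffer = deque()
        self.delivered = []

    def send(self, packet):
        if self.credits == 0:
            return False                 # sender must wait; nothing is lost
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def receiver_consume(self):
        if self.rx_buffer:
            self.delivered.append(self.rx_buffer.popleft())
            self.credits += 1            # freed slot returns a credit

link = CreditLink(buffer_slots=2)
assert link.send("p1") and link.send("p2")
assert not link.send("p3")               # no credits: sender pauses, no drop
link.receiver_consume()                  # receiver frees a slot
assert link.send("p3")                   # transmission resumes
```

Contrast this with lossy Ethernet behavior, where a full buffer would drop the packet and force a retransmission later.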

InfiniBand in the Market

InfiniBand technology is developed under the supervision of the InfiniBand Trade Association (IBTA), which is specifically responsible for maintaining and promoting InfiniBand standards. Additionally, the IBTA ensures compliance and conducts interoperability testing for commercial InfiniBand products. Among the nine main directors of the InfiniBand Trade Association, only two companies—Mellanox and Emulex—are dedicated to InfiniBand. Emulex, due to poor business performance, was acquired by Avago in 2015. Currently, Mellanox dominates the InfiniBand market, with the number of cluster deployments using its products far surpassing those of its competitors.

Key Advantage of InfiniBand

Overall, InfiniBand technology features the following advantages:

  • High speed and scalability

  • Low latency

  • Low power consumption

For more information about InfiniBand, please refer to Getting to Know About InfiniBand.

InfiniBand in HPC Networking

In the field of High-Performance Computing (HPC), high-speed interconnect networks (HSI) play a crucial role in system performance and efficiency. Among these, InfiniBand technology has become a key component widely employed in HPC, thanks to its outstanding performance. As one of the fastest-growing HSI technologies, InfiniBand offers per-port bandwidth of up to 400Gbps and point-to-point latency below 0.6 microseconds, providing robust support for the construction of high-performance computing clusters.

With the high-speed networking capabilities of InfiniBand, HPC systems can effectively combine multiple servers, achieving linear performance scalability. This technology plays a pivotal role in the development of high-performance computing clusters, particularly in the construction of supercomputers. Enterprises, as well as large or super-sized data centers, benefit significantly from its high reliability, availability, scalability, and superior performance. Therefore, the importance of InfiniBand technology in the HPC domain is not only reflected in enhancing the performance of computing clusters but also in providing critical support for data centers of varying scales, driving the overall development of the HPC ecosystem.

InfiniBand Product Sellers in the Market

Mellanox Technologies (Acquired by NVIDIA® Networking)

Mellanox, a leading player in the InfiniBand (IB) field, was acquired by NVIDIA in April 2020. The official platform for purchasing Mellanox products is the NVIDIA Networking Store. This store is efficient and reliable, offering a wide range of connectors. However, some products may not be directly available on the official website. In cases where products are not available on the official site, customers have the option to purchase from NVIDIA partners.

NVIDIA® Partner Network

NVIDIA's partners are key providers of the latest market solutions and supplies, including IB cables and transceivers, which are distributed globally through NVIDIA's network of authorized distributors and dealers. Information about these distributors/dealers can be found on the official NVIDIA website. Thanks to the close collaboration between distributors/partners and NVIDIA, challenges such as connector shortages, tight market supply, and long delivery cycles can be mitigated.


FS is an elite partner of NVIDIA® and offers a rich variety of InfiniBand products on its official website, including NVIDIA® InfiniBand switches, InfiniBand modules, InfiniBand cables, and NVIDIA® InfiniBand adapters. The FS website maintains ample stock of InfiniBand products and ensures swift delivery. If you wish to purchase InfiniBand products or obtain InfiniBand solutions, you can contact FS for assistance.

Tips for Choosing InfiniBand Products

InfiniBand products play a crucial role in high-performance computing data centers, and selecting the right products is paramount for operational success. The comprehensive InfiniBand system includes InfiniBand Switches, InfiniBand Adapters, InfiniBand LongHaul, InfiniBand Gateway to Ethernet, InfiniBand Cables and Transceivers, InfiniBand Telemetry and Software Management, and InfiniBand Acceleration Software.

Choosing the appropriate InfiniBand products is critical for high-performance computing data centers. Considerations include bandwidth and distance requirements, connector types, budget, compatibility, reliability, and future needs, all of which factor into selecting suitable IB products.

Concerning InfiniBand network interconnect products:

  • DAC high-speed copper cables offer an economical solution for short-range, high-speed interconnects.

  • AOC active cables utilize optical technology for longer-distance data transmission.

  • Optical modules are commonly used for long-distance, high-speed interconnects.

Understanding different product categories, speed rates, and package modules aids in making informed decisions, while choosing the right supplier ensures the receipt of high-quality InfiniBand products that align with performance and budget requirements.
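The distance-driven choice among these three categories can be captured in a small helper. The thresholds below are illustrative rule-of-thumb assumptions, not vendor specifications; always check the datasheet for your specific speed and part number.

```python
def suggest_interconnect(distance_m):
    """Rule-of-thumb product category for an IB link of a given length.

    Illustrative thresholds: passive DAC copper is typically limited
    to a few meters at high speeds, AOCs reach tens of meters, and
    optical transceivers with structured fiber cover longer runs.
    """
    if distance_m <= 3:
        return "DAC copper cable"        # cheapest, lowest power
    if distance_m <= 100:
        return "AOC active optical cable"
    return "Optical transceiver + fiber"

assert suggest_interconnect(1) == "DAC copper cable"
assert suggest_interconnect(30) == "AOC active optical cable"
assert suggest_interconnect(500) == "Optical transceiver + fiber"
```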

InfiniBand or Ethernet: InfiniBand is Better for HPC Computing

Some users still wonder whether to use InfiniBand or Ethernet for HPC workloads. For high-performance computing, InfiniBand is generally the better fit.

For a detailed discussion of the differences between InfiniBand and Ethernet, refer to the FS community article InfiniBand vs. Ethernet: What Are They?

In the realm of high-performance computing (HPC), InfiniBand exhibits advantages over Ethernet in several key aspects:

Flow Control Mechanism

InfiniBand employs end-to-end flow control, ensuring messages are not congested during transmission, thereby achieving a lossless network. In contrast, Ethernet's flow control mechanism is relatively simpler, potentially leading to congestion and data loss.

Network Topology Advantage

InfiniBand introduces a subnet manager in its Layer 2 network, capable of configuring the local ID of nodes and calculating/distributing forwarding path information through the control plane. This facilitates the deployment of large-scale networks with ease, avoiding flooding, VLAN, or loop-breaking issues. This imparts a unique advantage to InfiniBand over Ethernet.
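To make the subnet manager's routing step concrete, here is a minimal sketch that computes min-hop forwarding tables for a toy fabric using breadth-first search. The topology and node names are hypothetical, and a real subnet manager such as OpenSM also assigns LIDs and handles virtual lanes and credit-loop avoidance.

```python
from collections import deque

def min_hop_tables(links):
    """Compute per-node forwarding tables for a fabric, in the spirit
    of a subnet manager's min-hop routing (simplified sketch: one
    path per destination, no LIDs or virtual lanes)."""
    nodes = sorted({n for a, b in links for n in (a, b)})
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    tables = {}
    for src in nodes:
        next_hop, seen, q = {}, {src}, deque()
        for nbr in adj[src]:
            seen.add(nbr)
            next_hop[nbr] = nbr          # direct neighbors: one hop
            q.append(nbr)
        while q:                         # BFS outward from src
            node = q.popleft()
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    next_hop[nbr] = next_hop[node]  # inherit first hop
                    q.append(nbr)
        tables[src] = next_hop
    return tables

# Hypothetical 4-node fabric: a triangle A-B-C with D hanging off C.
fabric = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
tables = min_hop_tables(fabric)
assert tables["A"]["D"] == "C"           # A reaches D via its link to C
```

Because the control plane computes these tables centrally, no flooding or spanning-tree-style loop breaking is needed on the data path.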

Performance Parameters

InfiniBand provides higher bandwidth, lower latency, and less jitter, making it an ideal choice for fast and reliable data transmission in HPC environments. InfiniBand data rates range from 40G to 400G, whereas typical data center Ethernet deployments still top out at 100G.
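A back-of-envelope calculation shows what these figures mean for a bulk transfer, for example a hypothetical 10 GiB model checkpoint. This is an idealized estimate that ignores protocol overhead and congestion.

```python
def transfer_time_s(payload_bytes, link_gbps, latency_us=0.6):
    """Idealized one-way transfer time: serialization delay plus a
    fixed point-to-point latency (rough estimate only)."""
    bits = payload_bytes * 8
    return bits / (link_gbps * 1e9) + latency_us * 1e-6

gib = 1024 ** 3
t_400g = transfer_time_s(10 * gib, 400)   # ~0.215 s at 400G
t_100g = transfer_time_s(10 * gib, 100)   # ~0.859 s at 100G
assert t_100g / t_400g > 3.9              # roughly 4x faster at 400G
```

At these payload sizes serialization dominates and the sub-microsecond latency term is negligible; latency matters most for the many small synchronization messages of tightly coupled HPC jobs.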

Suitability for GPU Workloads

InfiniBand is better suited for handling GPU workloads, enabling high-speed data transfer between CPUs and GPUs. This is particularly crucial for tasks demanding substantial computational power, where Ethernet may exhibit relative weaknesses.

Support for Parallel Computing

InfiniBand allows multiple processors to communicate simultaneously, showcasing superior performance in parallel computing. This is essential for applications requiring extensive parallel computational capabilities.
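One common collective on such fabrics is allreduce via recursive doubling: in each round, every rank exchanges its partial sum with a partner whose rank differs in one bit, so all ranks hold the global sum after log2(N) rounds. The sketch below simulates the pattern in plain Python; a real cluster would run it through an MPI library over InfiniBand.

```python
def recursive_doubling_allreduce(values):
    """Simulate recursive-doubling allreduce over N simulated ranks.

    Pure-Python simulation of the communication pattern only; the
    point is that the round count grows as log2(N), which is where
    low per-message latency pays off.
    """
    n = len(values)
    assert n > 0 and n & (n - 1) == 0, "rank count must be a power of two"
    partial = list(values)
    step, rounds = 1, 0
    while step < n:
        nxt = list(partial)
        for rank in range(n):
            partner = rank ^ step        # partner differs in one bit
            nxt[rank] = partial[rank] + partial[partner]
        partial = nxt
        step <<= 1
        rounds += 1
    return partial, rounds

sums, rounds = recursive_doubling_allreduce([1, 2, 3, 4])
assert sums == [10, 10, 10, 10]          # every rank holds the global sum
assert rounds == 2                       # log2(4) communication rounds
```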

Global HPC TOP500 Rankings

According to the latest global HPC TOP500 rankings, InfiniBand's market share has been steadily increasing, currently dominating the TOP100, while Ethernet's market share is on the decline.


Currently, we are in a flourishing era of AI-generated content (AIGC). Major platform giants like OpenAI, Microsoft, and Google, as well as application-focused companies such as Midjourney, are accelerating the development and evolution of HPC applications and services. Additionally, the rapid emergence of new companies and applications is creating a highly competitive atmosphere in the field of artificial intelligence.

It is evident that computing power plays a crucial role in determining productivity. At present, NVIDIA IB products are in notably short supply. To meet the demands of your business, it is essential to choose the right supplier and the right InfiniBand products.

You might be interested in

  • InfiniBand vs. Ethernet: What Are They? (Mar 1, 2023)

  • InfiniBand: Empowering HPC Networking (Dec 21, 2023)

  • Decoding OLT, ONU, ONT, and ODN in PON Network (Mar 14, 2023)

  • What's the Difference? Hub vs Switch vs Router (Dec 17, 2021)

  • What Is SFP Port of Gigabit Switch? (Jan 6, 2023)