
InfiniBand: Empowering HPC Networking

Posted on Dec 21, 2023

InfiniBand, a powerful In-Network Computing platform, stands as a pivotal force in transforming high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud infrastructures, delivering unparalleled performance. Tailored primarily for server-side connections, InfiniBand plays a crucial role in facilitating communication between servers, as well as connecting storage devices and networks. Championed by the InfiniBand Trade Association, this technology has emerged as the predominant interconnect solution on the prestigious TOP500 list: 44.4% of systems utilize InfiniBand for interconnection, surpassing Ethernet, which accounts for 40.4% of the systems on the list. Now, let's delve into the distinctions between InfiniBand and Ethernet.

The difference between InfiniBand and Ethernet

What is InfiniBand?

InfiniBand, fostered under the guidance of the InfiniBand Trade Association (IBTA), serves as a standardized communication specification. This specification delineates a switched fabric architecture explicitly crafted to interconnect servers, communication infrastructure equipment, storage solutions, and embedded systems within the expansive landscape of a data center. This emphasis on standardization ensures seamless integration and efficient communication across diverse components in the HPC networking environment.

InfiniBand, recognized for its high bandwidth and low latency, boasts impressive speeds such as FDR at 56Gbps, EDR at 100Gbps, HDR at 200Gbps, and NDR at 400Gbps/800Gbps with a 4x link width connection. Excitingly, even faster speeds are on the horizon.
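
As a rough illustration of how these headline figures arise, the sketch below multiplies each generation's nominal per-lane rate by the common 4x link width. Note that SDR through QDR are traditionally quoted as raw signaling rates (8b/10b encoding), while FDR and later generations are quoted after 64b/66b encoding; the per-lane values here are nominal, not exact line rates.

```python
# Nominal per-lane rates (Gbps) for common InfiniBand generations.
# SDR-QDR are usually quoted as raw signaling rates (8b/10b encoding);
# FDR and later are quoted as effective data rates (64b/66b encoding).
PER_LANE_GBPS = {
    "SDR": 2.5,
    "DDR": 5,
    "QDR": 10,
    "FDR": 14,     # nominal; 14.0625 Gbps signaling per lane
    "EDR": 25,
    "HDR": 50,
    "NDR": 100,
}

LINK_WIDTH = 4  # the ubiquitous 4x link

for gen, lane_rate in PER_LANE_GBPS.items():
    print(f"{gen}: {lane_rate * LINK_WIDTH:g} Gbps over a {LINK_WIDTH}x link")
```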

InfiniBand excels in scalability, accommodating tens of thousands of nodes within a single subnet, making it the preferred choice for high-performance computing (HPC) environments. With quality of service (QoS) and failover capabilities, InfiniBand is a key network fabric for the non-volatile memory express over fabrics (NVMe-oF) storage protocol, alongside Ethernet, Fibre Channel (FC), and TCP/IP networks. Opt for InfiniBand in your data center for unmatched performance and scalability.

What is Ethernet?

Ethernet, conceived by Xerox, Intel, and DEC, serves as a LAN specification standard and has become the most widely adopted communication protocol for Local Area Networks (LANs), with data transmitted over cables. Xerox pioneered Ethernet as a wired communication technology in the 1970s, connecting devices within LANs or Wide Area Networks (WANs). Its versatility extends to linking various devices, from printers to laptops, across buildings, residences, and small communities. The user-friendly interface simplifies the creation of LANs using just a router and Ethernet connections, incorporating devices like switches, routers, and PCs.

Despite the prevalence of wireless networks in many locations, Ethernet endures as a prominent choice for wired networking due to its reliability and resistance to interference. Over the years, Ethernet has undergone multiple revisions, consistently adapting and expanding its capabilities. Today, it stands as one of the most extensively utilized network technologies globally.

Presently, the IEEE 802.3 working group within the IEEE has issued Ethernet interface standards including 100GE, 200GE, 400GE, and 800GE, reflecting ongoing efforts to enhance and advance Ethernet technology.

InfiniBand vs Ethernet

InfiniBand emerged with the specific intent of alleviating the bottleneck in cluster data transmission within high-performance computing scenarios. Over time, it has evolved into a widely adopted interconnection standard, effectively catering to modern requirements. The key distinctions between InfiniBand and Ethernet manifest in their bandwidth, latency, network reliability, and networking methodologies.

Bandwidth

Concerning bandwidth, InfiniBand has advanced more rapidly than Ethernet, driven primarily by its use in high-performance computing environments and its ability to offload work from the CPU. In contrast, Ethernet is predominantly tailored for terminal device interconnection and does not face the same high bandwidth demands as InfiniBand.

Latency

In the realm of network latency, InfiniBand and Ethernet diverge significantly in their processing flows. InfiniBand leverages Cut-Through technology in switches, substantially diminishing forwarding delay to less than 100 ns. In contrast, Ethernet switches undergo longer processing flows, attributed to the intricacies introduced by services like IP, MPLS, and QinQ.
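
The per-hop difference can be approximated with simple serialization math: a store-and-forward switch must receive the whole frame before transmitting it, while a cut-through switch starts forwarding as soon as the header has been parsed. The sketch below uses purely illustrative frame and header sizes, and ignores the switch's internal pipeline, which the sub-100 ns figure above also includes.

```python
def serialization_delay_ns(num_bytes: int, link_gbps: float) -> float:
    """Time to clock num_bytes onto a link running at link_gbps (result in ns)."""
    return num_bytes * 8 / link_gbps  # bits / (Gbit/s) == nanoseconds

LINK_GBPS = 400          # e.g. an NDR 4x link
FRAME_BYTES = 4096       # illustrative payload size
HEADER_BYTES = 64        # illustrative bytes needed to make a forwarding decision

store_and_forward = serialization_delay_ns(FRAME_BYTES, LINK_GBPS)
cut_through = serialization_delay_ns(HEADER_BYTES, LINK_GBPS)

print(f"store-and-forward adds ~{store_and_forward:.1f} ns of serialization per hop")
print(f"cut-through adds       ~{cut_through:.1f} ns of serialization per hop")
```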

Reliability

In the realm of high-performance computing where network reliability is paramount, InfiniBand takes the lead with its well-defined layer 1 to layer 4 formats, guaranteeing a lossless network through end-to-end flow control. Conversely, Ethernet lacks a scheduling-based flow control mechanism, relying on a larger chip area to temporarily store messages. This approach results in higher costs and increased power consumption for Ethernet.
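
InfiniBand's lossless behavior rests on credit-based flow control: a sender may only transmit while the receiver has advertised free buffer credits, so packets are never dropped for lack of buffer space. The following is a deliberately simplified toy model of that idea, not the actual link protocol.

```python
from collections import deque

class CreditedLink:
    """Toy model of credit-based flow control: the sender may only transmit
    while the receiver has advertised free buffer slots (credits)."""

    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots      # credits advertised by the receiver
        self.rx_buffer = deque()

    def try_send(self, packet) -> bool:
        if self.credits == 0:
            return False                 # sender stalls; the fabric never drops
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def receiver_drain(self):
        """Receiver consumes a packet and returns a credit to the sender."""
        packet = self.rx_buffer.popleft()
        self.credits += 1
        return packet

link = CreditedLink(buffer_slots=2)
print([link.try_send(p) for p in ("p0", "p1", "p2")])  # [True, True, False]
link.receiver_drain()
print(link.try_send("p2"))                             # True: a credit was returned
```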

Networking Methodologies

In terms of network management, InfiniBand proves more straightforward than Ethernet, thanks to the integration of the Software-Defined Networking (SDN) concept into its design. In InfiniBand, a subnet manager exists on each layer 2 network, handling the configuration of nodes and the calculation of forwarding path information. In contrast, Ethernet requires MAC entries and the IP and ARP protocols, adding layers of complexity. Ethernet also relies on regular packet sending to keep entries up to date and employs the VLAN mechanism to segment virtual networks and restrict their scale. This complexity introduces challenges, including the potential for loops, which require additional protocols such as STP.
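
Conceptually, the subnet manager discovers the fabric topology, assigns each end port a local identifier (LID), and centrally computes the forwarding tables that every switch then applies, which is what gives the design its SDN-like character. The sketch below, with a hypothetical three-switch topology, illustrates the idea with a plain breadth-first search; real subnet managers such as OpenSM use far more sophisticated routing engines.

```python
from collections import deque

# Hypothetical fabric: switches S1-S3 and end ports identified by LIDs 1-4.
topology = {
    "S1": ["S2", "S3", 1, 2],
    "S2": ["S1", "S3", 3],
    "S3": ["S1", "S2", 4],
}

def next_hops(switch: str) -> dict:
    """For one switch, map every destination LID to the neighbor to forward to
    (breadth-first search = shortest path in hops)."""
    table, seen = {}, {switch}
    queue = deque((nbr, nbr) for nbr in topology[switch])
    while queue:
        node, first_hop = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if isinstance(node, int):            # an end port (LID): record the route
            table[node] = first_hop
        else:                                 # another switch: keep exploring
            queue.extend((nbr, first_hop) for nbr in topology[node])
    return table

# The "subnet manager" computes one forwarding table per switch.
forwarding = {sw: next_hops(sw) for sw in topology}
print(forwarding["S1"])   # e.g. {1: 1, 2: 2, 3: 'S2', 4: 'S3'}
```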

Investigating InfiniBand Offerings

InfiniBand Switches & NICs

In summary, the comparison between InfiniBand and Ethernet underscores the notable advantages of InfiniBand networks. For those considering the implementation of InfiniBand switches in their high-performance data centers, further details are available. The InfiniBand network has undergone rapid iterations, progressing from SDR 10Gbps, DDR 20Gbps, QDR 40Gbps, FDR 56Gbps, and EDR 100Gbps to HDR 200Gbps and NDR 400Gbps/800Gbps. These advancements are made possible through the integration of RDMA (Remote Direct Memory Access) technology.

[Figure: Traditional vs RDMA mode]
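
RDMA lets a network adapter place data directly into (or read it from) a pre-registered region of a remote host's memory, bypassing the remote CPU and the intermediate copies of a traditional socket path. The toy model below only illustrates that idea in plain Python; real RDMA applications go through the verbs API (for example via libibverbs / rdma-core), which this sketch does not attempt to reproduce.

```python
class RegisteredMemoryRegion:
    """Toy stand-in for an RDMA memory region: a buffer the owner registers
    once and that a remote peer may then write into directly."""

    def __init__(self, size: int):
        self.buffer = bytearray(size)

    def remote_write(self, offset: int, payload: bytes) -> None:
        # In real RDMA the remote NIC performs this placement in hardware;
        # the owning CPU is not involved in copying the payload.
        self.buffer[offset:offset + len(payload)] = payload

# "Server" registers a 4 KiB region and shares its (hypothetical) address/key.
region = RegisteredMemoryRegion(4096)

# "Client" performs an RDMA-style write straight into the server's buffer.
region.remote_write(0, b"hello, rdma")
print(bytes(region.buffer[:11]))   # b'hello, rdma'
```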

FS offers NVIDIA Quantum™-2 NDR InfiniBand 400G and NVIDIA Quantum™ HDR InfiniBand 200G data center switches, available in both managed and unmanaged configurations. To meet diverse customer needs, the 400G switches come with service support options of one year, three years, and five years.

[Figure: FS NVIDIA Quantum InfiniBand switches]

Model | Speed | Switch Chip | Ports | Management | Manufacturer
MQM9790-NS2F | 400G | NVIDIA QUANTUM-2 | 64 x NDR 400G | Unmanaged | Mellanox
MQM9700-NS2F | 400G | NVIDIA QUANTUM-2 | 64 x NDR 400G | Managed | Mellanox
MQM8790-HS2F | 200G | NVIDIA QUANTUM | 40 x HDR QSFP56 | Unmanaged | Mellanox
MQM8700-HS2F | 200G | NVIDIA QUANTUM | 40 x HDR QSFP56 | Managed | Mellanox

InfiniBand Modules

Product | Application | Connector
40G Transceiver | InfiniBand FDR10 | MTP/MPO-12
100G Transceiver | InfiniBand EDR | MTP/MPO-12
200G Transceiver | InfiniBand HDR | MTP/MPO-12
400G Transceiver | InfiniBand NDR | MTP/MPO-12 APC
800G Transceiver | InfiniBand NDR | Dual MTP/MPO-12 APC

InfiniBand DAC

Product | Application | Connector | Length
40G DAC Cable | InfiniBand FDR10 | QSFP+ to QSFP+ | 0.5m, 1m, 1.5m, 2m, 3m, 4m, 5m
56G DAC Cable | InfiniBand FDR | QSFP+ to QSFP+ | 0.5m, 1m, 1.5m, 2m, 3m, 4m, 5m
100G DAC Cable | InfiniBand EDR | QSFP28 to QSFP28 | 0.5m, 1m, 1.5m, 2m, 3m
200G DAC Cable | InfiniBand HDR | QSFP56 to QSFP56; QSFP56 to 2x QSFP56 | 0.5m, 1m, 1.5m, 2m, 3m, 5m, 7m
400G DAC Cable | InfiniBand HDR | OSFP to 2x QSFP56 | 1m, 1.5m, 2m
800G DAC Cable | InfiniBand NDR | OSFP to OSFP; OSFP to 2x OSFP; OSFP to 4x OSFP | 0.5m, 1m, 1.5m, 2m

InfiniBand AOC

Product | Application | Connector | Length
40G AOC Cable | InfiniBand FDR10 | QSFP+ to QSFP+ | 1m, 3m, 5m, 7m, 10m, 15m, 20m, 25m, 30m, 50m, 100m
56G AOC Cable | InfiniBand FDR | QSFP+ to QSFP+ | 1m, 2m, 3m, 5m, 10m, 15m, 20m, 25m, 30m, 50m, 100m
100G AOC Cable | InfiniBand EDR | QSFP28 to QSFP28 | 1m, 2m, 3m, 5m, 7m, 10m, 15m, 20m, 30m, 50m, 100m
200G AOC Cable | InfiniBand HDR | QSFP56 to QSFP56; QSFP56 to 2x QSFP56; 2x QSFP56 to 2x QSFP56 | 1m, 2m, 3m, 5m, 10m, 15m, 20m, 30m, 50m, 100m
400G AOC Cable | InfiniBand HDR | OSFP to 2x QSFP56 | 3m, 5m, 10m, 15m, 20m, 30m

InfiniBand Standards Overview

InfiniBand NDR (Next-Generation Data Rate):

The InfiniBand NDR category includes InfiniBand 400Gbase/800Gbase transceivers and DAC, designed to be compatible with Mellanox NDR 400Gb switches such as the MQM9700/MQM9790 series. These components enable high-performance connections in GPU-accelerated computing, potentially saving up to 50% in costs. They are well-suited for High-Performance Computing (HPC), cloud computing, model rendering, and storage in InfiniBand 400Gb/800Gb networks.

InfiniBand HDR (High Data Rate):

FS's InfiniBand HDR connectivity product line provides a range of solutions, including 200Gb/s and 400Gb/s QSFP56 IB HDR MMF active/passive optical cables (AOC), active/passive direct-attach copper cables (DAC), optical transceivers, and 200G switches. Modules and cables connect switches such as the MQM8700/MQM8790 to NVIDIA GPU (A100/H100/A30) and CPU servers, and to storage network adapters such as the ConnectX-5/6/7 VPI. They offer potential savings of up to 50% and are advantageous for GPU-accelerated high-performance computing (HPC) cluster applications, including model rendering, artificial intelligence (AI), deep learning (DL), and NVIDIA application networking in InfiniBand HDR networks.

InfiniBand EDR (Enhanced Data Rate):

The InfiniBand EDR product line comprises InfiniBand 100Gbase QSFP28 EDR AOCs, DACs, and transceivers. They offer cost-effective solutions for high-performance connections in GPU-accelerated computing.

InfiniBand FDR (Fourteen Data Rate):

The InfiniBand FDR product line includes InfiniBand 40Gbase QSFP+ FDR10 AOC, DAC, and transceivers, as well as 56Gbase QSFP+ FDR DAC and AOC. These products seamlessly integrate with Mellanox EDR switches.

Advantages of InfiniBand in High-Performance Computing Networks

The continuous evolution of data communication, internet technology, and visual presentation relies on advancements in computing power, storage capacity, and network efficiency. In this landscape, the InfiniBand network emerges as a pivotal player, offering high-bandwidth network services, low latency, and reduced consumption of computing resources by offloading protocol processing and data movement from the CPU to the interconnection. These unique advantages position InfiniBand as an ideal solution for HPC data centers, driving significant performance improvements across diverse applications such as Web 2.0, cloud computing, big data, financial services, virtualized data centers, and storage applications.

In the realm of speed, InfiniBand has not only kept pace with Ethernet 100G but has surged ahead, now providing options ranging from 100G/200G to 400G/800G InfiniBand switches that align seamlessly with the high-performance requirements of HPC architecture. The combination of high bandwidth, speed, and low latency in InfiniBand switches contributes to heightened server efficiency and application productivity.

Scalability stands out as another key advantage of InfiniBand, with a single subnet supporting up to 48,000 nodes at network layer 2. In contrast to Ethernet, InfiniBand does not rely on broadcast mechanisms like ARP, mitigating broadcast storms and avoiding wasted bandwidth. Moreover, multiple subnets can be interconnected through routers, enhancing flexibility.
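
To see why avoiding broadcast matters at this scale, compare the message cost of address resolution: an Ethernet ARP request is flooded to every node in the broadcast domain, whereas an InfiniBand host resolves a path with a single unicast query to the subnet manager's administration service. The numbers below are purely illustrative.

```python
NODES_IN_SUBNET = 48_000       # upper bound cited for a single InfiniBand subnet
DESTINATIONS_RESOLVED = 1_000  # illustrative number of peers one host talks to

# Ethernet: each ARP request is broadcast and delivered to every node.
arp_messages = DESTINATIONS_RESOLVED * NODES_IN_SUBNET

# InfiniBand: each path/address query is a unicast exchange with the
# subnet manager's administration service (request + response).
sa_messages = DESTINATIONS_RESOLVED * 2

print(f"ARP broadcast deliveries: {arp_messages:,}")
print(f"Subnet-manager messages:  {sa_messages:,}")
```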

FS, recognizing the importance of high-performance computing, offers InfiniBand products built with Quantum InfiniBand switch devices. These products support non-blocking bandwidth up to 16Tb/s and boast port-to-port delays lower than 130ns, ensuring high availability and multi-service support for HPC data centers. While Ethernet networks remain a viable option for data transmission by distributing workloads across multiple devices, FS provides a range of multiple-speed Ethernet switches to assist in constructing versatile and efficient networks.

Choosing the Right Network

InfiniBand and Ethernet each find their respective applications in distinct scenarios. The InfiniBand network excels in significantly enhancing data transfer rates, thereby optimizing network utilization and alleviating the burden on CPU resources for network data processing. This pivotal advantage positions the InfiniBand network as the primary solution in the high-performance computing industry.

However, in cases where communication delay among data center nodes is not a critical consideration and the emphasis lies on flexible access and expansion, Ethernet networks remain a viable long-term solution.

The InfiniBand network's unparalleled performance, coupled with its innovative technical architecture and streamlined high-performance network design, empowers users in HPC data centers to optimize business performance. By minimizing delays associated with multi-level architecture layers and facilitating the seamless upgrade of access bandwidth for key computing nodes, InfiniBand technology plays a crucial role in enhancing overall operational efficiency. As its popularity continues to grow, InfiniBand networks are anticipated to find applications in an expanding array of usage scenarios.
