
The Role of InfiniBand Application in DGX Clusters

Posted on May 15, 2024

As data analysis research advances, high-performance computing (HPC) has become key in both scientific and industrial fields. NVIDIA's DGX clusters, built for deep learning, rely on high-speed, reliable network connections to operate effectively. This article examines InfiniBand networking technology and how its application in DGX clusters optimizes communication and enhances computational performance.

DGX-1 equipped with InfiniBand Adapters

The DGX-1 system is equipped with four 100 Gbps EDR InfiniBand adapters and two 10 Gbps Ethernet connections for seamless integration into communication and storage networks, as shown in the diagram below. Internally, each pair of GPUs is linked through a PCIe switch on the system motherboard to its own InfiniBand adapter, which streamlines data flow, reduces latency, and increases transfer rates.

Seamless integration
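
As a rough model of the layout just described, the short Python sketch below pairs each set of two GPUs with one PCIe switch and one EDR adapter, then totals the resulting fabric bandwidth. The mlx5_* device names and the exact GPU-to-switch pairing are illustrative assumptions rather than values taken from NVIDIA documentation.

```python
# Illustrative sketch of the DGX-1 internal layout described above.
# Device names (mlx5_0 .. mlx5_3) and the GPU pairing are assumptions.

EDR_GBPS = 100  # line rate of one EDR InfiniBand adapter

# Each PCIe switch on the motherboard joins one GPU pair to one IB adapter.
topology = {
    "pcie_switch_0": {"gpus": (0, 1), "ib_adapter": "mlx5_0"},
    "pcie_switch_1": {"gpus": (2, 3), "ib_adapter": "mlx5_1"},
    "pcie_switch_2": {"gpus": (4, 5), "ib_adapter": "mlx5_2"},
    "pcie_switch_3": {"gpus": (6, 7), "ib_adapter": "mlx5_3"},
}

aggregate_gbps = len(topology) * EDR_GBPS
print(f"{len(topology)} IB adapters, {aggregate_gbps} Gbps aggregate fabric bandwidth")
```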

A solitary InfiniBand card can connect DGX units, but it is a suboptimal choice because traffic from GPUs attached to the other CPU must cross the inter-CPU link, causing congestion. Deploying two cards, one per CPU, distributes traffic more evenly and improves performance, and the ideal setup uses all four cards. A standard 36-port IB switch can accommodate nine DGX-1 systems using four ports each, giving each system a maximum of 400 Gbps of bandwidth. For less intensive needs, 18 DGX-1 systems can connect to the same switch using two ports each. Although not advised, a single-card connection is also possible, allowing 36 DGX-1 units on one switch.
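
The single-switch sizing above is simple division; the snippet below reproduces the arithmetic for the three configurations (four, two, or one port per DGX-1) on a 36-port EDR switch.

```python
# Single 36-port EDR switch: how many DGX-1 systems fit for each
# number of IB ports used per system, and the bandwidth each gets.

SWITCH_PORTS = 36
EDR_GBPS = 100  # per port

for ports_per_dgx in (4, 2, 1):
    systems = SWITCH_PORTS // ports_per_dgx
    bw_per_system = ports_per_dgx * EDR_GBPS
    print(f"{ports_per_dgx} port(s) per DGX-1 -> {systems} systems, {bw_per_system} Gbps each")
```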

Two-Tier Switching Network

When managing numerous DGX-1 units, consider a two-tier switch setup. Typically, this consists of 36-port leaf switches linked to a larger core, often a director-class switch with up to 648 ports. Using multiple core switches is also an option, though it adds complexity.

Two-layer switch setup

In a two-tier switching network, if each DGX-1 device uses all four IB cards to connect to a 36-port leaf switch without over-subscription, a maximum of four DGX-1 devices can be connected per leaf switch. With 16 leaf ports occupied by DGX-1 downlinks, the remaining 16 ports serve as uplinks from the leaf switch to the core switch. By connecting 40 such leaf switches to a 648-port core switch (648 / 16 ≈ 40), up to 160 DGX-1 devices (640 cards in total) can be connected with balanced, symmetric bandwidth distribution.
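
The sizing above follows directly from the port counts; a small calculation like the one below, assuming a non-blocking split of each 36-port leaf between downlinks and uplinks, reproduces the numbers in the text.

```python
# Two-tier (leaf/core) sizing with no over-subscription: each 36-port leaf
# splits its ports evenly between DGX-1 downlinks and core uplinks.

LEAF_PORTS = 36
CORE_PORTS = 648
CARDS_PER_DGX = 4

dgx_per_leaf = (LEAF_PORTS // 2) // CARDS_PER_DGX   # 4 systems per leaf
uplinks_per_leaf = dgx_per_leaf * CARDS_PER_DGX     # 16 uplinks per leaf
max_leaves = CORE_PORTS // uplinks_per_leaf         # 40 leaf switches
total_dgx = max_leaves * dgx_per_leaf               # 160 DGX-1 systems
total_cards = total_dgx * CARDS_PER_DGX             # 640 IB cards

print(f"{max_leaves} leaves x {dgx_per_leaf} DGX-1 = {total_dgx} systems ({total_cards} cards)")
```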

Subnet Manager

InfiniBand networks rely on a subnet manager (SM) to discover and configure the fabric. While only one SM is in charge at any time, additional SMs can stand by as backups in case of failure. Where to run the SMs and how many to run significantly impacts cluster design.

The first decision is whether to run the SM on IB switches (a hardware SM) or on dedicated servers (a software SM). FS provides two InfiniBand switches that support a hardware SM: the MQM8700-HS2F and the MQM9700-NS2F. Running the SM on a switch eliminates the need for extra servers but may struggle under heavy management traffic. For large networks, it is advisable to use software SMs on dedicated servers.

The second decision is how many SMs to run. At least one SM is required, and a single hardware SM suffices for a small DGX-1 cluster. As the number of devices grows, however, running two SMs simultaneously provides high availability (HA). HA becomes crucial as more users depend on the cluster and failures have a greater impact.

With a growing number of devices, it becomes preferable to use dedicated servers for SMs. At least two SMs are necessary for the cluster, ideally on two separate dedicated servers.
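
As a simple operational check of which SM currently holds the master role, the sketch below wraps the standard sminfo utility from the infiniband-diags package. This is an illustrative sketch rather than FS or NVIDIA tooling, and the output parsing is an assumption, since the exact format can vary between driver-stack versions.

```python
# Illustrative check of the active subnet manager using the standard
# `sminfo` tool (infiniband-diags). Output parsing is an assumption and
# may need adjusting for your driver stack version.

import re
import subprocess

def query_master_sm() -> dict:
    out = subprocess.run(["sminfo"], capture_output=True, text=True, check=True).stdout
    # Typical output reports the SM LID, its priority, and its state (e.g. MASTER).
    lid = re.search(r"sm lid (\d+)", out)
    priority = re.search(r"priority (\d+)", out)
    state = re.search(r"state \d+\s+(\S+)", out)
    return {
        "sm_lid": lid.group(1) if lid else None,
        "priority": priority.group(1) if priority else None,
        "state": state.group(1) if state else None,
    }

if __name__ == "__main__":
    print(query_master_sm())
```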

FS NDR 800G Optical Interconnect Solution

FS presents a versatile optical interconnect solution tailored for NDR 800G applications. The solution uses an OSFP port design on its NDR 800G switch, with eight channels per interface and each channel running 100 Gb/s SerDes. In terms of transmission rates, FS's approach includes two primary configurations: duplex 800G connections, and 800G split into dual 400G links.
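
The per-port arithmetic is straightforward, as the short check below shows for the two configurations just mentioned.

```python
# Per-OSFP-port arithmetic for the two NDR configurations described above.

LANES_PER_OSFP = 8
GBPS_PER_LANE = 100  # 100 Gb/s SerDes per channel

full_rate = LANES_PER_OSFP * GBPS_PER_LANE  # 1 x 800G duplex link
split_count = 2
split_rate = full_rate // split_count       # 2 x 400G breakout links

print(f"1 x {full_rate}G  or  {split_count} x {split_rate}G per OSFP port")
```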

The 800G NDR cable and transceiver series offers a diverse range of products for network switches and adapter systems in data centers, especially AI computing systems, with reaches of up to 2 kilometers. These products are designed to optimize AI and accelerated computing applications by minimizing data retransmission through low latency, high bandwidth, and exceptionally low bit error rates (BER).

Conclusion

Through a comprehensive analysis of InfiniBand network applications in DGX clusters, it is evident that they play a crucial role in high-performance computing. InfiniBand networks provide fast and reliable communication solutions, enabling DGX clusters to achieve efficient data transfer and collaborative computing. Looking ahead, as technology continues to advance, InfiniBand networks will continue to be instrumental in driving the development of HPC and deep learning. By deploying and configuring appropriate InfiniBand networks, enterprises and research institutions can fully leverage the powerful computational capabilities of DGX clusters, accelerating innovation and achieving further success.
