
InfiniBand and MTP Fiber: A Perfect Match for HPC Networks

Posted on Jun 26, 2024

Efficient data transfer is paramount for HPC (high-performance computing), which performs complex simulations, computations, and data analysis beyond the capabilities of typical commercial computing solutions. InfiniBand, a high-speed communication protocol, is widely adopted in HPC for its high bandwidth and low latency. Integral to this setup are MTP® connector products, which play an important role in ensuring that InfiniBand networks operate at their peak. This article explores why InfiniBand and MTP® fiber are a perfect match for HPC connections, and looks at practical applications of MTP® fiber in HPC networks.

InfiniBand: A Reliable Protocol for HPC

HPC networks are critical for advanced computational tasks, simulations, and data analysis in various scientific, industrial, and commercial fields. They require rapid data exchange between numerous computing nodes and storage systems. For example, in fields like genomics or ML (Machine Learning) model training, massive datasets must be processed and analyzed. InfiniBand supports these tasks by providing the necessary high bandwidth, low latency, scalability and reliability.

High bandwidth and low latency: Developed by the InfiniBand Trade Association (IBTA), InfiniBand is a computer networking communications standard that achieves high throughput rates such as HDR 200Gbps and NDR 400Gbps/800Gbps by transmitting data simultaneously across several channels, while its latency is typically in the range of microseconds. For example, serializing a 1 MB message onto a 400Gbps link takes only about 20 microseconds, so microsecond-scale switching latency remains a meaningful share of total transfer time. Both properties are crucial for HPC applications like weather modeling, molecular dynamics, and financial simulations that require rapid data transfer and real-time processing.

Scalability and reliability: With scalability that supports tens of thousands of nodes in a single subnet, InfiniBand is a preferred choice for HPC environments. It also offers advanced features such as QoS (Quality of Service) and error detection and correction, which enhance the reliability and robustness of the network and keep HPC applications running smoothly even in the presence of hardware failures or network congestion. In addition, RDMA (Remote Direct Memory Access) allows data to be transferred directly between the memory of different computers without involving the CPU, reducing processor overhead and freeing cycles for computation, which further enhances the performance of HPC applications. Together, these capabilities let InfiniBand meet the ever-increasing demands of scientific research, complex simulations, and big data analytics.
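
To make the RDMA idea concrete, the minimal sketch below (in C, using the standard libibverbs API) acquires the resources an application needs before it can post RDMA reads and writes that bypass the remote CPU: it registers a memory region and creates a reliable-connected queue pair. This is an illustration only, not a complete program; error handling, connection management, and the actual RDMA work requests are omitted.

```c
/* Minimal libibverbs sketch: acquire the resources needed for RDMA.
 * Illustrative only -- no connection management or work requests.
 * Build with: gcc rdma_sketch.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* first HCA */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */

    /* Register a buffer so the HCA can DMA into it directly,
     * without involving the remote CPU. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);

    /* Completion queue and reliable-connected queue pair; RDMA reads and
     * writes would later be posted to the QP with ibv_post_send(). */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr = {
        .send_cq = cq, .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);
    printf("MR rkey=0x%x, QP num=%u\n", mr->rkey, qp->qp_num);

    /* Teardown */
    ibv_destroy_qp(qp); ibv_destroy_cq(cq); ibv_dereg_mr(mr);
    free(buf); ibv_dealloc_pd(pd); ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

In a real application the queue pair would then be exchanged with a peer, moved through its INIT/RTR/RTS states, and fed RDMA work requests via ibv_post_send(); HPC codes usually rely on MPI, UCX, or NCCL, which drive this verbs machinery internally.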

MTP® Fiber: Building the Physical Infrastructure for HPC

At the heart of all these connections lies the data center's physical infrastructure, including the network cabling system. To support HPC applications that demand high bandwidth and minimal delay, the MTP® connector interface plays a crucial role in facilitating high-speed data transmission within data centers. MTP® fiber is ideally suited for these needs due to several key characteristics:

High Bandwidth and Low Latency: HPC networks, especially those operating at 400G and 800G, require high-bandwidth connections to handle large volumes of data, and latency, which affects how quickly data is transferred and processed, is equally important. MTP® connectors accommodate multiple fibers (typically 8, 12, 24, or more) in a single connector, enabling the high-bandwidth connections essential for supporting HPC workloads (the lane math is sketched after this list). In addition, angle-polished (APC) MTP® connectors minimize reflection of the optical signal for optimal signal integrity, meeting the stringent optical surface requirements of IB NDR transceivers.

Density and Space Efficiency: Data centers often face constraints related to physical space. MTP® connectors, with their high-density design (multiple fibers in one connector), allow multiple high-speed connections to be made within a compact form factor. This efficient use of space is crucial for maintaining orderly and scalable data center environments. Additionally, MTP® connectors can reduce the physical space required for cable management and allow for more efficient use of rack space.
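
As a rough illustration of the lane math behind these connectors, the short C snippet below totals the bandwidth carried by one jumper. It assumes the NDR-style layout used by the 400G OSFP modules discussed later (four transmit and four receive fibers at 100Gb/s PAM4 each on a 12-fiber MTP® connector, with four fibers unused); other transceiver types map fibers differently.

```c
/* Back-of-the-envelope lane math for an NDR-style MTP(R)-12 link.
 * Assumption: 4 Tx + 4 Rx fibers at 100 Gb/s PAM4 each, 4 fibers unused. */
#include <stdio.h>

int main(void)
{
    const int lanes_per_direction = 4;   /* fibers carrying traffic each way */
    const int gbps_per_lane = 100;       /* 100G PAM4 per fiber */

    int per_jumper = lanes_per_direction * gbps_per_lane;  /* one MTP-12 jumper */
    printf("One MTP-12 jumper: %d Gb/s per direction\n", per_jumper);
    printf("Twin-port 800G module (2 x MTP-12 jumpers): %d Gb/s\n", 2 * per_jumper);
    return 0;
}
```

This is also why the twin-port 800G OSFP modules in the table below each terminate in two MTP®-12 jumpers rather than one.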

Exploring MTP® Fiber and InfiniBand Practices for HPC

As outlined above, MTP® fiber allows InfiniBand to deliver its full performance for HPC. FS offers a wide range of end-to-end InfiniBand transceivers and MTP® fiber cables. The table below lists the main 200G to 800G products that have been tested to work with NVIDIA IB devices. For more FS InfiniBand networking solutions, please check the complete IB solutions here.

Data Rate | Fiber Mode | MTP® Jumper | InfiniBand Transceiver | Max. Distance
800G | SMF | 12F MTP® SMF, APC | OSFP-DR8-800G, Finned Top, PAM4 2x DR4 | 500m
800G | MMF | 12F MTP® OM4, APC | OSFP-SR8-800G-FL, Flat Top, PAM4 2x SR4 | 60m@OM3 / 100m@OM4
800G | MMF | 12F MTP® OM4, APC | OSFP-SR8-800G, Finned Top, PAM4 2x SR4 | 30m@OM3 / 50m@OM4
400G | SMF | 12F MTP® SMF, APC | OSFP-DR4-400G-FL, Flat Top, PAM4 | 500m
400G | MMF | 12F MTP® OM4, APC | OSFP-SR4-400G-FL, Flat Top, PAM4 | 30m@OM3 / 50m@OM4 or OM5
200G | MMF | 12F MTP® OM4, UPC | QSFP-SR4-200G, 4x 50G PAM4 | 70m@OM3 / 100m@OM4

800G to 800G for Switch to Switch Link

MTP®-12 APC OM4 jumpers support high-density connectivity by accommodating 12 fibers in a single connector. They work with InfiniBand 800Gb/s (2x 400Gb/s) twin-port OSFP-SR8 Finned Top fiber transceivers, in which each interface operates at 400Gb/s. In these multimode connections, MTP®-12 APC OM3 jumpers deliver data up to 30m, and OM4 jumpers up to 50m. The following figure shows a multimode 800G IB link from switch to switch: two MTP®-12 fiber jumpers connect the two OSFP-SR8-800G multimode transceivers, one on each side.

MTP in 800G IB link

For single-mode links, MTP®-12 APC OS2 jumpers support transmission up to 500m. The connection scheme is the same as the multimode one above, but with the multimode transceivers and MTP®-12 fibers replaced by their single-mode counterparts.

800G to 2x 400G Link to ConnectX-7

In this link, one 800G IB switch is deployed on side A, and two 400G GPU servers equipped with 400G ConnectX-7 cards are on side B. Two MTP®-12 fiber jumpers connect to the 800Gb/s (2x 400Gb/s) twin-port OSFP-SR8 transceiver on side A, and each jumper is plugged into one of the two OSFP-SR4-400G-FL transceivers on side B, as the following figure shows.

MTP in 800G to 2x 400G IB link

For multimode connections, MTP®-12 multimode jumpers work with the 800G OSFP-SR8 multimode transceiver and support up to 30m with OM3 fiber and 50m with OM4. For single-mode connections, MTP®-12 single-mode jumpers reach up to 500 meters with the 800G OSFP-DR8 single-mode transceiver.

400G to 400G for Switch to ConnectX-7

This link connects 400G Ethernet to a 400G ConnectX-7 card plugged into a 400G GPU server. On the Ethernet side, a QSFP112-SR4 optical transceiver transmits parallel 4x 100Gb/s PAM4 signals up to 30m over OM3 or 50m over OM4; on the server side, one OSFP-SR4-400G-FL IB module is deployed for compatibility with the 400G ConnectX-7. This fiber transceiver uses 100G-PAM4 modulation with an MPO-12/APC optical connector, so one MTP®-12 APC OM4 fiber jumper completes the 400G link.

MTP in 400G IB link

Additionally, to meet different customers' connection requirements, FS is diving deeply into the verification of MTP® conversion cables for HPC use. The MTP®-16 APC to 2x MTP®-8 UPC conversion cable has already been proven in 400G to 2x 200G InfiniBand networks. Next, the MTP®-12 APC (Female) to 2x MTP®-4 APC conversion cable will be tested for HPC networks, reflecting FS's commitment to creating value for customers through a full range of solutions and continuous technological innovation.

200G to 200G for Switch to Switch Link

The 200G IB link relies on QSFP-SR4 optical transceivers. These modules use a single 4-channel MPO-12/UPC optical connector and transmit data via 50G-PAM4 modulation, so an MTP®-12 UPC fiber jumper is compatible with them. The 200G connection is a switch-to-switch link, with a maximum transmission distance of 70m over an OM3 MTP® fiber jumper and up to 100m over OM4.

MTP in 200G IB link
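
Once the jumpers and transceivers in any of these links are installed, the negotiated rate can be sanity-checked from a host. The sketch below uses the libibverbs ibv_query_port() call to print the active link state, width, and speed of the first HCA; querying port 1 is an assumption, and the numeric width/speed codes map to lane counts and per-lane rates as documented for struct ibv_port_attr.

```c
/* Query active InfiniBand port attributes to confirm the link that
 * came up after cabling. Illustrative sketch; port number 1 and the
 * first device are assumptions for a single-HCA host. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_port_attr attr;
    if (ibv_query_port(ctx, 1, &attr)) {          /* port 1 */
        fprintf(stderr, "ibv_query_port failed\n");
        return 1;
    }

    /* Raw codes; decode them against the ibv_query_port documentation. */
    printf("state=%d  active_width=%d  active_speed=%d  active_mtu=%d\n",
           attr.state, attr.active_width, attr.active_speed, attr.active_mtu);

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```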

Conclusion

InfiniBand has established itself as a cornerstone technology in HPC networks due to its low latency, high bandwidth, scalability, reliability, and efficient data handling. Meanwhile, the high-density, high-bandwidth characteristics of the MTP® network cabling system make it the ideal choice for connecting the vast array of nodes and storage systems in HPC environments. By embracing MTP® cabling, HPC networks can meet current demands while being well-prepared for future advancements in computing power and data transfer requirements.
