Need for Speed – InfiniBand Network Bandwidth Evolution

Posted on Jan 9, 2024

In the late 1990s, data centers and high-performance computing (HPC) environments were experiencing rapid growth in processing power. However, as servers became more powerful, existing networking technologies could not keep up with the increased demand for data throughput in HPC environments. The need for a high-bandwidth, low-latency, and more reliable interconnect became evident, and InfiniBand was created to fill it.

Why Is InfiniBand Network Bandwidth Evolution So Important?

  • Enhanced Performance: InfiniBand's high bandwidth enables faster data transfer rates, facilitating the efficient exchange of large volumes of data between nodes, processors, and storage systems. This reduces the time required for computation and analysis and improves overall system performance.

  • Improved Scalability: As data-intensive workloads continue to grow, the ability to handle large-scale parallel processing is crucial. A high-bandwidth InfiniBand network can accommodate growing workloads and node counts, enabling the expansion of computing clusters and the seamless integration of additional nodes while maintaining efficient communication and performance.

  • Reduced Latency: InfiniBand's low latency minimizes delays in data transmission, enabling faster response times and real-time processing. This is particularly important for applications that require immediate feedback, such as financial transactions, scientific simulations, and real-time analytics.

  • Future-Proofing: InfiniBand's ability to provide high-speed, high-bandwidth communication positions it as a future-proof solution for handling the ever-increasing demands of data-intensive applications.
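As a rough illustration of what these bandwidth differences mean in practice, the sketch below computes the ideal time to move a 1TB dataset over links of various speeds. This is a simplification that ignores protocol overhead, encoding, and congestion, and the link names are just examples:

```python
# Illustrative only: ideal time to move a 1 TB dataset over links of
# different speeds, ignoring protocol overhead, encoding, and congestion.

def transfer_time_seconds(data_bytes: float, link_gbps: float) -> float:
    """Ideal transfer time: (bytes * 8 bits) / (link rate in bits per second)."""
    return data_bytes * 8 / (link_gbps * 1e9)

ONE_TB = 1e12  # 1 terabyte in bytes

for name, gbps in [("10G Ethernet", 10), ("EDR InfiniBand", 100),
                   ("HDR InfiniBand", 200), ("NDR InfiniBand", 400)]:
    print(f"{name:>15}: {transfer_time_seconds(ONE_TB, gbps):6.1f} s")
```

At 100Gb/s the ideal transfer takes 80 seconds; at 400Gb/s it drops to 20 seconds, which is why interconnect bandwidth directly bounds how quickly large datasets can be staged for computation.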

(Figure: Importance of InfiniBand)

The Evolution Journey of InfiniBand Network Bandwidth

The evolution of InfiniBand network bandwidth has seen several significant milestones. The initial InfiniBand standard supported a signaling rate of 2.5Gb/s per lane, or 10Gb/s over a 4X link. As technology advanced and demands grew, subsequent standards quickly raised these rates. Today, InfiniBand networks can achieve speeds of up to 800Gb/s per port.

(Figure: InfiniBand Roadmap)

The Early Days: SDR and DDR

Single Data Rate (SDR): In the early 2000s, InfiniBand made its mark with SDR technology, offering a bandwidth of 10Gb/s for a 4X link. This was a significant step forward for data centers requiring more bandwidth than Gigabit Ethernet could provide. InfiniBand SDR was primarily used to build low-latency, high-performance computing and data center interconnect networks. While slow by today's standards, it still offered higher performance and scalability than traditional Ethernet in early application scenarios.

Double Data Rate (DDR): Within a few years, DDR InfiniBand doubled the available transfer rate to 20Gb/s, allowing even more data-intensive applications to flourish. Compared to SDR, DDR provides twice the data transfer rate, further enhancing bandwidth and performance. InfiniBand DDR found widespread use in high-performance computing and data centers, supporting larger clusters and more complex application requirements.

Moving Forward: QDR, FDR, and EDR

Quad Data Rate (QDR): As HPC demands grew, the InfiniBand architecture kept pace. In 2008, speeds doubled again to 40Gb/s with QDR InfiniBand, which further reduced communication latencies and opened the door for advancements in fields such as genomics and climate modeling. This was also when Mellanox entered the cable business with its LinkX line.

Fourteen Data Rate (FDR): Next came FDR InfiniBand, which reached 56Gb/s in 2011, surpassing QDR. This iteration was about more than just speed: it also improved signal processing, introduced new error-correction mechanisms to enhance data integrity, and supported advanced features such as adaptive routing and quality of service (QoS). InfiniBand FDR enables even higher performance and scalability for demanding workloads in HPC and data center environments.

Also Check- What Is Quality of Service (QoS) in Networking?

Enhanced Data Rate (EDR): The introduction of EDR InfiniBand was another leap forward in 2015, with bandwidth soaring to 100Gb/s. EDR's arrival coincided with the rise of big data analytics and machine learning, where enormous data sets required rapid movement between servers and storage systems. InfiniBand EDR provides exceptional bandwidth, low-latency communication, and advanced features such as forward error correction (FEC) for improved reliability. It empowers high-performance computing and data center environments with the ability to handle massive data workloads and demanding applications.

The Current Landscape: HDR, NDR, and XDR

High Data Rate (HDR): In 2018, HDR InfiniBand reached 200Gb/s, supporting the most demanding computational tasks, from precision medicine to real-time data processing in autonomous vehicles. HDR InfiniBand's capacity to handle vast data volumes is instrumental in the development of exascale computing systems, which aim to perform a billion billion (10^18) calculations per second.

Next Data Rate (NDR): The next milestone in the progression of InfiniBand technology arrived in 2021 with NDR InfiniBand, which delivers 400Gb/s ports built from four lanes. Nvidia implements this generation in its Quantum-2 ASICs, which are equipped with 256 SerDes running at 51.6GHz and use PAM4 encoding. With an aggregate bandwidth of 25.6Tb/s in one direction, or 51.2Tb/s in both directions, these ASICs can handle an impressive 66.5 billion packets per second, delivering 64 ports operating at 400Gb/s.
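The aggregate figures quoted above follow directly from the port count and per-port rate. A back-of-the-envelope check, using only the numbers in the text rather than a datasheet:

```python
# Sanity-checking the Quantum-2 figures quoted above (a back-of-the-envelope
# sketch; the port count and per-port rate come from the article text).

ports = 64          # 400Gb/s NDR ports per switch ASIC
port_gbps = 400     # each NDR port aggregates 4 lanes

one_way_tbps = ports * port_gbps / 1000   # 64 * 400Gb/s = 25.6Tb/s one way
both_ways_tbps = 2 * one_way_tbps         # 51.2Tb/s bidirectional
lane_gbps = port_gbps / 4                 # 100Gb/s per lane with PAM4 signaling

print(f"one-way: {one_way_tbps}Tb/s, bidirectional: {both_ways_tbps}Tb/s, "
      f"per lane: {lane_gbps}Gb/s")
```

Each NDR port is thus four 100Gb/s lanes, and 64 such ports account for the 25.6Tb/s one-way aggregate.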

eXtended Data Rate (XDR): Continuing along the InfiniBand roadmap, the next step was XDR in 2023, which offers a remarkable 800Gb/s per port. Looking further ahead, the projected next goal is 1.6Tb/s with four lanes per port. It is worth noting that each advancement on the InfiniBand roadmap may bring a slight increase in latency due to forward error correction, although Nvidia is incorporating other technologies to mitigate this. Ethernet will experience similar forward-error-correction and port-latency increases, so the latency gap between InfiniBand and Ethernet should remain roughly constant.

Looking ahead, the introduction of Gxx Data Rate (GDR) technology is on the horizon. GDR is expected to surpass XDR with even higher bandwidth capabilities, charting a future where InfiniBand will continue to play a key role in breakthroughs across the technological landscape.

Data Rate                | Year           | 1X Link Bandwidth | 4X Link Bandwidth | 12X Link Bandwidth
Single Data Rate (SDR)   | 2002           | 2.5Gb/s           | 10Gb/s            | 30Gb/s
Double Data Rate (DDR)   | 2005           | 5Gb/s             | 20Gb/s            | 60Gb/s
Quad Data Rate (QDR)     | 2008           | 10Gb/s            | 40Gb/s            | 120Gb/s
Fourteen Data Rate (FDR) | 2011           | 14Gb/s            | 56Gb/s            | 168Gb/s
Enhanced Data Rate (EDR) | 2015           | 25Gb/s            | 100Gb/s           | 300Gb/s
High Data Rate (HDR)     | 2018           | 50Gb/s            | 200Gb/s           | 600Gb/s
Next Data Rate (NDR)     | 2021           | 100Gb/s           | 400Gb/s           | 1200Gb/s
eXtended Data Rate (XDR) | 2023           | 200Gb/s           | 800Gb/s           | 2400Gb/s
Gxx Data Rate (GDR)      | 2025 (planned) | 400Gb/s           | 1600Gb/s          | 4800Gb/s
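The table follows a simple rule: a 1X link carries a single lane, while 4X and 12X links aggregate 4 and 12 lanes at the same per-lane rate. A minimal sketch of that relationship, using the per-lane rates from the table:

```python
# Per-lane (1X) data rates in Gb/s for each InfiniBand generation,
# taken from the roadmap table above.
per_lane_gbps = {
    "SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14,
    "EDR": 25, "HDR": 50, "NDR": 100, "XDR": 200,
}

def link_gbps(generation: str, lanes: int) -> float:
    """Aggregate bandwidth of a link: per-lane rate times lane count."""
    return per_lane_gbps[generation] * lanes

print(link_gbps("HDR", 4))    # 200  -> matches the 4X column
print(link_gbps("NDR", 12))   # 1200 -> matches the 12X column
```

The same multiplication reproduces every cell of the table, which is why the roadmap is usually described by its per-lane rate alone.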

FS InfiniBand Network Product List

By offering higher network bandwidth and low-latency connections, InfiniBand finds extensive applications in high-performance computing, cloud computing, big data processing, real-time data applications, and storage systems, among others. The following FS InfiniBand network product list is provided for your reference.

Category                   | Product Type                | Model                                                                                                   | Description
InfiniBand Modules/DAC/AOC | 800G NDR InfiniBand         | OSFP-SR8-800G, OSFP-DR8-800G, OSFP-2FR4-800G, OSFP-800G-2OFLPC01, OSFP-800G-4OFLPC01                    | FS NVIDIA InfiniBand modules provide high-speed, low-latency communication for clusters and supercomputers, while InfiniBand cables offer reliable, high-bandwidth cabling solutions for efficient data transmission in data centers.
InfiniBand Modules/DAC/AOC | 400G NDR InfiniBand         | OSFP-SR4-400G-FL, OSFP-DR4-400G-FL, OSFP-400G-2QPC01, OSFP-400G-2QAO03                                  |
InfiniBand Modules/DAC/AOC | 200G HDR InfiniBand         | OSFP-SR4-400G-FL, OSFP-DR4-400G-FL, OSFP-400G-2QPC01, OSFP-400G-2QAO03                                  |
InfiniBand Modules/DAC/AOC | 100G EDR InfiniBand         | QSFP28-SR4-100GE, QSFP28-IR4-100GE, QSFP28-PIR4-100G, QSFP28-LR4-100GE, Q28-PC01E, Q28-AO01             |
InfiniBand Modules/DAC/AOC | 56/40G FDR InfiniBand       | QSFP-SR4-40G, QSFP-CSR4-40G, QSFP-LR4-40G, QSFP-PC01, QSFP-AO01, QSFP56-PC01, QSFP56-AO01               |
InfiniBand NICs            | NVIDIA® InfiniBand Adapters | MCX653105A-HDAT-SP, MCX653106A-HDAT-SP, MCX653105A-ECAT-SP, MCX653106A-ECAT-SP, MCX75510AAS-NEAT        | FS NVIDIA InfiniBand adapters, including ConnectX-6 and ConnectX-7 cards, deliver the highest performance and most flexible solution for the continually growing demands of data center applications.
InfiniBand Switches        | NVIDIA® InfiniBand Switches | MQM9700-NS2F, MQM9790-NS2F, MQM8700-HS2F, MQM8790-HS2F                                                  | FS provides 200G HDR and 400G NDR NVIDIA InfiniBand switches, which have latency below 130ns and high bandwidth for data centers.

What Is the Future Trend of InfiniBand Network Bandwidth?

The evolution of InfiniBand network bandwidth is a testament to the technology's adaptability and its critical role in advancing HPC. As datasets grow and computational needs become increasingly complex, InfiniBand's progression offers a glimpse into the future of networking technologies, where speed, efficiency, and reliability are paramount. As we anticipate further enhancements like GDR, it is clear that InfiniBand's journey is far from over: it will continue to push the boundaries of what is possible in high-performance computing and underpin the complex computational infrastructure needed to solve some of the world's most challenging problems.

Also Check- Getting to Know About InfiniBand
