
Getting to Know About InfiniBand

Posted on Dec 19, 2023

The surge in AI large models represented by ChatGPT in 2023 has significantly amplified the focus on InfiniBand technology, because the networks used to train GPT models are built on InfiniBand supplied by NVIDIA (through its acquisition of Mellanox). So what exactly is InfiniBand, and why is it so highly acclaimed? This article will help you gain a deep understanding of InfiniBand.

What Is InfiniBand and How Does It Work?

InfiniBand is a high-speed, low-latency interconnect technology primarily used in data centers and high-performance computing (HPC) environments. It provides a high-performance fabric for connecting servers, storage devices, and other network resources within a cluster or data center.

InfiniBand uses a layered architecture that separates the physical and link layers from the network and transport layers. The physical layer uses high-bandwidth serial links to provide direct point-to-point connectivity between devices, while the link layer handles the transmission and reception of data packets. The upper layers provide InfiniBand's defining features, including virtualization, quality of service (QoS), and remote direct memory access (RDMA). These features make InfiniBand a powerful tool for HPC workloads that require low latency and high bandwidth.
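
To make the RDMA feature more concrete, here is a minimal sketch (not from this article) using libibverbs, the open-source verbs API commonly used to program InfiniBand adapters. It only opens an adapter and registers a buffer so the hardware can access it directly; queue-pair creation and connection setup are omitted for brevity:

```c
/* Minimal libibverbs sketch: open an InfiniBand adapter and register a
 * memory region. Error handling is abbreviated; a real application would
 * also create queue pairs and exchange connection details out of band. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    /* Open the first HCA (host channel adapter). */
    struct ibv_context *ctx = ibv_open_device(devs[0]);

    /* A protection domain groups resources that may work together. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the adapter can DMA into it directly --
     * this is what lets RDMA bypass the remote CPU. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```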

For more details about InfiniBand technology, check out What Makes InfiniBand Stand Out?


Benefits of InfiniBand

  • High Speed and Scalability: InfiniBand offers high-speed communication with scalability. The IBTA standard's Single Data Rate (SDR) signaling runs at 2.5Gb/s per lane, giving 10Gb/s over 4X links and up to 30Gb/s over 12X links. Double Data Rate (DDR) and Quad Data Rate (QDR) signaling scale this to 5Gb/s and 10Gb/s per lane respectively, allowing a maximum data rate of 120Gb/s over 12X cables (a worked example of this lane arithmetic follows this list).

  • Low Latency: Ethernet latencies typically range from 20 to 80 microseconds, largely because packets must be processed and reassembled by the host's network stack. In contrast, InfiniBand achieves latencies of typically 3 to 5 microseconds, with some manufacturers claiming as low as 1 to 2 microseconds. Low latency is crucial for clustered computing applications, as it affects overall performance and the speed at which applications can access data. This ability to minimize latency makes InfiniBand well suited to applications that require fast processing of tightly coupled requests, such as financial trading data centers.

  • Low Power Consumption: InfiniBand technology is designed to reduce power consumption, which is a significant concern for network managers. InfiniBand requires less power compared to currently available 10Gb/s Ethernet technologies. For example, first-generation 10GBASE-SR and 10GBASE-T Ethernet server adapter cards have higher power consumption (10-25W) compared to InfiniBand adapter cards (3-4W for copper and 1W for fiber). The reduced power draw of InfiniBand becomes particularly advantageous in clustered data center architectures with hundreds or thousands of nodes, leading to lower total cost of operation and greener data center facilities.
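
As a worked example of the lane arithmetic quoted above, this small C sketch multiplies the per-lane signaling rate of each early generation by the 1X, 4X, and 12X link widths; the rates are the figures cited in this article:

```c
/* Aggregate link rate = per-lane signaling rate x lane count.
 * SDR/DDR/QDR per-lane rates follow the figures quoted above. */
#include <stdio.h>

int main(void)
{
    const char  *gen[]           = { "SDR", "DDR", "QDR" };
    const double per_lane_gbps[] = { 2.5,   5.0,   10.0  };
    const int    widths[]        = { 1, 4, 12 };

    for (int g = 0; g < 3; g++)
        for (int w = 0; w < 3; w++)
            printf("%s %2dX: %6.1f Gb/s\n",
                   gen[g], widths[w],
                   per_lane_gbps[g] * widths[w]);
    return 0;
}
```

Running it reproduces the rates above, e.g. "QDR 12X: 120.0 Gb/s" for the maximum quoted data rate.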

InfiniBand Applications

Today, InfiniBand technology has transformed how supercomputers, storage systems, data centers, clusters, and HPC workloads operate. Its high-speed data transfer, low latency, and scalability make it a preferred choice for demanding computing environments across many industries.

InfiniBand in Supercomputing: Powering the World’s Fastest Computers

InfiniBand serves as the driving force behind many of the world's fastest supercomputers. The technology enables a remarkable degree of parallelism, allowing vast amounts of data to be processed simultaneously. In high-performance computing (HPC), InfiniBand remains at the core of systems used in diverse scientific fields such as physics, biology, and meteorology. According to the TOP500 list, which ranks the world's fastest supercomputers, InfiniBand powers over 70% of the systems in the top 500.

InfiniBand in High-Performance Storage Systems

In high-performance storage systems, InfiniBand plays an indispensable role. Its high-speed links enable rapid data communication between servers and storage systems, and its remarkably low latency and high bandwidth are particularly advantageous for data-intensive applications that need swift access to large data repositories. InfiniBand also helps reduce storage latency in database applications, accelerating query processing and transaction handling.

InfiniBand in Data Centers and Clusters

Within the dynamic environments of data centers and clusters, InfiniBand technology interconnects servers and various devices, including storage systems. By harnessing the high-speed communication capabilities of InfiniBand, clusters can operate seamlessly as a unified entity, thereby enhancing performance for applications that rely on parallel computing. Additionally, InfiniBand presents a dependable and scalable interconnect for virtualization platforms, optimizing the utilization of server resources.

InfiniBand in HPC Workloads

InfiniBand's exceptional bandwidth and low latency play a pivotal role in HPC workloads, enabling the simultaneous processing of vast data volumes and boosting the performance of scientific research applications. InfiniBand is also used for demanding workloads such as financial modeling, oil and gas exploration, and other data-intensive tasks.

InfiniBand Q&A

Q: How does InfiniBand compare to other networking technologies like Ethernet?

A: InfiniBand generally provides higher data transfer speeds and lower latency compared to standard Ethernet. It is particularly well-suited for high-performance computing and applications that require direct memory access.

If you want to know the difference between CXL and InfiniBand, check this post: CXL vs Infiniband: Which One to Choose for High-speed Interconnection?

Q: What is RDMA?

A: RDMA stands for Remote Direct Memory Access, a technology that enables direct data transfer between the memory of two computers without CPU involvement. InfiniBand supports RDMA, ensuring fast and efficient data transfers.

Also check: RDMA over Converged Ethernet Guide

Q: How does InfiniBand transfer data?

A: InfiniBand utilizes the RDMA protocol for data transfer. RDMA allows for direct memory access between machines without CPU involvement, enabling fast and efficient data transfers with minimal latency. A hedged sketch of posting an RDMA write is shown below.
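
To illustrate what "without CPU involvement" looks like in code, below is a sketch of posting an RDMA write with libibverbs. The queue pair, remote buffer address, and remote key are assumptions here; in practice they come from an out-of-band exchange between the two machines:

```c
/* Hedged sketch: issue an RDMA write once a queue pair (qp) is connected
 * and the peer's buffer address and rkey have been exchanged out of band.
 * All parameters are supplied by the surrounding application. */
#include <stdint.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, void *local_buf, uint32_t len,
                    uint32_t lkey, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf, /* local source buffer */
        .length = len,
        .lkey   = lkey,                 /* from ibv_reg_mr() */
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE, /* remote CPU is not involved */
        .send_flags = IBV_SEND_SIGNALED, /* request a completion entry */
    };
    wr.wr.rdma.remote_addr = remote_addr; /* peer address from rendezvous */
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr); /* 0 on success */
}
```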

Q: What is the InfiniBand Trade Association (IBTA)?

A: The IBTA is an industry consortium that develops and promotes InfiniBand technology. It works with member companies to define and maintain industry standards for InfiniBand.

Q: What are the different generations of InfiniBand technology?

A: InfiniBand has progressed through several generations, including QDR (Quad Data Rate), FDR (Fourteen Data Rate), EDR (Enhanced Data Rate), HDR (High Data Rate), and most recently NDR. Each generation raises the per-lane signaling rate, offering higher bandwidth and improved performance (see the sketch below).
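
For a rough comparison across generations, the sketch below multiplies commonly published approximate per-lane signaling rates by the typical 4X port width; these roadmap figures are widely cited but are not from this article:

```c
/* Approximate per-lane signaling rates by generation (Gb/s), times the
 * common 4X port width. Figures are widely published roadmap values. */
#include <stdio.h>

int main(void)
{
    const char  *gen[]  = { "SDR", "DDR", "QDR", "FDR",    "EDR", "HDR", "NDR"  };
    const double lane[] = { 2.5,   5.0,   10.0,  14.0625,  25.0,  50.0,  100.0 };

    for (int i = 0; i < 7; i++)
        printf("%s 4X: %g Gb/s\n", gen[i], lane[i] * 4.0);
    return 0;
}
```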

Q: What is the future outlook for InfiniBand technology?

A: InfiniBand continues to evolve, with advancements in bandwidth, latency, and scalability. It is expected to remain a key technology in high-performance computing, data centers, and other demanding applications in the foreseeable future.

Explore InfiniBand Products

As the need for faster data transfer speeds continues to rise, InfiniBand network systems have gained popularity as an ideal solution for data centers. InfiniBand offers high-speed connections, making it a preferred choice for applications that demand minimal latency and superior throughput.

NVIDIA InfiniBand Switches

A crucial component of an InfiniBand network is its switches, which direct data between connected devices, forwarding packets at the link layer with very low latency. FS provides HDR 200Gb/s and NDR 400Gb/s NVIDIA InfiniBand switches, which offer latency of less than 130ns and high bandwidth for data centers.


NVIDIA InfiniBand Transceivers and Cables

The backbone of an InfiniBand network relies on transceivers and cables, which carry high-speed data between devices. These cables come in a range of lengths, from short patch runs to longer spans, allowing flexible network layouts. One notable attribute of InfiniBand cables and connectors is their use of active copper technology, which amplifies signals within the wires, improving signal integrity and minimizing loss over longer distances. FS helps boost network efficiency with 40G, 56G, 100G, 200G, 400G, and 800G NVIDIA InfiniBand transceivers and DAC/AOC cables.


NVIDIA InfiniBand Adapters

InfiniBand adapters, functioning as network interface cards (NICs), enable devices to connect to InfiniBand networks. FS NVIDIA InfiniBand adapters, including ConnectX-6 and ConnectX-7 cards, deliver high performance and flexibility to meet the continually growing demands of data center applications.

