
InfiniBand, What Exactly Is It?

Posted on Mar 8, 2024

InfiniBand, a high-speed networking technology, has seen significant development since its inception. The rapid rise of large AI models, exemplified by ChatGPT, has also drawn renewed attention to it. Understanding its journey, technical principles, advantages, product offerings, and prospects is essential for anyone interested in cutting-edge networking solutions.

The Development Journey of InfiniBand

InfiniBand's inception dates back to the late 1990s when the InfiniBand Trade Association (IBTA) was formed to develop and promote the technology. Initially envisioned as a solution for high-performance computing clusters, InfiniBand has since expanded its reach into various other domains, including data centers, cloud computing, and artificial intelligence.

Technical Principles of InfiniBand

After outlining the evolution of InfiniBand, let's delve into its working principle and the reasons behind its superiority over traditional Ethernet.

RDMA: The Foundational Capability

Remote Direct Memory Access (RDMA) is the foundational capability of InfiniBand. It allows data to be transferred directly between the application memory of two hosts, bypassing the operating system kernel and leaving the remote CPU uninvolved. This significantly reduces latency and processing overhead, enhancing overall system efficiency.
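
To make this concrete, here is a minimal sketch of a one-sided RDMA write using the standard libibverbs API. It assumes a hypothetical helper, connect_rc_qp(), which is not part of libibverbs: it stands in for the setup code that opens the HCA, creates and connects a reliable-connected queue pair, and exchanges the peer's buffer address and rkey out of band. Error handling is omitted for brevity.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    /* Hypothetical helper (not a libibverbs call): opens the device, creates a
     * protection domain, completion queue, and RC queue pair, connects the QP
     * to the peer, and returns the peer's buffer address and rkey. */
    struct ibv_qp *connect_rc_qp(struct ibv_pd **pd, struct ibv_cq **cq,
                                 uint64_t *remote_addr, uint32_t *remote_rkey);

    int main(void) {
        struct ibv_pd *pd;
        struct ibv_cq *cq;
        uint64_t remote_addr;
        uint32_t remote_rkey;
        struct ibv_qp *qp = connect_rc_qp(&pd, &cq, &remote_addr, &remote_rkey);

        /* Register local memory so the HCA can DMA it directly. */
        char *buf = malloc(4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096, IBV_ACCESS_LOCAL_WRITE);

        struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = 4096,
                               .lkey = mr->lkey };
        struct ibv_send_wr wr = {
            .sg_list = &sge,
            .num_sge = 1,
            .opcode = IBV_WR_RDMA_WRITE,      /* one-sided: remote CPU uninvolved */
            .send_flags = IBV_SEND_SIGNALED,
            .wr.rdma.remote_addr = remote_addr,
            .wr.rdma.rkey = remote_rkey,
        };
        struct ibv_send_wr *bad;
        ibv_post_send(qp, &wr, &bad);         /* hand the transfer to the HCA */

        struct ibv_wc wc;                     /* poll for hardware completion */
        while (ibv_poll_cq(cq, 1, &wc) == 0)
            ;
        printf("RDMA write done, status=%s\n", ibv_wc_status_str(wc.status));
        return 0;
    }

Note that the receive side runs no software at all for this transfer: the write completes directly in the remote host's registered memory, which is what keeps end-to-end latency in the microsecond range.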


InfiniBand Network and Architecture

InfiniBand employs a switched fabric architecture, where multiple nodes are interconnected through switches. This architecture provides high bandwidth and low latency communication between nodes, making it ideal for demanding applications.

InfiniBand uses a channel-based architecture, and its components can be broadly categorized into four main groups:

  • HCA (Host Channel Adapter): This unit serves as the interface between the host system and the InfiniBand network. It facilitates the transmission of data between the host and other devices connected to the network, and it is what applications open and enumerate in software (see the sketch after this list).

  • TCA (Target Channel Adapter): Opposite to the HCA, the TCA operates on the target devices within the InfiniBand network. It manages data reception and processing on these target devices.

  • InfiniBand link: This forms the physical connection channel within the InfiniBand network. It can be established using various mediums such as cables, optical fibers, or even on-board links within devices.

  • InfiniBand switches and routers: These components play a crucial role in facilitating networking within the InfiniBand infrastructure. They manage the routing of data packets between different devices connected to the network, enabling seamless communication and data exchange.
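
As a quick illustration of the HCA's role as the software entry point, here is a minimal sketch, assuming the standard libibverbs API and at least one installed adapter, that lists the channel adapters present in a host:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int n;
        struct ibv_device **devs = ibv_get_device_list(&n);  /* enumerate HCAs */
        if (!devs) { perror("ibv_get_device_list"); return 1; }
        for (int i = 0; i < n; i++)
            printf("HCA %d: %s\n", i, ibv_get_device_name(devs[i]));
        ibv_free_device_list(devs);
        return 0;
    }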


For more detailed information, please refer to InfiniBand Network and Architecture Overview.

InfiniBand Protocol Stack

InfiniBand utilizes a layered protocol stack consisting of physical, link, network, and transport layers, topped by upper-layer protocols. Each layer plays a crucial role in ensuring efficient and reliable communication between InfiniBand devices.
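
In software, the transport layer is the part applications touch most directly: it is exposed as queue pairs with different service types (reliable connection, unreliable connection, unreliable datagram). As a small illustrative sketch using libibverbs, assuming at least one HCA is present, the following program creates a reliable-connected queue pair:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int n;
        struct ibv_device **devs = ibv_get_device_list(&n);
        if (!devs || n == 0) { fprintf(stderr, "no InfiniBand devices\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);            /* protection domain */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
            .qp_type = IBV_QPT_RC,   /* reliable connection transport service */
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &attr);
        printf("created RC queue pair, qp_num=0x%x\n", qp->qp_num);

        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }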


InfiniBand Link Rates

InfiniBand supports multiple link rates, from Single Data Rate (SDR), Double Data Rate (DDR), and Quad Data Rate (QDR) through FDR, EDR, HDR, and NDR, with each successive generation offering higher bandwidth and improved performance.
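
The familiar 4x port speeds follow from simple per-lane arithmetic, as the short sketch below shows. The per-lane figures are the commonly quoted nominal rates; usable throughput also depends on the line encoding (8b/10b for SDR/DDR/QDR, 64b/66b from FDR onward).

    #include <stdio.h>

    int main(void) {
        /* Nominal per-lane rates in Gb/s; a standard InfiniBand 4x port
         * aggregates four lanes. */
        const char  *gen[]  = { "SDR", "DDR", "QDR", "FDR", "EDR", "HDR", "NDR" };
        const double lane[] = {  2.5,   5.0,  10.0,  14.0,  25.0,  50.0, 100.0 };
        for (int i = 0; i < 7; i++)
            printf("%-3s: 4 lanes x %5.1f Gb/s = %3.0f Gb/s per 4x port\n",
                   gen[i], lane[i], 4.0 * lane[i]);
        return 0;
    }

Running this prints the nominal ladder: 10, 20, 40, 56, 100, 200, and 400 Gb/s, matching the SDR-through-NDR generations.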

Benefits of InfiniBand

InfiniBand stands out from traditional networking technologies due to several key advantages:

  • High-Speed Data Transfer: InfiniBand excels in delivering exceptionally high data transfer rates, facilitating swift and efficient communication between nodes within the network. This rapid exchange of data is instrumental in supporting demanding applications and workloads that require substantial throughput.

  • Low Latency: InfiniBand achieves low latency via RDMA and a switched fabric architecture. RDMA enables direct data transfers between memory locations, reducing processing overhead. The switched fabric architecture ensures efficient data routing, crucial for latency-sensitive applications like high-frequency trading and scientific computing.

  • Scalability: InfiniBand's scalable architecture meets the evolving needs of modern data centers. It seamlessly expands network capacity to accommodate increasing workloads and supports dynamic resource allocation. Whether scaling up for growing data volumes or out to support additional compute resources, InfiniBand offers the flexibility needed to adapt effectively.

InfiniBand Products

FS offers a comprehensive range of InfiniBand products, including switches, adapters, transceivers, and cables, catering to a variety of networking requirements. These products are designed to deliver the high performance, reliability, and scalability demanded by modern data center environments.

InfiniBand Switches

Vital for directing data within InfiniBand networks, these switches ensure high-speed data transmission at the physical layer. FS offers HDR 200Gb/s and NDR 400Gb/s switches with latency under 130ns, ideal for data centers demanding exceptional bandwidth.

 

Product             MQM8790-HS2F        MQM8700-HS2F        MQM9700-NS2F        MQM9790-NS2F
Link Speed          200Gb/s             200Gb/s             400Gb/s             400Gb/s
Ports               40                  40                  32                  32
Switch Chip         NVIDIA QUANTUM      NVIDIA QUANTUM      NVIDIA QUANTUM-2    NVIDIA QUANTUM-2
Switching Capacity  16Tb/s              16Tb/s              51.2Tb/s            51.2Tb/s
Fans                5+1 Hot-swappable   5+1 Hot-swappable   6+1 Hot-swappable   6+1 Hot-swappable
Power Supply        1+1 Hot-swappable   1+1 Hot-swappable   1+1 Hot-swappable   1+1 Hot-swappable

InfiniBand Adapters

Serving as network interface cards (NICs), InfiniBand adapters enable devices to connect to InfiniBand networks. FS offers ConnectX-6 and ConnectX-7 cards, providing top performance and flexibility to meet the evolving demands of data center applications.

 

Product             Ports                PCIe Interface   InfiniBand Data Rates               Ethernet Data Rates
MCX75310AAC-NEAT    Single-Port OSFP     PCIe 5.0 x16     NDR/NDR200/HDR/HDR100/EDR/FDR/SDR   400/200/100/50/40/10/1 Gb/s
MCX715105AS-WEAT    Single-Port QSFP112  PCIe 5.0 x16     NDR/NDR200/HDR/HDR100/EDR/FDR/SDR   400/200/100/50/25/10 Gb/s
MCX653105A-HDAT-SP  Single-Port QSFP56   PCIe 4.0 x16     HDR/EDR/FDR/QDR/DDR/SDR             200/50/40/25/20/1 Gb/s
MCX653106A-HDAT-SP  Dual-Port QSFP56     PCIe 4.0 x16     HDR/EDR/FDR/QDR/DDR/SDR             200/50/40/25/20/1 Gb/s
MCX653105A-ECAT-SP  Single-Port QSFP56   PCIe 4.0 x16     HDR100/EDR/FDR/QDR/DDR/SDR          100/50/40/25/20/1 Gb/s
MCX653106A-ECAT-SP  Dual-Port QSFP56     PCIe 4.0 x16     HDR100/EDR/FDR/QDR/DDR/SDR          100/50/40/25/20/1 Gb/s
MCX75510AAS-NEAT    Single-Port OSFP     PCIe 5.0 x16     NDR200/NDR/HDR/EDR/FDR/SDR          -

InfiniBand Transceivers and Cables

The backbone of an InfiniBand network relies on transceivers and cables for high-speed data transfer. FS provides a range of optical transceivers and DAC/AOC cables, including 40G, 56G, 100G, 200G, 400G, and 800G options: DACs suit short in-rack runs, while AOCs and optical modules preserve signal integrity and minimize loss over longer distances between racks and rows.

 

  • 800G NDR InfiniBand
    Modules: 850nm 50m; 1310nm 500m; 1310nm 2km
    AOC: -
    DAC: 0.5m, 1m, 1.5m, 2m

  • 400G NDR InfiniBand
    Modules: 850nm 50m; 1310nm 500m
    AOC: 3m, 5m, 10m
    DAC: 1m, 1.5m, 2m

  • 200G HDR InfiniBand
    Modules: 850nm 100m; 1310nm 2km
    AOC: 1m, 2m, 3m, 5m, 7m, 10m, 15m, 20m, 30m, 50m, 100m
    DAC: 0.5m, 1m, 1.5m, 2m

  • 100G EDR InfiniBand
    Modules: 850nm 100m; 1310nm 500m; 1310nm 2km; 1310nm 10km
    AOC: 1m, 2m, 3m, 5m, 7m, 10m, 15m, 20m, 30m, 50m, 100m
    DAC: 0.5m, 1m, 1.5m, 2m, 3m

  • 56G FDR InfiniBand
    Modules: -
    AOC: 1m, 2m, 3m, 5m, 10m, 15m, 20m, 25m, 30m, 50m, 100m
    DAC: 0.5m, 1m, 1.5m, 2m, 3m, 4m, 5m

  • 40G FDR InfiniBand
    Modules: 850nm 150m
    AOC: 1m, 3m, 5m, 7m, 10m, 15m, 20m, 25m, 30m, 50m, 100m
    DAC: 0.5m, 1m, 1.5m, 2m, 3m, 4m, 5m
 

Future Prospects of InfiniBand

InfiniBand is poised to continue its growth trajectory, fueled by advances in high-performance computing, artificial intelligence, and cloud computing. Emerging workloads such as exascale computing and high-performance data analytics (HPDA) will further drive its adoption, cementing its position as a critical component of next-generation data center architectures. InfiniBand remains at the forefront of networking, enabling the high-performance, low-latency communication that today's most demanding applications require.
