
InfiniBand vs. Ethernet: What Are They?

Updated on Mar 1, 2023

As interconnection technologies, InfiniBand and Ethernet each have their own characteristics and strengths, and it is impossible to generalize about which one is better. They continue to develop and evolve in different application fields, and have become two indispensable interconnection technologies in our network world.

InfiniBand vs. Ethernet Network: What Are They?

InfiniBand Network

From a design point of view, InfiniBand and Ethernet differ greatly. As an interconnection technology, InfiniBand is widely used in supercomputer clusters due to its high reliability, low latency, and high bandwidth. In addition, it is the preferred network interconnection technology for GPU servers.

The InfiniBand standard allows Single Data Rate (SDR) signaling at a base rate of 2.5Gbit/s per lane, achieving a raw data rate of 10Gbit/s over 4X cables. Double Data Rate (DDR) and Quad Data Rate (QDR) signaling scale a single lane to 5Gbit/s and 10Gbit/s respectively, giving a potential maximum data rate of 40Gbit/s over 4X cables and 120Gbit/s over 12X cables.
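The aggregate figures above are simply the per-lane signaling rate multiplied by the lane count. The short Python sketch below reproduces that arithmetic; it is purely illustrative, and the rate table hard-coded in it comes from the paragraph above rather than from any vendor specification.

```python
# Illustrative arithmetic only: reproduces the raw InfiniBand link rates quoted above.
# Per-lane signaling rates (Gbit/s) for the early InfiniBand generations.
PER_LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

def raw_link_rate(generation: str, lanes: int) -> float:
    """Raw (signaling) link rate in Gbit/s = per-lane rate x lane count."""
    return PER_LANE_GBPS[generation] * lanes

if __name__ == "__main__":
    for gen in ("SDR", "DDR", "QDR"):
        for lanes in (1, 4, 12):
            print(f"{gen} {lanes}X: {raw_link_rate(gen, lanes):g} Gbit/s")
    # QDR 4X -> 40 Gbit/s, QDR 12X -> 120 Gbit/s, matching the figures above.
```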


Ethernet Network

Since its introduction on September 30, 1980, the Ethernet standard has become the most widely used communication protocol in local area networks. Unlike InfiniBand, Ethernet was designed with one main goal in mind: how can information flow easily between multiple systems? It is a typical network designed for distribution and compatibility. Traditional Ethernet mainly uses TCP/IP to build the network, and it has gradually evolved toward RoCE (RDMA over Converged Ethernet).

In general, Ethernet networks are primarily used to connect multiple computers and other devices, such as printers and scanners, to a local area network. Devices can be attached to a wired network over optical fiber cable, or join an Ethernet network wirelessly through wireless networking technology. Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, and Switched Ethernet are all major types of Ethernet.


 

Also Check - A Quick Look at the Differences: RoCE vs InfiniBand RDMA vs TCP/IP

InfiniBand vs. Ethernet: What Are the Differences Between Them?

InfiniBand was originally designed to remove the bottleneck of cluster data transmission in high-performance computing scenarios, and it has grown into an interconnection standard that meets the requirements of the times. InfiniBand and Ethernet therefore differ in many respects, mainly in bandwidth, latency, network reliability, networking methods, and application scenarios.

Bandwidth

Since the birth of InfiniBand, its bandwidth has grown faster than Ethernet's for a long time. The main reason is that InfiniBand is used for server-to-server interconnection in high-performance computing and must also reduce CPU load. Ethernet, by contrast, is oriented more toward terminal-device interconnection, where the demand for bandwidth is not as high.

For high-speed network traffic above 10Gbps, unpacking every packet in the CPU consumes significant resources. The first generation of SDR InfiniBand already ran at 10Gbps, and by offloading protocol processing from the CPU it increases data transmission bandwidth, reduces CPU load, and improves network utilization.

Network Latency

InfiniBand and Ethernet also behave very differently when it comes to network latency. Ethernet switches typically use store-and-forward switching and MAC table lookup addressing as Layer 2 technologies in the network transport model. Their processing flow is longer than that of InfiniBand switches because complex services such as IP, MPLS, and QinQ must also be handled.

Layer 2 processing in an InfiniBand switch, on the other hand, is very straightforward: the 16-bit LID is the only field used to look up forwarding path information. In addition, cut-through forwarding is used to reduce the forwarding delay to less than 100 ns, significantly faster than an Ethernet switch.
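To see why cut-through matters, compare the per-hop delay of the two forwarding modes in a simple back-of-the-envelope model. The Python sketch below uses assumed example numbers (frame size, link rate, lookup time, header size); they are not vendor figures, only an illustration of the store-and-forward versus cut-through trade-off.

```python
# Simplified model of per-hop forwarding delay (illustrative assumptions only).

def store_and_forward_delay_ns(frame_bytes: int, link_gbps: float, lookup_ns: float) -> float:
    """The whole frame must be received before lookup and retransmission begin."""
    serialization_ns = frame_bytes * 8 / link_gbps  # bits / (Gbit/s) gives nanoseconds
    return serialization_ns + lookup_ns

def cut_through_delay_ns(header_bytes: int, link_gbps: float, lookup_ns: float) -> float:
    """Forwarding starts as soon as the header carrying the destination (e.g. a 16-bit LID) arrives."""
    header_ns = header_bytes * 8 / link_gbps
    return header_ns + lookup_ns

if __name__ == "__main__":
    # Assumed example: 4096-byte frame on a 100 Gbit/s link, 50 ns table lookup,
    # 32 bytes of header needed before a cut-through decision can be made.
    print(f"store-and-forward: {store_and_forward_delay_ns(4096, 100, 50):.0f} ns")
    print(f"cut-through:       {cut_through_delay_ns(32, 100, 50):.0f} ns")
```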

Network Reliability

Since packet loss and retransmission have a significant impact on the overall performance of high-performance computing, a highly reliable network protocol is needed to guarantee lossless behavior at the mechanism level. InfiniBand is a complete network protocol with its own definitions of layers 1 through 4. End-to-end flow control is the basis of packet sending and receiving in an InfiniBand network, which makes a lossless network possible.

Compared with InfiniBand, Ethernet has no scheduling-based flow-control mechanism, so there is no guarantee that the peer end will not be congested when packets are sent. To absorb sudden bursts of instantaneous traffic, the switch must set aside tens of megabytes of buffer space to temporarily store these packets, which occupies chip resources. As a result, an Ethernet switch chip of the same specification is significantly larger than an InfiniBand chip, costing more and consuming more power.
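To build intuition for how a flow-controlled, lossless link differs from the buffer-and-hope approach described above, here is a minimal credit-style sketch in Python. It is a toy model of the general idea of credit-based flow control, not InfiniBand's actual link protocol or packet format.

```python
# Toy model of credit-based flow control: the sender may only transmit when the
# receiver has advertised buffer space, so the receiver never has to drop a packet.

class Receiver:
    def __init__(self, buffer_slots: int):
        self.free_slots = buffer_slots          # advertised to the sender as "credits"
        self.delivered = 0

    def grant_credits(self) -> int:
        return self.free_slots

    def accept(self) -> None:
        assert self.free_slots > 0, "lossless invariant violated"
        self.free_slots -= 1

    def drain(self, n: int) -> None:
        """Application consumes n packets, returning buffer space to the pool."""
        self.delivered += n
        self.free_slots += n

class Sender:
    def __init__(self, receiver: Receiver):
        self.receiver = receiver
        self.credits = receiver.grant_credits()
        self.sent = 0

    def try_send(self, packets: int) -> int:
        """Send at most as many packets as we hold credits for; never overflow the peer."""
        to_send = min(packets, self.credits)
        for _ in range(to_send):
            self.receiver.accept()
        self.credits -= to_send
        self.sent += to_send
        return to_send

if __name__ == "__main__":
    rx = Receiver(buffer_slots=8)
    tx = Sender(rx)
    print("burst of 20, sent:", tx.try_send(20))   # only 8 go out; none are dropped
    rx.drain(8)                                     # receiver frees its buffers...
    tx.credits = rx.grant_credits()                 # ...and re-advertises credits
    print("retry, sent:", tx.try_send(12))          # the next 8 can now be sent
```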

Networking Methods

In terms of networking mode, an InfiniBand network is simpler to manage than an Ethernet network. The idea of SDN is built into InfiniBand by design: a subnet manager is present on each InfiniBand layer 2 network to assign a Local ID (LID) to every node, uniformly calculate forwarding path information through the control plane, and push it down to the InfiniBand switches. Such a layer 2 network therefore requires no manual configuration to be brought up.

In Ethernet networking, MAC entries are generated automatically by learning, and IP forwarding must cooperate with the ARP protocol. Each server in the network must also send packets periodically to keep these entries up to date. To divide the virtual network and limit its scale, a VLAN mechanism must be implemented. And because Ethernet's flood-and-learn behavior can create forwarding loops when the topology has redundant links, protocols such as STP must be deployed to keep the forwarding path loop-free, which further increases the complexity of network configuration. A toy contrast of the two approaches is sketched below.
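The management difference can be made concrete with a small model: an Ethernet switch fills its MAC table passively from traffic and floods unknown destinations, while an InfiniBand switch simply looks up a LID table that the subnet manager computed and installed. The classes below are hypothetical simplifications written for this comparison, not real switch or subnet-manager APIs.

```python
# Toy contrast between flood-and-learn Ethernet forwarding and
# subnet-manager-installed InfiniBand LID forwarding (hypothetical simplification).

class EthernetSwitch:
    """Learns source MAC -> port from traffic; floods when the destination is unknown."""
    def __init__(self):
        self.mac_table = {}

    def forward(self, src_mac: str, dst_mac: str, in_port: int, all_ports: list[int]) -> list[int]:
        self.mac_table[src_mac] = in_port              # learn on every frame
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]           # known unicast
        return [p for p in all_ports if p != in_port]  # unknown: flood (loops without STP)

class InfiniBandSwitch:
    """Forwarding table is computed centrally and pushed down by the subnet manager."""
    def __init__(self):
        self.lid_table = {}

    def install_routes(self, routes: dict[int, int]) -> None:
        self.lid_table = dict(routes)                  # LID -> output port, from the SM

    def forward(self, dst_lid: int) -> int:
        return self.lid_table[dst_lid]                 # single exact lookup, no flooding

if __name__ == "__main__":
    eth = EthernetSwitch()
    print(eth.forward("aa:aa", "bb:bb", in_port=1, all_ports=[1, 2, 3]))  # flood: [2, 3]
    print(eth.forward("bb:bb", "aa:aa", in_port=2, all_ports=[1, 2, 3]))  # learned: [1]

    ib = InfiniBandSwitch()
    ib.install_routes({0x0001: 1, 0x0002: 2})          # pushed by the subnet manager
    print(ib.forward(0x0002))                          # port 2
```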


Application Scenarios

InfiniBand is widely used in HPC environments due to its high bandwidth, low latency, and optimized support for parallel computing. It is designed to handle the demanding communication requirements of HPC clusters, where large-scale data processing and frequent inter-node communication are crucial. Ethernet, on the other hand, is commonly used in enterprise networking, internet access and home networking, and its main advantages are low cost, standardization, and wide support.

In recent developments, the demand for large-scale computing capabilities has surged, driving the need for high-speed communication within machines and low-latency, high-bandwidth communication between machines in super-large-scale clusters. According to statistics from the TOP500 supercomputer list, InfiniBand networks play a crucial role among the top 10 and top 100 systems.

Finding Your InfiniBand Products

Judging from the comparison between InfiniBand and Ethernet above, the advantages of InfiniBand networks are very prominent. The rapid iteration of InfiniBand networks, from SDR 10Gbps, DDR 20Gbps, QDR 40Gbps, FDR 56Gbps, and EDR 100Gbps to today's 800Gbps InfiniBand, all benefits from RDMA technology.

FS offers many InfiniBand products, including InfiniBand transceivers & DAC/AOC cables, InfiniBand adapters, and InfiniBand switches. Let's take a look at them one by one.

InfiniBand Transceivers & DAC/AOC Cables

FS offers a rich set of 40G-800G InfiniBand transceivers and cables to help boost highly efficient interconnection of computing and storage infrastructure.

Product Type | Product | Application | Connector
InfiniBand Transceiver | 40G Transceiver | InfiniBand FDR10 | MTP/MPO-12
InfiniBand Transceiver | 100G Transceiver | InfiniBand EDR | Duplex LC
InfiniBand Transceiver | 200G Transceiver | InfiniBand HDR | MTP/MPO-12
InfiniBand Transceiver | 400G Transceiver | InfiniBand NDR | MTP/MPO-12 APC
InfiniBand Transceiver | 800G Transceiver | InfiniBand NDR | Dual MTP/MPO-12 APC
InfiniBand DAC Cable | 40G DAC Cable | InfiniBand FDR10 | QSFP+ to QSFP+
InfiniBand DAC Cable | 56G DAC Cable | InfiniBand FDR | QSFP+ to QSFP+
InfiniBand DAC Cable | 100G DAC Cable | InfiniBand EDR | QSFP28 to QSFP28
InfiniBand DAC Cable | 200G DAC Cable | InfiniBand HDR | QSFP56 to QSFP56; QSFP56 to 2x QSFP56
InfiniBand DAC Cable | 400G DAC Cable | InfiniBand HDR | OSFP to 2x QSFP56
InfiniBand DAC Cable | 800G DAC Cable | InfiniBand NDR | OSFP to OSFP; OSFP to 2x OSFP; OSFP to 4x OSFP
InfiniBand AOC Cable | 40G AOC Cable | InfiniBand FDR10 | QSFP+ to QSFP+
InfiniBand AOC Cable | 56G AOC Cable | InfiniBand FDR | QSFP+ to QSFP+
InfiniBand AOC Cable | 100G AOC Cable | InfiniBand EDR | QSFP28 to QSFP28
InfiniBand AOC Cable | 200G AOC Cable | InfiniBand HDR | QSFP56 to QSFP56; QSFP56 to 2x QSFP56; 2x QSFP56 to 2x QSFP56
InfiniBand AOC Cable | 400G AOC Cable | InfiniBand HDR | OSFP to 2x QSFP56

InfiniBand Adapters

FS InfiniBand adapters deliver high performance and flexibility aimed at meeting the continually growing demands of data center applications. In addition to all the innovative features of past versions, ConnectX-6 and ConnectX-7 cards offer a number of enhancements to further improve performance and scalability.

Products | Speed | Host Interface | Ports
MCX653105A-ECAT-SP | HDR and 100Gb/s | PCIe 4.0 x16 | Single-Port
MCX653106A-HDAT-SP | HDR and 200Gb/s | PCIe 4.0 x16 | Dual-Port
MCX653106A-ECAT-SP | HDR and 100Gb/s | PCIe 4.0 x16 | Dual-Port
MCX653105A-HDAT-SP | HDR and 200Gb/s | PCIe 4.0 x16 | Single-Port
MCX75510AAS-NEAT | NDR and 400Gb/s | PCIe 5.0 x16 | Single-Port

InfiniBand Switches

FS InfiniBand switches, powered by NVIDIA Quantum/Quantum-2, deliver high-speed interconnect of up to 200Gb/s and 400Gb/s with extremely low latency, providing a scalable solution that accelerates research, innovation, and product development for scientific researchers.

Products | MQM8700-HS2F | MQM8790-HS2F | MQM9700-NS2F | MQM9790-NS2F
Port Type | 40 x HDR QSFP56 | 40 x HDR QSFP56 | 64 x NDR 400G | 64 x NDR 400G
Function | Managed switch | Unmanaged switch | Managed switch | Unmanaged switch
Software | MLNX-OS | MLNX-OS | MLNX-OS | MLNX-OS
AC Power Supplies | 1+1 Hot-swappable | 1+1 Hot-swappable | 1+1 Hot-swappable | 1+1 Hot-swappable
Fan Number | N+1 Hot-swappable | N+1 Hot-swappable | 6+1 Hot-swappable | 6+1 Hot-swappable
Airflow | Back-to-Front | Back-to-Front | Back-to-Front (P2C) | Back-to-Front (P2C)

Conclusion

Both InfiniBand and Ethernet have their suitable application scenarios. Because InfiniBand dramatically increases data rates without making the CPU sacrifice more resources for network processing, it improves network utilization; this is one of the main reasons the InfiniBand network has become the main network solution for the high-performance computing industry. 1600Gbps GDR and 3200Gbps LDR InfiniBand products are also expected in the future. If there is no strict requirement for communication latency between data center nodes, and flexible access and expansion matter more, an Ethernet network can remain the choice for a long time to come.

