
InfiniBand vs. RoCE: How to Choose a Network for an AI Data Center?

Posted on Dec 21, 2023

In recent years, AI has made significant strides, powering a wide array of applications such as natural language processing, computer vision, autonomous vehicles, virtual assistants, recommendation systems, and medical diagnostics. As AI applications evolve, data centers face escalating demands for low-latency, high-bandwidth networks that can efficiently handle complex workloads.

Introduction to InfiniBand Networks

An InfiniBand network carries data through InfiniBand adapters and switches. Its key components are the Subnet Manager (SM), InfiniBand network cards, InfiniBand switches, and InfiniBand cables.

NVIDIA is a major manufacturer offering a range of InfiniBand network cards, including the rapidly advancing 200Gbps HDR cards and the commercially deployed 400Gbps NDR cards. The following figure shows commonly used InfiniBand network cards.

[Figure: commonly used InfiniBand network cards]

InfiniBand switches don't run any routing protocols, and the entire network's forwarding table is calculated and distributed by the centralized Subnet Manager. In addition to the forwarding table, the Subnet Manager is responsible for configuring aspects within the InfiniBand subnet, such as partitioning and Quality of Service (QoS). To establish connections between switches and between switches and network cards, InfiniBand networks require dedicated cables and optical modules.
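
To make the idea of centrally computed forwarding concrete, here is a minimal Python sketch of a Subnet-Manager-style controller that computes every switch's forwarding table from a global view of the topology and then pushes the result out. The topology, switch names, and the simple BFS shortest-path rule are illustrative assumptions for this sketch, not the actual SM routing engine.

```python
from collections import deque

# Illustrative fabric: adjacency list of switches plus the hosts behind them.
# The topology and names are made up for this sketch; a real Subnet Manager
# discovers the fabric itself and runs its own routing engine.
links = {
    "leaf1": ["spine1", "spine2"],
    "leaf2": ["spine1", "spine2"],
    "spine1": ["leaf1", "leaf2"],
    "spine2": ["leaf1", "leaf2"],
}
hosts = {"hostA": "leaf1", "hostB": "leaf2"}

def compute_forwarding_tables():
    """Central computation: for every switch, map each destination host
    to the next-hop switch on a shortest path (BFS)."""
    tables = {sw: {} for sw in links}
    for host, edge_switch in hosts.items():
        # BFS outward from the switch the host attaches to.
        dist, nxt = {edge_switch: 0}, {}
        queue = deque([edge_switch])
        while queue:
            cur = queue.popleft()
            for nb in links[cur]:
                if nb not in dist:
                    dist[nb] = dist[cur] + 1
                    nxt[nb] = cur          # one hop back toward the host
                    queue.append(nb)
        for sw in links:
            if sw == edge_switch:
                tables[sw][host] = "local port"
            elif sw in nxt:
                tables[sw][host] = nxt[sw]
    return tables

# The "SM" distributes the result to every switch; switches only forward.
for switch, table in compute_forwarding_tables().items():
    print(switch, table)
```

The point of the sketch is the division of labor: switches hold only the tables they are given, while all path computation happens in one central place.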

FS has been an NVIDIA Elite Partner since 2022 and can provide a complete set of original products for InfiniBand lossless network solutions. FS's InfiniBand switches deliver 16Tb/s of aggregate switch throughput with sub-130ns switch latency, and FS's InfiniBand adapters support NDR, NDR200, HDR, HDR100, EDR, FDR, and SDR InfiniBand speeds. FS's InfiniBand transceivers cover connectivity requirements from 0.5m to 2km and come with free technical support. With superior customer service and products that reduce cost and complexity while delivering exceptional performance to server clusters, FS is your go-to choice.

Features of InfiniBand Network Solutions

Native Lossless Network

InfiniBand networks employ a credit-based signaling mechanism to inherently prevent buffer overflow and packet loss. Prior to initiating packet transmission, the sending end ensures that the receiving end possesses sufficient credits to accommodate the corresponding packet quantity. Each link in the InfiniBand network is equipped with a predefined buffer. Data transmission from the sending end is constrained by the available buffer size at the receiving end. Upon completion of forwarding, the receiving end releases the buffer, consistently updating the current available buffer size and transmitting it back to the sending end. This link-level flow control mechanism guarantees that the sending end never overwhelms the network with excessive data, effectively averting buffer overflow and packet loss.
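
The following minimal Python sketch illustrates the credit idea: the sender transmits only while the receiver advertises free buffer slots, and credits are returned as packets are forwarded on. The buffer size and packet counts are arbitrary assumptions for illustration; real InfiniBand exchanges credits per virtual lane in hardware.

```python
# Minimal sketch of credit-based, link-level flow control.
# Numbers (buffer slots, packet count) are arbitrary assumptions.

class Receiver:
    def __init__(self, buffer_slots):
        self.free_slots = buffer_slots      # advertised to the sender as credits

    def credits(self):
        return self.free_slots

    def accept(self, packet):
        assert self.free_slots > 0, "sender must never exceed credits"
        self.free_slots -= 1                # packet occupies a buffer slot

    def forward_one(self):
        # Once a packet is forwarded downstream, its slot is released and
        # the updated credit count is reported back to the sender.
        self.free_slots += 1


class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.pending = list(range(10))      # 10 packets waiting to transmit

    def transmit(self):
        sent = 0
        # Send only while the receiver advertises available credits,
        # so the receive buffer can never overflow and no packet is dropped.
        while self.pending and self.receiver.credits() > 0:
            self.receiver.accept(self.pending.pop(0))
            sent += 1
        return sent


rx = Receiver(buffer_slots=4)
tx = Sender(rx)

while tx.pending:
    burst = tx.transmit()                   # limited by credits, not by demand
    print(f"sent {burst} packets, receiver slots free: {rx.credits()}")
    for _ in range(burst):                  # receiver drains and returns credits
        rx.forward_one()
```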


Network Card Expansion Capability

InfiniBand's Adaptive Routing relies on per-packet dynamic routing, ensuring optimal network utilization in extensive deployments. Notable instances of large GPU clusters in InfiniBand networks include those in Baidu AI Cloud and Microsoft Azure.
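
A simplified sketch of the per-packet idea is shown below: each arriving packet is steered to whichever uplink currently has the shortest queue, rather than being pinned to one path by a per-flow hash. The queue model, port count, and traffic pattern are assumptions made only for illustration, not NVIDIA's actual implementation.

```python
import random

# Simplified sketch of per-packet adaptive routing: every packet picks the
# output port with the shortest queue at the moment it arrives, spreading
# load across all uplinks instead of pinning a flow to a single path.

NUM_UPLINKS = 4
queues = [0] * NUM_UPLINKS          # outstanding packets per uplink

def route_packet():
    port = min(range(NUM_UPLINKS), key=lambda p: queues[p])
    queues[port] += 1
    return port

random.seed(0)
for pkt in range(20):
    chosen = route_packet()
    # Each step, some ports drain a packet (uneven service creates imbalance
    # that the per-packet decision then compensates for).
    for p in range(NUM_UPLINKS):
        if queues[p] and random.random() < 0.5:
            queues[p] -= 1
    print(f"packet {pkt} -> uplink {chosen}, queue depths {queues}")
```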

Presently, several major providers offer InfiniBand network solutions and associated equipment in the market. NVIDIA dominates this sector with a market share exceeding 70%. Other significant suppliers include:

  • Intel Corporation: Providing a range of InfiniBand network products and solutions.

  • Cisco Systems: A well-known network equipment manufacturer offering InfiniBand switches and related products.

  • Hewlett Packard Enterprise (HPE): A prominent IT company delivering various InfiniBand network solutions, including adapters, switches, and servers.

These providers furnish products and solutions tailored to diverse user requirements, supporting InfiniBand network deployments across various scales and application scenarios.

Introduction to RoCE v2 Networks

While an InfiniBand network relies on a centrally managed system with a Subnet Manager (SM), a RoCE v2 network operates as a fully distributed network, comprising RoCEv2-capable NICs and switches, typically organized in a two-tier architecture.
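
For a sense of scale, the back-of-the-envelope calculation below estimates how many RoCE-capable NICs a non-blocking two-tier leaf-spine fabric can attach when every switch has the same port count. The 64-port radix is only an example, and real designs add oversubscription, multiple rails, and redundancy on top of this.

```python
def two_tier_capacity(ports_per_switch: int) -> dict:
    """Rough, non-blocking leaf-spine sizing.

    Assumes every switch has the same radix and each leaf splits its ports
    evenly between server downlinks and spine uplinks.
    """
    downlinks_per_leaf = ports_per_switch // 2   # half the ports face servers
    uplinks_per_leaf = ports_per_switch // 2     # half face the spines
    max_leaves = ports_per_switch                # one spine port per leaf
    return {
        "spine_switches": uplinks_per_leaf,
        "leaf_switches": max_leaves,
        "max_nics": max_leaves * downlinks_per_leaf,
    }

# Example: 64-port switches (radix chosen only for illustration).
print(two_tier_capacity(64))   # {'spine_switches': 32, 'leaf_switches': 64, 'max_nics': 2048}
```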


Various manufacturers offer RoCE-enabled network cards, with key vendors including NVIDIA, Intel, and Broadcom. PCIe cards serve as the predominant form of data center server network cards. RDMA cards generally feature a port PHY speed starting at 50Gbps, and currently available commercial network cards can achieve single-port speeds of up to 400Gbps.


Most data center switches currently support RDMA flow control technology, which, when paired with RoCE-enabled network cards, facilitates end-to-end RDMA communication. Leading global data center switch vendors, such as Cisco, Hewlett Packard Enterprise (HPE), and Arista, offer high-performance and reliable data center switch solutions to meet the demands of large-scale data centers. These companies possess extensive expertise in networking technology, performance optimization, and scalability, earning widespread recognition and adoption worldwide.

The heart of high-performance switches lies in the forwarding chips they employ. In the current market, Broadcom's Tomahawk series chips are widely utilized as commercial forwarding chips. Among them, the Tomahawk3 series chips are prevalent in current switches, with a gradual increase in switches supporting the newer Tomahawk4 series chips.


RoCE v2 operates over Ethernet, allowing for the use of both traditional Ethernet optical fibers and optical modules.

RoCE v2 Network Solution Features

In comparison to InfiniBand, RoCE presents the advantages of increased versatility and relatively lower costs. It not only serves to construct high-performance RDMA networks but also finds utility in traditional Ethernet networks. However, configuring parameters such as Headroom, PFC (Priority-based Flow Control), and ECN (Explicit Congestion Notification) on switches can pose complexity. In extensive deployments, especially those featuring numerous network cards, the overall throughput performance of RoCE networks may exhibit a slight decrease compared to InfiniBand networks.
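
One source of that configuration complexity is sizing the PFC headroom buffer: the switch must reserve enough buffer per lossless queue to absorb the data that keeps arriving after it sends a pause frame. The sketch below shows a rough estimate; the link speed, cable length, MTU, and response delay are illustrative assumptions rather than vendor-recommended values.

```python
# Rough PFC headroom estimate: buffer needed to absorb the traffic still
# arriving after the switch sends a PAUSE frame. All numbers below are
# illustrative assumptions, not vendor defaults.

LINK_SPEED_BPS = 100e9          # 100 Gbps link
CABLE_LENGTH_M = 100            # 100 m of fiber
PROPAGATION_MPS = 2e8           # roughly 2/3 the speed of light in fiber
MTU_BYTES = 4096                # jumbo-ish RoCE MTU
RESPONSE_DELAY_S = 1e-6         # sender's reaction time to the PAUSE frame

def pfc_headroom_bytes():
    # Data in flight on the wire, both directions: the PAUSE travels one way
    # while traffic keeps arriving from the other end.
    round_trip_s = 2 * CABLE_LENGTH_M / PROPAGATION_MPS
    in_flight = (round_trip_s + RESPONSE_DELAY_S) * LINK_SPEED_BPS / 8
    # Plus up to one maximum-size frame already being serialized at each end.
    return in_flight + 2 * MTU_BYTES

print(f"approx. headroom per lossless queue: {pfc_headroom_bytes() / 1024:.1f} KiB")
```

Getting such parameters right on every switch and priority class is exactly the kind of tuning effort that InfiniBand's built-in credit mechanism avoids.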

Various switch vendors provide support for RoCE, and presently, NVIDIA's ConnectX series of network cards commands a substantial market share in terms of RoCE compatibility.

InfiniBand vs. RoCE v2

From a technical standpoint, InfiniBand incorporates various technologies to enhance network forwarding performance, reduce fault recovery time, improve scalability, and simplify operational complexity.


In practical business scenarios, RoCEv2 serves as a good solution, while InfiniBand stands out as an excellent solution.

Concerning business performance: InfiniBand holds an advantage in application-level business performance due to its lower end-to-end latency compared to RoCEv2. However, RoCEv2's performance is also capable of meeting the business performance requirements of the majority of intelligent computing scenarios.


Concerning business scale: InfiniBand can support GPU clusters with tens of thousands of cards while maintaining overall performance without degradation. It has a significant number of commercial use cases in the industry. RoCEv2 networks can support clusters with thousands of cards without significant degradation in overall network performance.

Concerning business operations and maintenance: InfiniBand demonstrates more maturity than RoCEv2, offering features such as multi-tenancy isolation and operational diagnostic capabilities.

Concerning business costs: InfiniBand carries a higher cost than RoCEv2, primarily due to the elevated cost of InfiniBand switches compared to Ethernet switches.

Concerning business suppliers: NVIDIA stands as the primary supplier for InfiniBand, while there are multiple suppliers for RoCEv2.

Conclusion

In summary, when it comes to the intricate network technology selection process for intelligent computing centers, InfiniBand emerges as the preferred solution, offering substantial advantages to the computing environment.

InfiniBand consistently showcases outstanding performance and reliability, particularly in high-performance computing environments. Through the adoption of InfiniBand, intelligent computing centers can unlock high-bandwidth, low-latency data transmission capabilities, fostering more efficient computation and data processing. This, in turn, translates into the delivery of exceptional services and user experiences. Looking ahead, intelligent computing centers are poised to continue their exploration and adoption of advanced network technologies, consistently elevating computing capabilities and propelling scientific research and innovation forward.
