An Overview of the Ten Advantages of InfiniBand Networks
InfiniBand (IB), a name evoking "infinite bandwidth," is a network communication standard for high-performance computing, renowned for its exceptional throughput and minimal latency. Dominating the latest TOP500 supercomputer rankings, InfiniBand networks have grown in both number and performance, solidifying their position as the preferred interconnect technology. This article summarizes the ten key advantages of InfiniBand networks and introduces the InfiniBand products FS can provide for upgrading to an InfiniBand network.
Ten Benefits of InfiniBand Network
Why are InfiniBand networks so highly valued in the TOP500? Their performance advantages play a decisive role, and the sections below explain why.
Simplify Network Management
InfiniBand was the first network architecture designed natively for SDN: each subnet is managed by a subnet manager, which configures the local subnet and keeps it running. Every channel adapter and switch must implement a Subnet Management Agent (SMA) to communicate with the subnet manager. Each subnet requires at least one subnet manager to initialize the subnet and to reconfigure it when links come up or go down. One subnet manager is elected primary through arbitration, while the others run in standby mode, each keeping a backup copy of the subnet topology and monitoring the primary's health. If the primary fails, a standby takes over, so subnet operation is never interrupted.
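The election-and-failover behavior described above can be sketched as a small simulation. This is an illustrative model only, with made-up class names and fields; real subnet managers (such as OpenSM) implement this in firmware and software with far more state.

```python
# Hypothetical sketch: subnet manager failover modeled as a priority election.
# Names and fields are illustrative, not any real SM implementation.

class SubnetManager:
    def __init__(self, guid, priority):
        self.guid = guid
        self.priority = priority
        self.alive = True
        self.topology = {}   # standbys keep a backup copy of the subnet topology

def elect_primary(managers):
    """Arbitration: highest priority wins; GUID breaks ties in this sketch."""
    candidates = [m for m in managers if m.alive]
    return max(candidates, key=lambda m: (m.priority, m.guid))

sms = [SubnetManager(guid=1, priority=10), SubnetManager(guid=2, priority=5)]
primary = elect_primary(sms)
print(primary.guid)          # -> 1

sms[0].alive = False         # the primary fails...
primary = elect_primary(sms) # ...and a standby takes over
print(primary.guid)          # -> 2
```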
CPU Offload
CPU offloading is an important way to accelerate computing. The InfiniBand network architecture transfers data with minimal CPU involvement, achieved through:

- Offloading the entire transport-layer protocol stack to hardware.
- Kernel bypass with zero-copy data transfers.
- RDMA, which writes data directly from one server's memory into another server's memory without CPU involvement.
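The zero-copy idea in the list above can be illustrated with a small Python analogy. This is only a conceptual sketch: real RDMA uses verbs/ibverbs and NIC hardware, but `memoryview` shows the same distinction between duplicating a buffer and referencing it in place.

```python
# Illustrative analogy only: zero-copy semantics via memoryview, standing in
# for RDMA's copy-free data path (real RDMA uses verbs and NIC hardware).

payload = bytearray(b"sensor-data" * 1000)

# Copy path: converting a slice to bytes duplicates the data
# (analogous to the kernel copies RDMA avoids).
copied = bytes(payload[:11])

# Zero-copy path: a memoryview references the same buffer, no duplication.
view = memoryview(payload)[:11]

print(copied)                 # -> b'sensor-data'
print(view.obj is payload)    # -> True: the view shares payload's memory
```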
High Bandwidth
Since its inception, InfiniBand's link speeds have long outpaced Ethernet's, largely because InfiniBand interconnects servers in high-performance computing, where bandwidth demands are higher. Find more details about InfiniBand Network Bandwidth Evolution.
In addition, FS provides InfiniBand modules, DACs, AOCs, network cards, and switches at various speeds; see the table below for details. They are a solid choice of equipment for upgrading to an InfiniBand network, or you can Contact Us directly and our solution experts will propose solutions tailored to your specific needs.
| Speed | Transceivers | DAC/AOC | Network Adapter | Switch |
|---|---|---|---|---|
| 100G EDR InfiniBand | / | | | |
| 200G HDR InfiniBand | | | | |
| 400G NDR InfiniBand | / | | | |
| 800G NDR InfiniBand | / | | | |
Low Latency
Ethernet switches generally use MAC-table lookup and store-and-forward switching, a comparatively long processing pipeline. InfiniBand layer-2 processing is much simpler: the switch only needs to look up the forwarding path from a 16-bit LID, and cut-through switching shortens the forwarding delay to under 100 ns, far faster than Ethernet switches. At the network-card level, RDMA significantly reduces packet encapsulation and decapsulation latency.
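A back-of-envelope calculation makes the store-and-forward vs. cut-through difference concrete. The link rate, frame size, and header size below are assumptions for illustration, not figures from any specific switch.

```python
# Back-of-envelope sketch (assumed numbers): store-and-forward must receive
# the whole frame before forwarding, while cut-through forwards as soon as
# the header has been examined.

LINK_BPS = 200e9        # 200 Gb/s HDR link (assumption)
FRAME_BITS = 4096 * 8   # 4 KB frame (assumption)
HEADER_BITS = 64 * 8    # bytes inspected before cut-through forwards (assumption)

store_and_forward_ns = FRAME_BITS / LINK_BPS * 1e9
cut_through_ns = HEADER_BITS / LINK_BPS * 1e9

print(f"store-and-forward serialization: {store_and_forward_ns:.2f} ns")  # -> 163.84 ns
print(f"cut-through header latency:      {cut_through_ns:.2f} ns")        # -> 2.56 ns
```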
Scalability and Flexibility
One major advantage of IB networks is that a single subnet can accommodate up to 48,000 nodes, forming a massive two-tier network. Additionally, IB networks do not rely on ARP or other broadcast mechanisms, avoiding broadcast storms and wasted bandwidth. Multiple IB subnets can also be interconnected via routers and switches. IB supports multiple network topologies:
At smaller scales, a two-tier fat-tree topology is recommended; at larger scales, a three-tier fat-tree can be employed. Beyond a certain scale, a Dragonfly+ topology can reduce cost.
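The scaling trade-off between two- and three-tier fat trees can be sketched with the standard non-blocking fat-tree capacity formulas. The radix of 64 is an assumption chosen to match common 64-port switches, not a figure from the text.

```python
# Rough capacity sketch for non-blocking fat-tree topologies built from
# radix-k switches. These are the standard fat-tree bounds; k = 64 is an
# assumption for illustration.

def fat_tree_hosts(k, tiers):
    """Maximum hosts in a non-blocking fat-tree of radix-k switches."""
    if tiers == 2:
        return k * k // 2   # k leaf switches, each with k/2 host-facing ports
    if tiers == 3:
        return k ** 3 // 4  # classic three-tier fat-tree bound
    raise ValueError("sketch covers 2- and 3-tier trees only")

print(fat_tree_hosts(64, 2))   # -> 2048
print(fat_tree_hosts(64, 3))   # -> 65536
```

Moving from two to three tiers multiplies capacity by k/2, which is why larger clusters step up to a three-tier design.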
QoS
QoS is the ability to provide different service priorities to different applications, users, or data flows. InfiniBand implements QoS with virtual lanes (VLs): separate logical communication links that share a single physical link. Each physical link can support up to 15 standard virtual lanes (VL0–VL14) plus one management lane (VL15).
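A minimal sketch of how lanes sharing one physical link can be prioritized: a weighted arbiter that drains more packets per round from higher-weight lanes. The scheduling policy here is a simplification for illustration, not the InfiniBand VL arbitration tables.

```python
# Illustrative sketch: weighted arbitration across virtual lanes sharing one
# physical link. The policy is a simplification of real VL arbitration.

from collections import deque

def arbitrate(queues, weights, budget):
    """Send up to weights[vl] packets per VL per round, until budget is spent."""
    sent = []
    while len(sent) < budget and any(queues.values()):
        for vl, q in queues.items():
            for _ in range(weights.get(vl, 1)):
                if q and len(sent) < budget:
                    sent.append((vl, q.popleft()))
    return sent

# VL0 carries latency-sensitive traffic (weight 2), VL1 bulk traffic (weight 1).
queues = {0: deque(["a", "b", "c"]), 1: deque(["x", "y"])}
order = arbitrate(queues, weights={0: 2, 1: 1}, budget=5)
print(order)   # -> [(0, 'a'), (0, 'b'), (1, 'x'), (0, 'c'), (1, 'y')]
```

Higher-weight lanes get more link time per round, which is the essence of per-lane QoS.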
Network Stability and Resiliency
Although InfiniBand networks are very stable, some failures will inevitably occur during long-term operation. NVIDIA's Self-Healing Networking mechanism, built on IB switches, can reduce link-failure recovery time to 1 millisecond.
Load Balancing
Load balancing is a routing strategy that distributes traffic across multiple available ports, used in high-performance data centers to raise network utilization. Adaptive Routing provides exactly this capability, relieving network congestion and maximizing bandwidth utilization.
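At its core, adaptive routing chooses among equal-cost output ports based on current load. The sketch below shows that selection step with made-up port names and queue depths; real switches track congestion in hardware.

```python
# Minimal sketch of adaptive-routing port selection: among equal-cost output
# ports, pick the one with the shallowest queue. Port names and queue depths
# are hypothetical.

def pick_port(candidate_ports, queue_depth):
    """Return the candidate port with the least queued traffic."""
    return min(candidate_ports, key=lambda p: queue_depth[p])

queue_depth = {"p1": 40, "p2": 3, "p3": 17}
print(pick_port(["p1", "p2", "p3"], queue_depth))   # -> p2
```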
Network Computing (SHARP)
IB switches support SHARP (Scalable Hierarchical Aggregation and Reduction Protocol), a centrally managed technology that offloads collective communication, traditionally run on CPUs and GPUs, to the switch. This optimizes collective operations, eliminates repeated data transfers between nodes, and enhances accelerated-computing performance.
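The benefit can be illustrated with a toy in-network reduction: switches combine partial results so each value crosses each link only once instead of every host exchanging data with every other. The tree shape and sum operator below are illustrative, not the SHARP wire protocol.

```python
# Conceptual sketch of an in-network reduction in the style SHARP enables.
# The topology and reduce operator are illustrative only.

def reduce_at_switch(child_values):
    """A switch aggregates its children's partial results (sum-reduce here)."""
    return sum(child_values)

# Two leaf switches each aggregate two hosts; the spine combines the leaves,
# so host data traverses each link once instead of being re-sent host-to-host.
leaf_a = reduce_at_switch([1, 2])
leaf_b = reduce_at_switch([3, 4])
total = reduce_at_switch([leaf_a, leaf_b])
print(total)   # -> 10
```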
Diverse Network Topologies
InfiniBand networks offer support for diverse network topologies, including Fat Tree, Torus, Dragonfly+, Hypercube, and HyperX. These different topologies cater to varying needs, such as easy network scaling, reduced TCO, minimized latency, and extended transmission distances.
Conclusion
In summary, the evolution of InfiniBand technology has brought about revolutionary changes in data centers and high-performance computing environments. From its impressive bandwidth and low latency to its scalability and cost-effectiveness, InfiniBand exhibits clear advantages in various aspects. Opting for InfiniBand is a wise choice for those seeking superior performance, as well as a prudent consideration for economic efficiency and future growth. As technology continues to mature and evolve, InfiniBand is poised to maintain its leadership in the data transfer industry. If you have any questions about FS InfiniBand devices, you can always contact us for assistance.