
Why Is InfiniBand Network So Important in HPC Data Centers?

Updated on Feb 20, 2023


HPC data centers are increasingly adopting the InfiniBand network. Driven by the rapid growth of high-throughput applications such as data analysis and machine learning, demand for high-bandwidth, low-latency interconnects is also spreading to a wider market. Compared to Ethernet, InfiniBand, a network technology designed for high-speed interconnection, has emerged as a rising star in numerous high-performance computing facilities worldwide.

Also check: InfiniBand Network Trend

InfiniBand Tutorial: What Is an InfiniBand Network?

InfiniBand is an open-standard network interconnection technology offering high bandwidth, low latency, and high reliability. It is widely used in supercomputer clusters. The InfiniBand network is used as a data interconnect both among and within switches, as well as an interconnect for storage systems.

The InfiniBand system consists of channel adapters (CAs), switches, routers, cables, and connectors. CAs are divided into Host Channel Adapters (HCAs) and Target Channel Adapters (TCAs). In principle, IBA switches are similar to other standard network switches, but they must meet InfiniBand's high-performance and low-cost requirements. InfiniBand routers segment a large network into smaller subnets and connect those subnets together. An HCA is the point at which an IB end node, such as a server or storage device, connects to the IB network. A TCA is a special form of channel adapter most often used in embedded environments such as storage devices.
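
Software sees these channel adapters through the InfiniBand verbs API. As a minimal sketch (assuming a Linux host with the rdma-core libibverbs library installed; build with cc list_hcas.c -o list_hcas -libverbs), the following lists the HCAs visible on a node:

#include <stdio.h>
#include <endian.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    /* Enumerate all channel adapters known to the verbs stack */
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++)
        printf("HCA %d: %s (node GUID 0x%016llx)\n", i,
               ibv_get_device_name(devs[i]),
               (unsigned long long)be64toh(ibv_get_device_guid(devs[i])));
    ibv_free_device_list(devs);
    return 0;
}

On a host with a Mellanox/NVIDIA adapter this typically prints device names such as mlx5_0, each corresponding to one HCA port group.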

Discovering InfiniBand Network in HPC Data Centers

InfiniBand is a high-performance input/output (I/O) architecture built around a switched fabric, so many links can carry traffic simultaneously. InfiniBand switches are available at a range of port speeds, such as QDR, FDR, NDR, and XDR InfiniBand switches, to support the full span of HPC data center workloads.

InfiniBand borrows the dedicated-channel model of mainframe computing, in which channels connect the mainframe to its peripherals and carry data between them. With a maximum packet payload of 4 KB, InfiniBand provides point-to-point, bidirectional serial links that can be aggregated into 4X and 12X units to achieve a combined useful data throughput of up to 300 gigabits per second.
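
To see where a figure like 300 Gb/s comes from, here is a short worked example. It multiplies the nominal per-lane signaling rate of each InfiniBand generation by the 1X/4X/12X link width, then applies the line-code efficiency (8b/10b up to QDR, 64b/66b from FDR onward); these are roadmap figures, not measurements:

#include <stdio.h>

int main(void)
{
    /* Nominal per-lane signaling rates and encoding efficiency */
    struct { const char *name; double lane_gbps; double eff; } gen[] = {
        { "SDR",  2.5,      0.8        },  /* 8b/10b  */
        { "DDR",  5.0,      0.8        },
        { "QDR", 10.0,      0.8        },
        { "FDR", 14.0625,   64.0/66.0  },  /* 64b/66b */
        { "EDR", 25.78125,  64.0/66.0  },
    };
    const int widths[] = { 1, 4, 12 };

    for (int g = 0; g < 5; g++)
        for (int w = 0; w < 3; w++) {
            double raw = gen[g].lane_gbps * widths[w];
            printf("%s %2dX: %7.2f Gb/s signaling, ~%6.1f Gb/s useful data\n",
                   gen[g].name, widths[w], raw, raw * gen[g].eff);
        }
    return 0;
}

A 12X EDR link works out to 12 x 25.78125 x 64/66, i.e. the 300 Gb/s of useful data cited above.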


To further promote the development of the HPC industry, many vendors have introduced InfiniBand products. FS provides 200G data center switches, including 200G InfiniBand switches, as one option for HPC data centers.

How Does the InfiniBand Network Meet HPC Data Center Needs?

Today's Internet is a massive infrastructure serving a variety of data-intensive applications, and large businesses are building HPC data centers to meet the demand for reliable computing over huge volumes of data. HPC depends on extremely high bandwidth between compute nodes, storage, and analysis systems, and latency is the other key performance indicator for an HPC architecture. HPC data centers have turned to the InfiniBand network because it satisfies both requirements: high bandwidth and low latency.

Bandwidth & Latency

Compared with RoCE and TCP/IP, InfiniBand shows clear advantages in high-throughput processing environments, especially those using server virtualization, blade servers, and cloud computing technologies.

Also check: RoCE vs InfiniBand vs TCP/IP

The Advantages of InfiniBand Network in HPC Data Centers

Whether in the evolution of data communication technology, the innovation of Internet technology, or the upgrade of visual presentation, progress depends on more powerful computing, larger and safer storage, and more efficient networks. Thanks to its distinctive advantages, the InfiniBand network provides higher-bandwidth network services while reducing latency and the computing resources consumed by the network transmission load, making it a natural fit for HPC data centers.

Network efficiency: The InfiniBand network (IB network) offloads protocol processing and data movement from the CPU to the interconnect, maximizing CPU efficiency for ultra-high-resolution simulations, large data sets, and highly parallelized algorithms. InfiniBand's transmission rate has reached 168 Gbps (12X FDR), far exceeding the 10 Gbps of 10 Gigabit Ethernet and even the 100 Gbps of 100 Gigabit Ethernet. In HPC data centers in particular, workloads such as Web 2.0, cloud computing, big data, financial services, virtualized data centers, and storage applications see significant performance improvements, shortening task completion time and cutting the overall cost of the process.
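
The offload works because an application pre-registers its buffers with the HCA, which then moves data directly to and from them without CPU copies. A minimal sketch of that registration step, using libibverbs with error handling trimmed (real code would go on to create queue pairs and exchange memory keys with the remote side):

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain */

    size_t len = 1 << 20;                    /* 1 MiB data buffer */
    void *buf = malloc(len);

    /* Pin the buffer and hand it to the HCA for zero-copy RDMA access */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

Once the remote side knows the buffer address and rkey, its HCA can read or write this memory directly, which is why the CPU stays free for computation.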

Higher bandwidth: While 100G Ethernet dominated the market, IB network speeds kept advancing, with 100G and 200G InfiniBand switches launched one after another to meet the high-performance requirements of HPC architectures. With their high bandwidth, high speed, and low latency, InfiniBand switches deliver high server efficiency and application productivity, making them an excellent option for HPC data centers.

Scalability: A single InfiniBand subnet can scale to 48,000 nodes at network Layer 2. Moreover, the IB network does not rely on broadcast mechanisms such as ARP, so it produces no broadcast storms and wastes no extra bandwidth. Different IB subnets can in turn be connected by InfiniBand routers.
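
Addressing reflects this design: within a subnet, packets are forwarded by 16-bit LIDs assigned by the subnet manager (no ARP-style broadcast needed), while each port's GID carries a 64-bit subnet prefix that routers use to reach other subnets. A short libibverbs sketch querying both, assuming port 1 of the first device:

#include <stdio.h>
#include <endian.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);

    struct ibv_port_attr pattr;
    ibv_query_port(ctx, 1, &pattr);   /* LID: subnet-local address */

    union ibv_gid gid;
    ibv_query_gid(ctx, 1, 0, &gid);   /* GID 0: subnet prefix + GUID */

    printf("LID 0x%04x (used for intra-subnet forwarding)\n", pattr.lid);
    printf("GID subnet prefix 0x%016llx (used by IB routers)\n",
           (unsigned long long)be64toh(gid.global.subnet_prefix));

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}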

FS offers NVIDIA Quantum™-2 NDR InfiniBand 400G and NVIDIA Quantum™ HDR InfiniBand 200G data center switches, available in both managed and unmanaged configurations. To meet diverse customer needs, the 400G switches come with service support options of one year, three years, and five years. Of course, for workloads that do not need InfiniBand's performance, Ethernet networks can also handle the data transmission, and FS provides Ethernet switches at multiple speeds to support your network build.

Conclusion

With its extreme performance and innovative technical architecture, the InfiniBand network can help HPC data center users maximize business performance. InfiniBand technology greatly simplifies the high-performance network architecture, reduces the latency introduced by multiple architectural layers, and provides strong support for smoothly upgrading the access bandwidth of key computing nodes. The general trend is for InfiniBand networks to enter more and more usage scenarios.
